
The Future of the Scientific Realism Debate: Contemporary Issues Concerning Scientific Realism

Author(s): Curtis Forbes

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 1-11.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.

EDITORIAL OFFICES: Institute for the History and Philosophy of Science and Technology, Room 316 Victoria College, 91 Charles Street West, Toronto, Ontario, Canada M5S 1K7. [email protected]. Published online at jps.library.utoronto.ca/index.php/SpontaneousGenerations. ISSN 1913-0465.

Founded in 2006, Spontaneous Generations is an online academic journal published by graduate students at the Institute for the History and Philosophy of Science and Technology, University of Toronto. There is no subscription or membership fee. Spontaneous Generations provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.

The Future of the Scientific Realism Debate: Contemporary Issues Concerning Scientific Realism

Curtis Forbes*

I. Introduction

“Philosophy,” Plato’s Socrates said, “begins in wonder” (Theaetetus, 155d). Two and a half millennia later, Alfred North Whitehead saw fit to add: “And, at the end, when philosophical thought has done its best, the wonder remains” (1938, 168). Nevertheless, we tend to no longer wonder about many questions that would have stumped (if not vexed) the ancients:

“Why does water expand when it freezes?”

“How can one substance change into another?”

“What allows the sun to continue to shine so brightly, day after day, while all other sources of light and warmth exhaust their fuel sources at a rate in proportion to their brilliance?”

Whitehead’s addendum to Plato was not wrong, however, in the sense that we derive our answers to such questions from the theories, models, and methods of modern science, not the systems, speculations, and arguments of modern philosophy. Like philosophy, science often begins in wonder; but when scientific inquiry has done its best, it would seem, the wonder does not always remain. Ex cathedra, the scientific image of reality found in our current best scientific theories, inculcated in our science classrooms, and artfully presented through more popular media proffers far more answers than, say, Aristotle’s treatises on natural philosophy ever could.

* Curtis Forbes recently received his PhD from the University of Toronto’s Institute for the History and Philosophy of Science and Technology. His on-going research project aims to make the scientific realism question more personal than is usual—rather than trying to determine which philosophical interpretation of science is best per se, his research asks how we might determine which interpretation is best for specific people. Not coincidentally, he spends much of his free time wondering whether, why, and how much he himself should believe about the modern scientific worldview.

These answers, furthermore, seem vastly more precise, accurate, deep, consilient, elegant, empirically sound, and widely accepted than any given by the modern scientific worldview’s alternatives and predecessors. Making sense of reality through the lenses of our current best scientific theories can leave a curious person feeling immensely proud of our collective epistemic achievements, incredibly intellectually satisfied, and utterly convinced that the picture of reality painted by these theories is true (at least approximately speaking).

But philosophers can’t help wondering: is that picture true, even as an approximation? Is our world really the way modern physical theory describes it, abuzz with various unobservable (though perhaps detectable) entities whose causal interactions make everything appear to us as it does? Is it the aim of empirical science to discover some hidden reality behind our everyday experience, and thereby satisfy our wonder? Is that even possible? These are distinctly philosophical questions, for although they are questions about the epistemic outcomes, long-term potential, and ultimate aims of scientific inquiry, scientific inquiry itself is unable to answer them satisfactorily. The theories of modern science, taken as true, provide extremely satisfying explanations when we wonder why ice has more volume than water, why iron turns to rust, or why the sun keeps shining day after day. But when we wonder whether these theories are true, whether the methods of empirical science could ever help us arrive at true theories in the first place, or whether scientific inquiry even aims to arrive at such theories, we have reached the limits of science and entered the domain of philosophy.

“Scientific realism” refers to any philosophical thesis that constitutes an affirmative answer to one or more of the questions above, e.g., by asserting that the theories of modern science paint an approximately true picture of unobservable reality, that scientific inquiry is capable of arriving at approximately true theories (even if it hasn’t yet), and/or that the governing aim of scientific inquiry is to arrive at such theories (even if it never will). Whether they understand it as a metaphysical, epistemological, or axiological thesis, casual conversation suggests that the average person finds some form of scientific realism to be intuitively correct; formal surveys establish that most academic philosophers do as well (Bourget and Chalmers 2014, 481). Nevertheless, realist interpretations of science certainly have their detractors, and the ongoing debate between realists and anti-realists has been a central part of philosophical thinking about science for at least the last four decades. Thinking earnestly about the merits of scientific realism now requires navigating contentious historiographical issues, being familiar with the technical details of various scientific theories, and addressing disparate philosophical problems spanning aesthetics, epistemology, metaphysics, and beyond. For all these reasons, reviewing the extant literature on the topic can be both fascinating and frustrating.

With that in mind, this special issue of Spontaneous Generations: A Journal for the History and Philosophy of Science collects over twenty invited and peer-reviewed papers under the title “The Future of the Scientific Realism Debate: Contemporary Issues Concerning Scientific Realism.” It was organized in the hope of providing a suitable forum for everyone to pause, catch their breath, and share their current thoughts on this topic. It should present a suitable place for any newly interested party to enter the debate and begin to make sense of the existing literature. Deeply immersed veterans whose thinking has shaped the course of the debate in recent years should find this collection valuable as well, including those who have contributed to it. Ultimately the hope is that this collection will help all curious parties better understand, appreciate, and reflect on the evolution of the scientific realism debate, its current state, and how we might move it forward fruitfully. I will count this collection as a success, however, so long as a few others find it as enjoyable to read as I have, even if, at the end, the shared wonder that drives the scientific realism debate remains.

II. Articles

For such a thoroughly discussed topic, it might seem strange that ANJAN CHAKRAVARTTY and BAS C. VAN FRAASSEN—the former a champion of scientific realism and the latter an opponent—have contributed a dialogue entitled “What is Scientific Realism?” investigating that very question. Scientific realism now comes in a variety of subtypes—semirealism, structural realism, entity realism, etc.—as its defenders have tweaked, qualified, and clarified the basic position to fend off various anti-realist challenges. But as Chakravartty and van Fraassen’s conversation makes clear, more abstract differences between people’s conceptions of scientific realism have not always been sufficiently appreciated. Thankfully, this dialogue clarifies the differences between each author’s conception of scientific realism, revealing some important common ground and better mapping the contested territory. Because it addresses such a basic question, and in doing so reveals the complexity of the dispute, Chakravartty and van Fraassen’s piece provides an ideal place to start for anyone reading through this entire collection.

JEFFREY FOSS addresses the debate over scientific realism with a refreshing combination of candour and irreverence. Foss reminds us that our realist commitments to science should be retail rather than wholesale, so “philosophers really should respond to the question, ‘Are you a scientific realist?’ with the question, ‘Which bit of science do you have in mind?’” (26). At the same time, people’s explicit pronouncements might not be the best way to determine their true ontological commitments, for their behaviour often implicitly presumes (or denies) the (likely) truth of various scientific theories. As Foss sees it, regardless of whether we assent to or dissent from a realist interpretation of science in the classroom, at conferences, or in our publications, each of us tends to treat as real whichever entities “really matter” (28) to us, pragmatically speaking. Such a pragmatic focus departs from the usual terms of the scientific realism debate, which generally focuses on epistemic questions, e.g., do we know that the theories of modern science are (approximately) true, as scientific realists assert we do?

Several of the papers in this issue begin by assuming the debate over such epistemic questions inevitably ends in a stalemate, a view Bradley Monton accurately described as a “growing consensus” (2007, 3) a full decade ago. Even so, there may still be pragmatic debates worth having about the relative utility of interpreting science realistically or anti-realistically, e.g., is it useful to believe that the theories of modern science are true? HASOK CHANG, for instance, thinks we should “face the fact that we cannot know whether we have got the objective Truth about the World (even if such a formulation is meaningful). Realists go astray by persisting in trying to find a way around this fact, as do anti-realists in engaging with that obsession” (31). Rather than treating scientific realism as a descriptive thesis, Chang looks to develop it as a kind of policy, a commitment to think and act in certain ways, a position whose virtues can be measured according to whether adopting it proves “useful for scientists and others who are actually engaged in empirical inquiry” (31).
THEODORE ARABATZIS also suggests that realism might be best evaluated pragmatically rather than epistemically, specifically by assessing its potential as a historiographical tool for making sense of scientific practice. While Arabatzis suggests that historians of science might find adopting a realist outlook methodologically useful in limited circumstances (cf. 2001, 2006), HARRY COLLINS maintains his longstanding position that sociologists of science must always maintain a “methodological relativism” (39) when studying scientific practice (though he’s come around to the idea that those same sociologists might find themselves “wallowing in realism” in other contexts, without contradiction). We can debate whether adopting an anti-realist attitude is a methodological necessity for the sociologist of science, of course, which raises an interesting point: even if we assume that the debate over scientific realism’s epistemic merits inevitably ends in a stalemate, we might still fruitfully debate its pragmatic merits, investigating when (and whether) adopting it proves useful for scientists, historians, sociologists, or anyone else; for the truth of scientific realism, and the utility of treating it as true, are surely distinct issues. ARTHUR FINE agrees that it’s futile to continue arguing about whether scientific realism is true, but he also believes it’s futile to investigate the utility of adopting a realist outlook, arguing that we’re unlikely to find people’s philosophical outlooks on science making any practical difference.


He argues that we should expect scientists, for example, to operate equally well as scientists whether they’re realists searching for true theories or anti-realists searching for something else, e.g., theories that are reliable tools for making predictions. Nevertheless, as several authors have pointed out over the years, this is an empirical matter, so conducting a thorough historical investigation would be the only way to determine whether realists and anti-realists typically practice science in distinctive and differentially fruitful ways (Hendry 1995, 2001; Forbes 2016; cf. Wray 2015; McArthur 2006). Depending on the results of such an investigation, working scientists might find that the historical record gives them a pragmatic reason to prefer one philosophical interpretation of science over all others. But, Fine argues, we shouldn’t expect the history of science to reveal any reason for anyone to prefer one interpretation over any other, and accordingly he offers a prescription for which he’s already well-known: stop trying to find any reason—pragmatic, epistemic, or otherwise—to be a realist or an anti-realist about science.

JOSEPH ROUSE likewise argues that the scientific realism debate is a fruitless endeavour, though his reasoning is different from Fine’s. As Rouse sees it, the various conflicts between realists and anti-realists only arise and seem pressing because both parties share specious philosophical assumptions. He identifies and challenges several such assumptions, arguing that once we give them up we’ll have no need to keep asking the kinds of questions to which realism or anti-realism would be an answer; if we think about it properly, he suggests, our desire to continue the scientific realism debate will simply vanish.

ALAN MUSGRAVE also has little interest in continuing to debate whether one should be a scientific realist, but not because the debate is futile or confused; rather, as he sees it, the debate is already settled, for realism about science is the “obvious, commonsensical view” (52). Thus, his concern is not with defending realism about science against anti-realist challenges, but rather with figuring out how to avoid slipping into “mad-dog dogmatic realism” while maintaining a more modest “lap-dog critical realism.” His considerations take him through many episodes from the history of science and several classic issues in epistemology and the philosophy of science: the problem of induction, the nature and limits of scientific explanation, the possibility of having justified beliefs in the face of uncertainty, the distinction between observables and unobservables, and so on. His guiding thought throughout is this: if science is to be believed, as the realist maintains it is, what does that entail about the way things are?

But HOWARD SANKEY points out that, prima facie, there is an epistemological problem with the claim that scientific realism is just common sense, for a realist interpretation of science actually contradicts our commonsense picture of the world.

The issue is that the evidence we have for the scientific picture of reality is based on observations made through the lens of our commonsense understanding of the world, so when our scientific worldview undercuts that commonsense worldview, it seems to discredit the evidential basis upon which it was initially accepted. Sankey ultimately believes that this apparent conflict between common sense and scientific realism is an illusion, but he reminds us that any would-be scientific realist must be able to explain how the sciences can reveal the truth about reality by first looking through a lens that makes reality appear very different from how it truly is.

Many would-be scientific realists have been especially preoccupied with evading an anti-realist challenge commonly known as the “pessimistic meta-induction,” most memorably pressed by Larry Laudan (1981). The idea, roughly, is that history shows that taking a realist attitude toward any contemporary scientific consensus is an unreliable belief-forming strategy, for anyone who believed in the truth of Aristotle’s physics, Ptolemy’s cosmos, Newton’s absolute space, Priestley’s phlogiston-based chemistry, Lavoisier’s conception of oxygen or heat, or innumerable other examples was eventually proven wrong. We should, so the argument goes, expect our current theories to meet the same fate, and accordingly refrain from interpreting modern science realistically. One contributor points out that this challenge to scientific realism has an extensive pedigree not yet fully appreciated, having been expressed not only by scientists like Poincaré but also by more popular writers like Tolstoy. Thus, trying to identify the true and belief-worthy portions of one’s contemporary scientific worldview, while acknowledging that scientific consensuses shift over time, is not a new concern, nor one that only vexes philosophers. The lesson for realists and anti-realists is the same: we make mummies of history if we portray historical scientists as naïve realists who felt it was impossible (or even unlikely) for their most stridently defended theories to be replaced one day by something better and radically different.

But to meet the challenge presented by the pessimistic meta-induction, realists need to develop a principled way of deciding which portions of any successful scientific theory are true, belief-worthy, and likely to be retained into the future, as scientists continue to develop better and better theories. P. KYLE STANFORD suggests that the scientific realism debate might be more productive if it focused less on debating whether our current best theories are approximately true, and more on debating whether it’s possible to predict the future development of science reliably, and thereby rebut the pessimistic meta-induction. The division between those who believe we can make such predictions and those who do not, Stanford maintains, is more important for understanding the scientific realism debate in its current form than the division between those who think modern science is at least approximately true and those who believe something else. Focusing on this issue might also be well-advised for pragmatic reasons.

Because realists and anti-realists have different expectations regarding the way science will develop in the future, JAMIE SHAW argues that “our adoption of a realist or anti-realist position has normative implications for scientific practices” (83). Using the Human Brain Project as a case study, Shaw argues—pace Fine but in line with the suggestions of Chang, Arabatzis, and others—that our philosophical commitments can also have normative implications for science policy making, e.g., regarding the allocation of research funds. In that case there would remain great pragmatic value in continuing to ask whether scientists, philosophers, policy makers, and others should interpret our current best scientific theories realistically, and whether realists can predict the future development of science reliably enough to rebut the pessimistic meta-induction.

Structural realists such as JAMES LADYMAN argue that the structure of well-established scientific theories is generally retained into whatever theories eventually replace them, for empirical theories are successful when their structure identifies “real patterns” in the modalities of nature. Ladyman offers structural realism as a way to avoid the pessimistic meta-induction, but he also claims it makes better sense of science as a successful and rational enterprise than anti-realist alternatives, providing a metaphysical account of both the various modal structures posited by our current best theories and the way those structures relate to each other at different scales.

ROBIN HENDRY rightly emphasizes in his contribution that epistemic concerns about scientific theory change have been raised by many working scientists throughout history. Through a discussion of the development of structural theory in the nineteenth century, Hendry shows how the field displays a much more cumulative progression than physics, especially in light of the interpretive caution expressed by the chemists who actually developed structural theory. Like Ladyman, Hendry suggests that physics need not be treated as the only discipline with ontological authority, and that by paying more attention and respect to the special sciences the realist will be able to paint a more conservative picture of scientific theory change that supports a more optimistic induction.

In any case, PETER VICKERS argues that the historical evidence fails to support the anti-realist’s pessimistic challenge to the realist. By showing that the realist can account for most instances of radical theory change, Vickers argues that “if the historical challenge is supposed to be a pessimistic induction, the realist is in a strong position since the inductive base looks rather weak. But if it is not an induction, then it seems to miss the mark, since the realist proposes a defeasible success-to-truth inference which will survive one or two historical counter-instances” (120). MARIO ALAI takes a less historically focused approach in his effort to defend a popular form of scientific realism—deployment realism—from one of its critics.

Scientific realists certainly need to ensure that their philosophies of science are supported by (or at least consistent with) the historical record of scientific change, but Alai’s paper shows that the more purely philosophical issues still matter. If scientific realism can’t serve the philosophical tasks it is generally meant to (e.g., by explaining the many successes of modern science), then there is really no reason to defend it against historical challenges from anti-realists.

Like Ladyman, many would-be scientific realists have developed various strategies for portraying well-established scientific theories as approximations of their successors, hoping to rebut the pessimistic meta-induction by understanding scientific theory change as a conservative process that vindicates properly considered realist commitments (e.g., Chakravartty 2007, Psillos 1999). But even if some such strategy succeeds, scientific theory change poses an additional challenge for any realist seeking to draw her metaphysical beliefs from the contemporary scientific consensus. As KERRY MCKENZIE points out, even assuming that our current best scientific theories will approximate whatever eventually replaces them, this doesn’t imply that the metaphysical framework we use to make sense of our current science will “approximate” whatever framework we’ll use to make sense of future science. Indeed, the realist might be able to argue that classical mechanics approximates quantum mechanics, but it’s hard to understand how the different metaphysical frameworks used to give realist interpretations of those theories—determinism and indeterminism, respectively—could be said to “approximate” one another in any meaningful sense. So, if scientific realism depends on interpreting our current best science through some metaphysical framework, yet we should expect that framework to be replaced by something entirely different when science changes in the future, then scientific realism seems to be built on rather shaky metaphysical foundations in the present.

K. BRAD WRAY suggests that realists haven’t actually managed to rebut the basic pessimistic meta-induction anyway, claiming that they are generally exaggerating “the anti-realists’ need for evidence. Any radical change of theory in a field threatens to undermine the realist’s claim that changes of theory generally preserve the successes of the theories they replace by appeal to the same mechanisms and entities” (144). This contrasts with Vickers’s position on this matter, and is reminiscent of TIMOTHY LYONS’s claim (2002) that the pessimistic meta-induction is not actually an inductive argument needing a plethora of cases to fuel it, as it is usually understood, but rather a deductive argument that can be carried through with a single case. Lyons’s contribution to this volume presents four challenges to the most common variety of scientific realism, which he calls “epistemic scientific realism,” i.e., the claim that our most successful scientific theories are approximately true.

He takes these challenges to be devastating, but he nevertheless maintains an axiological form of scientific realism, i.e., a realism about the aims of science: “in changing its theoretical systems, the scientific enterprise seeks, not truth per se, but an increase in a particular subclass of true claims, those whose truth—including deep theoretical truth—is experientially concretized” (148, italics in original). This claim need not be maintained as an article of faith, however, for it is “an empirical hypothesis ... a testable tool for further empirical inquiry” (149), one whose plausibility can be assessed through historical investigations of the way that scientists actually evaluate competing scientific theories.

PAUL TELLER raises another puzzle for the would-be scientific realist, for the “complexity of the world ensures the failure of our theoretical terms to attach to specific extensions, extension-determining characteristics, and properties and quantities generally” (157). When realists claim that the terms of their favoured theories refer to real things, he contends, this is at best an idealization, for these terms are too exacting to refer successfully to anything, strictly speaking. He goes on to sketch a way of understanding non-referring representations as true, suggesting that this can also help make sense of Ronald Giere’s provocative but underdeveloped “perspectival realism” (2006).

While NANCY CARTWRIGHT maintains we have “considerable evidence” (166) that the theoretical terms of our best scientific theories successfully refer, she holds that our most general (and most prized) scientific theories cannot be given a realist interpretation, insofar as that would mean taking such universal claims to be literally true, even approximately speaking. So, instead of asking whether (or arguing that) our best scientific theories are true, Cartwright proposes that realists should ask, “What image of the world makes intelligible the successes and failures of our theoretical practices?” (165). That image, she emphasizes, cannot be one in which we’ve simply discovered the true, universal laws by which Nature determines what will happen, for none of our most cherished theoretical principles can be rendered as both true and universal. Instead, she suggests, our theoretical practices help us make accurate predictions because Nature employs a much looser patchwork of local laws and highly abstract principles, determining what happens “just as we do, from representations and practices like ours” (172). Cartwright thinks this practice-centred understanding of how our most general theoretical principles relate to the world allows us to interpret many lower-level scientific models realistically, i.e., as true factual claims.

But the scientific models of the future might not be the familiar type she has in mind, for as CLIFF HOOKER and GILES HOOKER discuss, machine learning is increasingly being used in the construction of predictive models. The resulting models are generally black-boxed in a way that makes it difficult (if not impossible) to give anything like a realist interpretation of them.

While there are reasons for the realist to find this concerning, the use of machine learning is not obviously a bad thing from the perspective of someone seeking only to predict the behaviour of some system, device, or phenomenon. Would it have been better or worse, the Hookers ask us to consider, if Newton had used machine learning to develop a computer program capable of making incredibly precise and accurate predictions of how any system of massive objects will interact, but incapable of revealing the inverse square law of universal gravitation? In general, is it better to have theories and models that paint a conceivable picture of reality, or instead to have a set of ontologically uninterpretable algorithms that make far better predictions (so far as we know!)? Perhaps future scientists will come to prefer the latter, but here’s the issue: if scientific realists want to update their beliefs about reality as science itself develops, but the future of science is the production of models that permit no realist interpretation, what would that mean for the future of the scientific realism debate?

And the future of the debate over scientific realism is, of course, exactly what this collection is about. In the end, what it makes most clear is that there is no significant consensus regarding what the future of the scientific realism debate should be; as a corollary, however, it also makes clear that the future of this debate is full of possibilities.

III. Acknowledgements

Every single contributor deserves to be thanked not only for sharing their thoughts, opinions, reflections, and arguments with us all, but also for their patience as we worked to pull this collection together. Our lead copyeditor Adam Richter, our layout editor Alex Djedovic, our book review editor Chris Dragos, and our many referees, copyeditors, and members of the editorial board deserve special acknowledgement for all their hard work. On behalf of anyone who enjoys reading this collection, thanks to everyone who made it happen.

References

Arabatzis, Theodore. 2001. Can a Historian of Science be a Scientific Realist? Philosophy of Science 68(3): S531-41.

Arabatzis, Theodore. 2006. Representing Electrons: A Biographical Approach to Theoretical Entities. Chicago: Chicago University Press.

Bourget, David, and David Chalmers. 2014. What Do Philosophers Believe? Philosophical Studies 170(3): 465-500.

Chakravartty, Anjan. 2007. A Metaphysics for Scientific Realism. Cambridge: Cambridge University Press.

Fine, Arthur. 1986. The Shaky Game: Einstein, Realism, and the Quantum Theory. Chicago: University of Chicago Press.

Forbes, Curtis. 2016. A Pragmatic, Existentialist Approach to the Scientific Realism Debate. Synthese.

Giere, Ronald. 2006. Scientific Perspectivism. Chicago: University of Chicago Press.

Hendry, Robin. 1995. Realism and Progress: Why Scientists Should be Realists. Royal Institute of Philosophy Supplements 38: 53-72.

Hendry, Robin. 2001. Are Realism and Instrumentalism Methodologically Indifferent? Proceedings of the Philosophy of Science Association 68(S3): 25-37.

Laudan, Larry. 1981. A Confutation of Convergent Realism. Philosophy of Science 48(1): 19-49.

Lyons, Timothy. 2002. Scientific Realism and the Pessimistic Meta-Modus Tollens. In Recent Themes in the Philosophy of Science: Scientific Realism and Commonsense, eds. Steve Clarke and Timothy Lyons, 63-90. Dordrecht: Springer.

McArthur, Dan. 2006. The Anti-Philosophical Stance, the Realism Question and Scientific Practice. Foundations of Science 11(4): 369-97.

Monton, Bradley. 2007. Images of Empiricism: Essays on Science and Stances, with a Reply from Bas C. van Fraassen. Oxford: Oxford University Press.

Plato. 1931. Theaetetus. In The Dialogues of Plato IV, trans. Benjamin Jowett, 193-280. Oxford: Oxford University Press.

Psillos, Stathis. 1999. Scientific Realism: How Science Tracks Truth. London and New York: Routledge.

Whitehead, Alfred North. 1938. Modes of Thought. New York: The Free Press.

Wray, K. Brad. 2015. The Methodological Defense of Realism Scrutinized. Studies in History and Philosophy of Science Part A 54: 74-79.

What is Scientific Realism?

Author(s): Anjan Chakravartty and Bas C. van Fraassen

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 12-25.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.26992


Focused Discussion: Invited Paper

What is Scientific Realism?*

Anjan Chakravartty† and Bas C. van Fraassen‡

I. Abstract

Decades of debate about scientific realism notwithstanding, we find ourselves bemused by what different philosophers appear to think it is, exactly. Does it require any sort of belief in relation to scientific theories and, if so, what sort? Is it rather typified by a certain understanding of the rationality of such beliefs? In the following dialogue we explore these questions in hopes of clarifying some convictions about what scientific realism is, and what it could or should be. En route, we encounter some profoundly divergent conceptions of the nature of science and of philosophy.

II. BvF: When the term appears, as it does quite often, cursorily in an introduction or comment, it is easy to gain the impression that a scientific realist is a true believer, someone who holds currently accepted science to be true and its postulated unobservable entities to exist. If the entry is more guarded, that tends to be with qualifications such as “approximately” or “for the most part,” which leaves intact the main impression, that scientific realism is the opposite of science skepticism. If that were so, science-denying politicians would be philosophers, if scientific realists are. I find this very puzzling, for surely, in our present context, scientific realism is a philosophical position. Is it appropriate for a philosophical position to include answers to the sort of questions that scientists investigate? A philosophical position may certainly include some factual theses. For even if it is a stance, some cluster of attitudes, intentions, and commitments, that will be so if only because these will quite typically have and require factual presuppositions.

* Received July 31, 2016. Accepted July 31, 2016.
† Anjan Chakravartty is Professor of Philosophy and Director of the John J. Reilly Center for Science, Technology, and Values at the University of Notre Dame, and Editor in Chief of Studies in History and Philosophy of Science.
‡ Bas C. van Fraassen is McCosh Professor of Philosophy at Princeton University, Emeritus, and Distinguished Professor of Philosophy, San Francisco State University.

If a position is called “realist,” we are reminded first of all of the venerable problem of universals: the medieval realists, nominalists, and conceptualists disagreed on what was real, on what exists. None of these positions was simply and merely an assertion or denial of existence. Neither was the twentieth century debate about the reality of mathematical entities among Goodman, Church, and Quine. But their disagreements were centrally on questions of existence. So a belief in a specific scientific theory, such as Dalton’s or Bohr’s atomic theory, or in the reality of Dalton’s molecules or Bohr’s electrons, could not automatically disqualify a position from being philosophical. But there is a startling difference, in that here the entities are concrete, conceived to be as concrete as rocks and planets, and the theories are explicitly taken as hostage to the outcomes of future experiments, measurements, and observations. Could such a belief really be a matter of philosophical debate, or of metaphysical argument? How should we conceive of philosophy if we were to say that it is?

Universals, abstract entities, are not the sole historical background for an understanding of “realism,” at least not in the past century. The debates over classical versus intuitionist or constructivist mathematics, over modalities and counterfactuals, over possible worlds, over quantum mechanics—these have all had their “realist” and “anti-realist” sides. But note: to take a realist position on questions about counterfactual conditionals or modalities does not involve the assertion of any specific such conditional or modal statement. It may involve a factual thesis, but nothing comparable to the assertion that a specific theory is true. Much of the past century’s debates of this sort were cast within a “linguistic turn,” and so Dummett proposed a usage for “realist” adapted to this context: to take a realist position on some topic is to hold that a certain associated discourse has “objective” truth conditions. To explain this is not easy, but illustration is: for example, a non- or anti-realist position concerning counterfactual conditionals holds that they are to be evaluated in terms of not just the facts but certain contextually relevant parameters in the speaker or occasion of use. What Dummett thus terms realism is also called “semantic realism.” Scientific realism tends to come with semantic realism about the language of science, though with qualifications. (Think of McMullin on metaphor, Hesse on analogy, Suarez on representation, to name a few.) There are contraries to scientific realism that are not realist in Dummett’s sense, about the language of science, but at least one, constructive empiricism, is not so. Taking the language of science literally, and its truth conditions as objective, does therefore not amount to scientific realism. The question is what more it takes, and specifically whether that more will be a belief about contingent facts about concrete entities.

I started with the impression gained from cursory use of the term “scientific realist.” What of careful formulations in the literature? Hilary Putnam’s famous statement, that scientific realism is the only philosophy of science that does not leave the success of science a miracle, does not settle the matter. For even if we assent to it, we can arrive at a belief in the truth of any scientific theory only via Inference to the Best Explanation, which is one of the main bones of contention in the scientific realism debates. David Papineau begins his discussion with a characterization of realism, for any putative body of knowledge, as required to involve the conjunction of two theses:

(1) an independence thesis: our judgments answer for their truth to a world which exists independently of our awareness of it;

(2) a knowledge thesis: by and large, we can know which of these judgments are true. (Papineau 1996, 2)

Together these do not imply that anything thus regarded is true, only that we can know, that it is possible to know, whether or not it is. So this position of scientific realism could be held by someone who adds that we (could but) do not know, and that s/he does not believe. On this characterization a scientific realist may well be, to use Peter Forrest’s terms, not a scientific gnostic but a scientific agnostic. So here I am left still with this open question: why do practically all presentations of their side give the impression that being a scientific realist involves being a scientific gnostic? And secondly: what is needed, besides semantic realism about the relevant discourse, to arrive at scientific realism?

III. AC: One might be tempted to strengthen your thought, Bas, that scientific realists often give the impression—commonly associated with the idea that some theories, models, or claims derived from them are true—that having certain beliefs is part of what it is to be a scientific realist. Where some merely hint, others are explicit.1 Despite a number of finer-grained presentations of what realism can and should be, some of which may contest the notion that belief is central to scientific realism, it is safe to say, I think, that this notion is part of the currently received view. Is this puzzling? Does it not answer (in part) your question of what more is required, in addition to semantic realism in the realm of scientific discourse, for scientific realism?

1 Some examples of the latter are Smart 1963, Boyd 1983, Devitt 1991, Kukla 1998, Niiniluoto 1999, Psillos 1999, and Chakravartty 2007.

I would suggest that, on reflection, it is not so puzzling after all, though it is easily bound up with entirely reasonable and ample puzzlement about issues surrounding the nature of these beliefs. I would maintain as well that the addition of beliefs only partially answers the question of what more is required for scientific realism, for reasons that have emerged already, above. Let us consider first the notion that beliefs, often associated with truth claims, are part and parcel of scientific realism. (I will take for granted here qualifications in terms of approximate truth and deviations from truth via abstraction, idealization, and other representational practices, noting en passant that whether and when these qualifications are severe enough to compromise realism is one of the puzzles to be grappled with!) Indeed, historical conceptions of realism in other domains are helpful for starters: in each such case “realism” connotes the reality of something. Realism about universals is the view that properties conceived as abstract entities exist. Mathematical realism is the view that there are mathematical entities. Holding such views goes hand in hand with belief: realism about universals, or numbers, or x, connotes belief in universals, or numbers, or x. Now, certainly, universals and numbers are (putatively) abstract, and if the objects, events, processes, and properties described by scientific theories are anything at all, they are part of the world of the concrete. Affirmations of their existence are typically not merely supported with empirical evidence (a determined interlocutor might assert the same for universals, numbers, de re modality, and so on) but also take potential risks in the face of such evidence—even if this potential must await the future development of science for actualization. But does this difference qua risk mark a difference qua realism? It would not appear so. The abstract/concrete distinction, while relevant to questions of how telling certain kinds of evidence are for certain kinds of propositions, bears not at all on the notion that to endorse the reality of something involves believing it to exist.

Perhaps this misses the point in a spectacular way. No doubt it is one thing to believe that there are electrons, and quite another to think that such belief is philosophical. Herein lies a possible disanalogy with universals and their ilk: in cases where belief takes no risk in the world of empirical investigation, philosophical modes of assertion and denial may be the best we can do; conversely, putative scientific entities belong to the realm of the empirical, the domain of science, where belief is hardly a matter of philosophical debate. This diagnosis separates science and philosophy in an untenable way, however. Whether belief is the appropriate doxastic attitude to take toward entities putatively described by scientific theories and models is precisely what is at issue between scientific realists and some of their critics, for theories and models are multiply interpretable regarding questions of ontology.

Perhaps a scientific realist could hold beliefs of only a highly generic, noncommittal sort, abstaining from affirming any particular scientific descriptions. Consider the analogy of a mathematical realist who says that she believes in mathematical entities, but not in any of them in particular. In response to questioning about the number 7, or sets, she would say: “No, I don’t believe in any particular mathematical entity, I just believe that there are mathematical entities.” Likewise, regarding the things ostensibly at issue between scientific realists and antirealists of an empiricist or instrumentalist bent—things ostensibly too small or too big or otherwise undetectable by the unaided senses—our imagined scientific realist would say: “I don’t believe in electrons, or DNA, or major depressive disorder, or any other such thing, but I do believe in scientific entities that are undetectable using the senses alone.” How peculiar! Granted, peculiarity all by itself is rarely fatal. Perhaps we could accept as realists those who believe in the existence of a class of things while simultaneously having no beliefs in the existence of any members of that class. These extraordinarily mild-mannered realists would eschew supererogatory beliefs. But we have never thought of this sort of humility as constituting versions of realism. Locke (1975/1689) believed that there are real essences of objective categories of things in nature, but he did not think that we can know any. A fortiori, he did not believe assertions of real essences, satisfying himself with conventions in the form of nominal essences instead (book III, chapter III, §15). Locke offered an influential exemplar of antirealist accounts of natural categories. Belief in real essences at his level of abstraction was insufficient for realism.

The mention of conventions raises a further point about other facets of realism which are typically viewed as constitutive. There is the commitment to the idea of a mind-independent world, such that the things in which realists believe and the truth of propositions describing them do not depend on our thoughts. There is the related semantic commitment to objective truth conditions as well as to taking scientific descriptions at face value, rather than interpreting them as elliptical for some other domain of discourse. Puzzles abound in articulating these commitments, but acknowledging them is crucial to understanding the character of belief for scientific realists in contrast to others who are also happy to speak of scientific knowledge, but only under the auspices of contrary metaphysical or semantic commitments. With ample puzzles surrounding belief, we may yet hope to dissolve others: it is no wonder that politicians who deny scientific knowledge are not philosophers merely in virtue of denying scientific realism, a position that affirms scientific knowledge. Just as there are different ways of affirming science, not all of which are realist (the instrumentalist, the neo-Kantian, the pragmatist, and the constructive empiricist all affirm scientific knowledge in their own ways), there are different ways to deny realism about science.

Is it the metaphysical dimension that is problematic, or the semantic, or the epistemological, or some combination thereof, and how precisely? It is only when the denials are articulated that one wears a philosopher’s cap in this sphere. Thus we see that the opposite of scientific realism is not science skepticism, though certain forms of skepticism—regarding the existence of an external world, or objective truth conditions, or taking scientific claims at face value, or the epistemic reach of descriptions extracted from theories and models—are ways of articulating opposition. That these forms of skepticism need not amount to skepticism about science as a producer of knowledge and thus of belief is evident in the many admirers and champions of science who, all the same, do not find it within themselves to be scientific realists.

IV. BvF: You are quite right, Anjan, to point out that quite a few scientific realists take having those beliefs as part of what it is to be a scientific realist. What matters, though, is this: is it a distinctive part of that position, an identifying part, a crucial part, or just something incidental? Could someone be a scientific realist and not have such beliefs to the effect that certain unobservable entities are real, or that certain theories, which could be empirically adequate without being true, are actually true?2 The answer is yes, on my understanding of scientific realism as the view that the aim, the criterion of success in science is to arrive at true theories, rather than merely empirically adequate ones. This has no implication for whether that criterion is met in any particular case, or whether even our best theories today are successful by that criterion. I suspect quite strongly that the typical scientific realist self-image is confused. You mention, I think with reference to my brief discussion of Hilary Putnam’s and David Papineau’s formulation, “finer-grained presentations of what realism can and should be.” If confusions in the currently received view were to come to light, those might remain, and form the basis of a new scientific realist self-understanding.

First, then, believing any or all currently accepted scientific theories cannot be an identifying characteristic of scientific realism. We can imagine a scientific realist Raoul and a constructive empiricist Antiny both listening to some eminent scientist who takes the opportunity (acceptance of a Nobel Prize perhaps?) to outline what is currently accepted scientific theory.

2 Clifford Hooker told me, in a conversation in 1995, that he felt quite sure that none of the main currently accepted scientific theories are even empirically adequate, and that his version of scientific realism did not require any such belief.


Some things he mentions are obvious: the theory of evolution and the global warming scenario, for example. Many theories he mentions involve postulation of unobservable entities, and the scientist speaks very favourably of them.3 At this point each philosopher may state quite explicitly that he believes everything that this scientist said. In Antiny’s case that will tell us something quite independent of his philosophical position; what about Raoul’s? There will be one clear difference between them: not in what they believe about the natural world, but what they say about this belief. Raoul says his belief is based on the conviction that the great scientific successes described would be a miracle (something unexplained) otherwise and that it is part of his philosophical position that this cannot be. Antiny says that he arrived at his own belief in those very propositions by rationally permissible leaps of faith. He adds that, with respect to the success of the scientific enterprise, those beliefs are supererogatory. He even adds rather nonchalantly that if that success had been gained by useful fictions, so that those beliefs would be false, that would not bother him very much.

I’m sure that I telegraphed the punch a long time ago: the difference between these two philosophers appears at the meta-level of analysis and interpretation directed first of all to the question of what science is, and only secondly to what the norms of rationality are for belief. The truth of any scientific theory is just not what is really at issue in the scientific realism debates. If it is continually raised, that may be, whether unconsciously or intentionally, a damning association of empiricism with science skepticism. Perhaps someone might retort that Antiny could have kept his philosophical position while giving up those beliefs, whereas Raoul could not give them up without ceasing to be a scientific realist. But surely he could give up belief in any one part of the sciences without ceasing to be a scientific realist? Then why not if he becomes agnostic about the existence of anything postulated by the sciences that is unobservable?

The difference between Raoul and Antiny, if they reached this stage of debate, would definitely be at a meta-level. Raoul would hold that to be scientific, whether a professional or lay member of the scientific community (as we all aspire to be, after all), requires belief in some unobservable reality behind the phenomena.

3 Being a scientist rather than a philosopher he is guarded: he claims that classical physics, quantum mechanics, quantum field theory, and general relativity are accurate to at least a remarkable degree in specified domains. As to whether electrons are real, he says certainly, provided the question is understood in the sense in which it could be understood in quantum field theory, which in his opinion almost makes a question about individual electrons disappear altogether. Despite those nuances, however, he is including the reality of certain unobservable entities and the truth of certain statements about them in accepted scientific theory.


Antiny would deny this. But all that would be significant would be the rationale each offers for this difference. The rationale for Antiny would be that the actual criteria of success in scientific practice (which reveal its defining aim, its telos) are purely empirical. I can imagine a rationale for Raoul, but only one that I would regard as too far-reaching a metaphysics. And here I think we have come perhaps to the realization that behind these debates we see the great divide in philosophy, about our conception of philosophy itself, that has always pertained to the status of metaphysics.

I think I will take that bull—minotaur?—by the horns, and just say straightforwardly how it looks to me. Whether Norwegians, electrons, or witches exist, those are not philosophical questions. There are philosophical questions in the neighborhood: questions to be answered by conceptual analysis. For example, is a naturalized Norwegian citizen a Norwegian? Is there anything in quantum field theory that can properly be called by the same name as those negatively charged particles in Bohr’s model of the atom? But the questions of fact and existence are not philosophical questions: such questions we must bracket, to use Husserl’s term. Mutatis mutandis, this is the case for the truth of theories that imply the existence of concrete entities, such as scientific theories or empirical hypotheses generally. Applying this to the scientific realism issue, I will take as a typical scientific realist position one that Jose Diez recently described to you and me in correspondence:

(SSR—Selective Scientific Realism) Really predictively successful theories (i.e., that make true, novel, and risky predictions) have a part of their non-observational content that is:

(i) responsible for their successful predictions;

(ii) approximately true;

(iii) approximately preserved by posterior theories which, if more successful, are more truth-like.

I do not think that this, by itself, has any testable content that relates to its being a realist position. The factual content I see is a partial characterization of scientific practice:


[Norm] for T* to be a candidate for the status of successor theory to a given theory T, T* must “preserve” the empirical predictive success of T, and account for both the (limited) success of T and its (relative) failures in the empirical predictions, by showing how calculations based on T and empirical data approximate but do not reach the results of calculations based on T* itself.

This is not about what the world is like, nor about which current theories are true. Instead it is about norms that are in force, constitutive even, in scientific practice. This position is accountable to the relevant facts, to be sure. The relevant facts here are facts about scientific practice, and not facts about nature. I could say more about why, just by itself, SSR has no empirical content. But it will serve me better to just raise the question: how would one tease some out of this [Norm]? I surmise that to do so, realists would provide an account of “responsible for” (in the formulation of SSR) that has much to do with what they regard as “genuine” explanation rather than calculation, description, or empirical prediction. Don’t you think so? And that would effectively put the matter beyond testing.

Finally, to take up in closing the idea that we are touching on a deep and fundamental divide in philosophy, I want to emphasize that my concern with scientific beliefs as putative parts of philosophical positions is not really about academic compartmentalization, not about lines between philosophy and other disciplines, not about “turf.” Though it is easy to put it in such terms, the better way to think of it is to appropriate, momentarily, some much more traditional terminology. The philosopher’s only equipment is Insight and Reason. How much can Insight and Reason discover about the world, on their own? Kant insisted: a great deal, within our purely human/conscious sphere. Insight is possible into the sciences, mathematics, religion, politics, and the arts because we (jointly laymen and practitioners) engage, participate in these, and through this participation constitute what they are. But when it comes to the natural world, whether observable or hidden, Insight and Reason deliver nothing except the forms of our thinking about them. As an aspirant-empiricist, I take Kant’s point (with only the protest that he himself overestimated what can be gained even in that limited sphere). There truly is a great divide in philosophy still today, so I cannot expect all philosophers to agree to this. The seventeenth-century metaphysics that Kant thought he had defeated came back within twentieth-century analytic philosophy. But the Kantian philosophical orientation is hospitable to much: to what metaphysics can be now and equally to what empiricism can be now.


V. AC: Ah, now I think we have come to the heart of the matter—in fact, three matters whose clarification would go a long way to answering the question our title poses. Let me consider them in what I take to be an order of increasing philosophical depth. The first issue concerns what we call scientific realism and why. The second builds on the issue of naming but in a deeper way, concerning what a realist should think, if anything, about whether whatever distinctively realist beliefs she has are required or merely permissible. The third and deepest issue brings us to the heart of the heart of the matter: "metaphysics" and the philosophy of science.

When you ask, Bas, whether believing in (for example) the existence of specific unobservable entities is crucial or merely incidental to scientific realism, I am reminded that despite my suggestion of a currently received view, not all self-avowed scientific realists have such beliefs. One might ask whether these outliers are confused, or whether it is the majority that is confused in taking such beliefs to be crucial, as I think they do. Here we have a shallow but nonetheless important issue. Does "bachelor" refer to unmarried men? Well, yes, if that is what we have stipulated, and if we cannot agree on how to stipulate then there is no agreed fact of the matter. Let us think for a moment about what motivates the outliers, and the majority, and see if there is anything more here than disagreements about stipulation.

Earlier I labeled as "mild-mannered" those realists who believe in the existence of a class of things but have no beliefs in the existence of any member of that class. Some outliers who claim to be scientific realists are mild-mannered in this sense. John Worrall (2007), for instance, thinks that scientific realists should believe only the Ramsey sentences of theories meeting certain criteria. On this view one replaces various terms for things in which scientific realists commonly believe, like DNA molecules and electrons, with existentially quantified predicate variables, the striking move here being that the variables refer in only a highly indeterminate way—we are in no position to say to which thing or things in the world they refer. Likewise, a few self-avowed scientific realists, some no doubt inspired by your own (1980, 8) characterization of their view in terms of science aiming to give a literally true story of what the world is like, may think that endorsing this aim is sufficient, absent belief in any of the putative (unobservable) entities of scientific discourse, with the understanding that such endorsement fits naturally with the belief that there are such entities, if only generically speaking.

Now, are these Ramsey-sentence realists and aspirational realists also scientific realists? One can imagine a motivation for thinking so. When they claim that there are unobservable things in the world, even without believing in any one of them in particular, their beliefs extend beyond those of scientific antirealists such as constructive empiricists (those of them who are agnostic), logical empiricists (who translate terms for unobservables into terms for observables), and other instrumentalists (those who attribute no semantic content to terms for unobservables at all). But one may also understand the exclusionary motivation of those who take the relevant beliefs to be crucial to realism.
For one thing, there is the peculiarity I noted earlier of insisting that there is a class of things and yet abjuring belief in any member of the class, since often it is at least some alleged knowledge of the latter that serves as evidence for the former. For another thing, the generic idea that there are unobservable entities is hardly distinctively scientific; most scientific realists think that the distinctive contribution of science is to tell us which ones there are. With this in mind, perhaps it is no surprise that the mild-mannered are regarded by the majority as overly timid, epistemically speaking. When the commitments involved are watered down, the very idea becomes unpalatable to many. At a certain point in the effort to make healthy choices, the "light" mayonnaise becomes so divested of anything flavourful that most people are apt to say: "That just isn't mayonnaise anymore." The reason that the received view is in the majority is that even if one were to grant that by going just beyond what a scientific antirealist would say, one is thereby a scientific realist, most think that such a position is not realism enough to qualify as scientific realism in their sense. But now we are simply legislating for terms.

That said, acknowledging that there is an element of stipulation about the currently received view is not to grant that it is confused. There is a confusion in the neighbourhood, but I would suggest that it is not, Bas, the one that you suspect. The actual confusion is one of conflating the contrast between scientific realism and antirealism with a contrast between impermissive and permissive accounts of rational belief. These two contrasts are, I believe, independent of one another, and once we appreciate this, the thought that there is an inherent confusion in taking belief in at least some (unobservable) scientific entities to be an important part of scientific realism should, I think, dissipate.

Let me broach this contention with our friends, Raoul and Antiny. You present one telling difference between them qua scientific realist and constructive empiricist, respectively, in terms of meta-level thinking about norms of rationality. This is a crucial reminder of something that once escaped me: it is a view about the rationality of belief, and not agnosticism about unobservable things (in the context of science), that is essential to constructive empiricism. So Raoul holds that rationality requires some beliefs about scientific unobservables where Antiny takes these beliefs to be rationally permissible, not compelled—but here is the conflation! If Raoul and Antiny differ in their conceptions of rationality, this is idiosyncratic to

them, not to scientific realism and antirealism. Regarding rationality, Bas, I'm with you. I too am a voluntarist in epistemology and see as mistaken any suggestion to the effect that believing in molecules of DNA is somehow irresistibly impelled by constraints of rationality. I am a scientific realist nonetheless because I do believe in such things, and here I would say that I stand apart from antirealists, who choose not to extend belief in this way. Thus we lay bare two very different conceptions of scientific realism.

Raoul and Antiny share some beliefs about scientific unobservables and differ in some beliefs about what rationality requires. What about Antiny's sister, Anandi? She feels the same way as Antiny about rationality (pace Raoul) but is agnostic about scientific unobservables. On your understanding of the distinction between scientific realism and antirealism, where conceptions of rationality are telling, Raoul is a realist and the siblings are antirealists (of the constructive empiricist sort). On my understanding, where belief is telling, Raoul and Antiny are scientific realists and Anandi is not. This, I maintain, is what puts the "real" into "realism": a belief in the reality of something. Having such belief is necessary and sufficient, and this is precisely what Anandi lacks. A scientific realist might take her belief in DNA to be rationally compelled or merely rationally permissible—it is the naked fact that she believes that makes her a scientific realist about at least some part of molecular biology. If Antiny thinks that this belief is rationally permissible but not compelled, it would not be this fact but rather the fact that he is agnostic about the existence of DNA molecules that would reflect his antirealism, if he were in fact agnostic (which, as it turns out, he is not). It may be tempting to conflate scientific realism with impermissive norms for rational belief because some people offer rational compulsion as an argument for beliefs associated with scientific realism. This is a poor argument, I believe, but it is merely one among others. It is not constitutive of scientific realism.

Could Raoul give up his beliefs in DNA and yet retain his credentials as a scientific realist? Certainly, but only if he has some other such beliefs in connection with some other scientific descriptions of the world, in which case he would be a scientific realist about those things but not with respect to the relevant part of molecular biology.4 Let us be careful, though, about what this entails. Surely Raoul need not think that to be scientific requires belief in (some specifics of) an unobservable reality behind the phenomena. There is far too much variation in scientists' own attitudes toward their endeavor, historically and today, for anyone to put such constraints on what

4 We should remember here that belief for the scientific realist may extend only so far as approximate truth, and since a merely approximately true theory is strictly speaking false, this affords some leeway—enough, for example, to render Hooker's revelation (see footnote 2) compatible with scientific realism.

it means to be scientific. Science is as science does, but what science achieves, epistemically, is open to interpretation by scientist and philosopher alike. I admit to deriving a little too much pleasure, when teaching my very first course, in putting up a quotation by a local hero cosmologist in which he claimed not to understand what people are asking when they inquire about the truth of his theories—all he is trying to do is come up with models for the data. This sort of view has a celebrated history in the thinking of many, not least scientists, who have sought to validate and endorse the sciences: Mach, Duhem, ... No one should think that being scientific requires belief regarding unobservable aspects of reality, but if one interprets science as telling us about such things, one is a scientific realist.

In interpreting the outputs of science this way, is the scientific realist a metaphysician? Here we come to perhaps the deepest matter of all. Ultimately I think that the answer to this question is "yes," though the scope of this affirmation is easily misconceived. Some apparently explicit targets of scientific investigation, such as proteins and atomic nuclei, cannot be detected using our unaided senses alone, and if believing in such things is metaphysical even though the beliefs themselves are highly informed by and responsive to empirical evidence, so be it. But this does not commit a scientific realist to any particular beliefs regarding what are, arguably (at best), implicit subject matters of scientific investigation, such as the nature of properties or laws of nature, let alone other subjects of contemporary metaphysics, many of which pay the empirical dimensions of the sciences little if any regard. Even so construed, some scientific realist beliefs are surely, if properly called "metaphysical," metaphysical in a pre-Kantian way, in that they pertain (in intention, at least) to the world itself, the noumenal world, not merely the world as we fashion it through human ways of knowing. This is not to deny the use of human categories of description and other conventions, but to suggest that their use is compatible with at least some things so categorized, and even some of the categories themselves, being "out there" independently of human thought. Holding such beliefs—whether about the existence of Norwegians or electrons—is often philosophical because the forms of analysis and interpretation sometimes invoked to determine whether any given instance of a person or a subatomic particle fits these categories are themselves philosophical. There is a reason that so many philosophers of science were scientists. They were the ones who knew how to make their otherwise implicit philosophical commitments, on which the multiple interpretability of the sciences rests, explicit.

Anjan Chakravartty University of Notre Dame 100 Malloy Hall

Notre Dame, IN 46556 USA [email protected]

Bas C. van Fraassen San Francisco State University 1600 Holloway Ave. San Francisco, CA 94132 USA [email protected]

References

Boyd, Richard N. 1983. On the Current Status of the Issue of Scientific Realism. Erkenntnis 19: 45-90.
Chakravartty, Anjan. 2007. A Metaphysics for Scientific Realism: Knowing the Unobservable. Cambridge: Cambridge University Press.
Devitt, Michael. 1991. Realism and Truth. Oxford: Blackwell.
Kukla, André. 1998. Studies in Scientific Realism. Oxford: Oxford University Press.
Locke, John. 1975. An Essay Concerning Human Understanding, ed. P. H. Nidditch. Oxford: Oxford University Press.
Niiniluoto, Ilkka. 1999. Critical Scientific Realism. Oxford: Oxford University Press.
Papineau, David, ed. 1996. The Philosophy of Science. Oxford: Oxford University Press.
Psillos, Stathis. 1999. Scientific Realism: How Science Tracks Truth. London: Routledge.
Smart, J. J. C. 1963. Philosophy and Scientific Realism. London: Routledge & Kegan Paul.
van Fraassen, Bas C. 1980. The Scientific Image. Oxford: Oxford University Press.
Worrall, John. 2007. Miracles and Models: Why Reports of the Death of Structural Realism May Be Exaggerated. Royal Institute of Philosophy Supplements 82: 125-154.

Feyerabendian Pragmatism

Author(s): Jeffrey Foss

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 26-30.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.


Focused Discussion Invited Paper

Feyerabendian Pragmatism

Jeffrey Foss*

In the not-too-distant future the scientific realism debate will be absorbed into the far more ancient-and-venerable, old-and-unqualified, realism debate. The first efficient mover of this absorption will be the fact that scientific ontology is a growing and very mixed bag, including not just rocks, plants, animals, and stars, but the Higgs boson, the Big Bang, evolutionary pressures, teenage anxieties, economic growth, social trends, countries, industrial toxins, and hedge funds. Trying to hedge off these ever-stranger newcomers by such moves as castling the debate within well- (or best-) established, mature (two decades old? five?), basic physics is to submit to a biased umpire, with a narrow strike-zone, to get to an arbitrary first base.

The large-scale conceptual migrations underlying this future change include increasing diversification (as opposed to unification) of the sciences into new fields (cosmology, computation, , climate) with strange realities, entailing increasingly well-established internal tensions among practising scientists (gradualism versus punctuated equilibrium, Copenhagen interpretation versus quantum realism, CO2 warming versus solar-driven variability, etc.), and ever more integration of scientific theories into different domains of popular culture. Lo! The time is upon us when we (self-identified) philosophers really should respond to the question, "Are you a scientific realist?" with the question, "Which bit of science do you have in mind?"

In retrospect we will see, and even now we can see, that philosophical answers to the question of scientific realism are treating the word "real" as an honorific title. The everyday scientific realist is saying something like, "Science is just so cool (epistemically and metaphysically speaking) that I believe what it says," while the scientific hyper-realist (such as ) bombastically adds, "and only what it says." The everyday scientific anti-realist doesn't think science is all that cool, and thus denies

* Jeffrey Foss was so smitten by philosophy as an undergraduate that he abandoned physics, where he excelled, for the pursuit of wisdom, where he encountered difficulty. Though he made a profession of this philosophical pursuit, he never abandoned the romantic realism of physics: its love of sensory evidence, its continually hopeful invention of instruments which extend and improve our sensory mechanisms’ causal contact with the world, and, most importantly, its use of the dogs of data for herding the sheep of theory.

that everything science talks about is real, although which part of science (and its ontology) s/he rejects varies. Since physics writes what it and all orthodox scientists take to be the foundational ontology for all of science, the ontological focus is on it—but it is not nearly as clear, sturdy, steady, or near-fetched these days as it was in Newton's heady times. To paraphrase the poet, this new, solid physical stuff can "melt, thaw, and resolve itself into a dew"—albeit a dew of black holes—or some even stranger quasi-substance that physics will come up with tomorrow. Black holes are but one magical stuff of new physical theory (or physical ideology as Feyerabend sees it), and I sympathize with ontologists who pick and choose within its bestiary. Bas van Fraassen famously suggests we needn't go there at all, as accepting a theory only requires believing that the perceptible (visible, audible, etc.) things it refers to exist—so light exists but retinal cells, awkwardly, might not.

In the future we will see that there has been all along a very personally pragmatic thread to the scientific realism debate: a philosopher dubs "real" the entities postulated in scientific theories (or hypotheses) only if they satisfy the ontological intuitions underlying her/his ontological leanings. All the while, the reality of philosophers, their theories, and the journals they publish them in is merely presupposed without question—along with the metabolic chemistry of those who toil in the philosophical and publishing professions. A richer, more Feyerabendian, ontology than that which is professed is unconsciously presupposed by many an ontologist. Like C.S. Peirce, I think that we must look at a person's whole behavior to see what she or he really believes, not just the verbal bit of it. Like my philosophical grandfather, I believe that real belief (versus expressions of belief) guides behaviour. That's its biological function. What you believe is what you act on and live by. It is a working part of the info-bio-mechanism you are. Professed beliefs are more often restricted to verbal behaviour alone.

First, to say that "real" is an honorific expresses the pragmatic core of my ontology. In terms Wilfrid Sellars and both Churchlands might, I hope, like: when I accept a model (scientific or otherwise) that really matters to how I act and live, then I grant (whether explicitly or implicitly, consciously or unconsciously) reality to the things of that model. I honor what really matters to me by treating it as real. Put epigrammatically:

The Real = All That Really Matters.

Secondly, I accept as true Feyerabend's postulation of the "Abundance of Nature," which I (perhaps mis-) understand as follows: the world (reality as a whole) contains such an abundance of things that we are generally able to find bodies of evidence in favour of almost any theory we may dream up. Peoples have thrived believing the craziest, wildest things. When it comes to professional knowledge, i.e., science, there has been abundant evidence for every model that ever got into the major leagues, including those that have long since been smashingly falsified, like Newton's mechanics, Ptolemy's astronomy, and ancient flat-Earth geography. But—and this is a history those who cleave to the "mature science" form of realism do not want to learn from—the flow of humanly accessible evidence is anything but monotonic. The evidence for any model, including Ptolemy's and Newton's, may together form a living, growing body for centuries, and seem immortal, but eventually cancerous falsifications appear. Conversely, the sun-centered universes of Philolaus and Aristarchus can lie as dead as a Norwegian Blue parrot for centuries only to have life breathed back into them by sky-gazers like Copernicus, telescope-makers like Galileo, and model-spinners like Newton. I believe this is inevitable for the following Feyerabendian reason: there is more in the world (the Abundance of Nature) than can be captured in any representation, model, map, theory, mind, brain, or philosophy. Much must be left out. Leaving the right (i.e., the messy, murky, troublesome) stuff out is the first decision of any successful modeller. So the unexplainable always comes knocking for any theory, ideology, mapping, or representation—no matter how high-born or well-born or obvious. Even the impossible may turn up—and grumpily refuse to go away (e.g., the Earth moves, or vanishes by gravitational collapse).

As for scientific realism, my Feyerabendian Pragmatism brand of ontology says that bacteria, viruses, molecules, , electrons, and electromagnetic forces are real because they really matter to me (just as much as do dismissive philosophical reviews). Those who work with such scientific things (not the reviews) created the computer on which I write this essay, and since this essay is real, those scientific things on which both computer and essay depend must have significant reality as well. On the other hand, my pragmatism, schooled by William James, requires that my beliefs be seen as having different sorts and degrees of reliability. I trust my roadmap to get me where I am travelling, but not to reveal the detailed shape of a river or the real colour of its water. I place more confidence, generally speaking, in my manifest model,1 my sensory representation of the world, wherein my perceptions and my theory of everyday objects cohere, than I place in the scarcely visible and generally less relevant postulates of science. I am more confident in the reality of plants, animals, and inanimate objects than I am in the atoms of which I believe (less confidently) they are made. When it comes to explicating the different sorts and degrees of

1 I develop this idea of Wilfrid Sellars a bit more in Foss (2000)—in ways I hope he would like. See especially Ch. 2.

reliability among my beliefs, there is little by way of sweeping principle and much by way of homely, boring qualifications, details, provisos, and the uncounted devils among the details. I do not believe there exist different degrees of reality, but I do believe there are different sorts of entities. Living things, for instance, are really different from nonliving things. How many sorts of entities? I see no best way to count them, indeed no practical way to count them at all (i.e., there is, practically speaking, an uncountable abundance of kinds of things)—and I leave the business of ordering the whole dang shooting match to more ambitious ontologists than me. But, for what it's worth, I believe that last-minute corrections are just as real as dogs. I think that hunger and satiety are just as real as food, even though what counts as food is species-specific (hay for the ass, bread for me, etc.), hence that different species experience different realities (all of which have a place in the whole). Venturing beyond science into ordinary language, rainbows are real, though they clearly are not physical objects, but visual phenomena. Like colours, they are phenomena that exist by virtue of that part of my nervous system that informs me of the world that is present to me in both time and space. I'm still not clear whether van Fraassen sees rainbows as real. Speaking of whom, I dare venture even into the domain of logic by believing my decision to write this essay was real, though the decision I might have made not to write it was never real, though it might have been. Who could fail to agree—if not too bored to read?

Finally, I speculate that this ever-creative, constantly-evolving, endlessly-happening universe comprises a specific ontological unity at any one time and over all of time. That's what my total experience so far leads me to think, though I must in honesty hasten to add that even if this is really true, I will never have a way of proving it, not even to myself, much less to others. Even if there is only one world, there are indefinitely many ways of experiencing it and knowing it. The abundant font of reality constantly overflows our primitive vessels of epistemological containment. There is more to it than meets the eye, or all the eyes, microscopes, telescopes, cameras, Broca-readers,2 and so on that will ever exist.

Jeffrey Foss University of Victoria P.O. Box 3045 STN CSC Victoria, BC V8W 3P4

2 The Broca-reader is a science-fictional device I devised as a Dennett-ian intuition-pump in Foss (1995). A Broca-reader reveals to scientific observation the purely verbal aspects of our private consciousness, those things we say “in our hearts,” to ourselves alone, in a specific language, with specific words.


Canada [email protected]

References

Foss, Jeffrey. 1995. Materialism, Reduction, Replacement, and the Place of Consciousness in Science. Journal of Philosophy 92(8): 401-29.
Foss, Jeffrey. 2000. Science and the Riddle of Consciousness: A Solution. Dordrecht: Kluwer Academic Publishers.

Realism for Realistic People

Author(s): Hasok Chang

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 31-34.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.27002


Focused Discussion Invited Paper

Realism for Realistic People*

Hasok Chang†

Why should anyone care about the seemingly interminable philosophical debate concerning scientific realism? Shouldn't we simply let go of it, in the spirit of Arthur Fine's "natural ontological attitude" (NOA) (Fine 1986, chs. 7-8)? To a large extent I follow Fine, especially in relation to the endless arguments surrounding realist attempts to show that the impossible is somehow possible, that empirical science can really attain assured knowledge about what goes beyond experience. It is time to face the fact that we cannot know whether we have got the objective Truth about the World (even if such a formulation is meaningful). Realists go astray by persisting in trying to find a way around this fact, as do anti-realists in engaging with that obsession. I cannot do better than Fine.

Yet I find it difficult to let go of the "realist" label. I feel that it has been hijacked by the most unrealistic of people. I would like to rescue the notion of realism from the so-called realists, and re-conceive it as something useful for scientists and others who are actually engaged in empirical inquiries. If that fails, we will still have NOA.

I do not take scientific realism as a descriptive thesis. Like Bas van Fraassen, I take it as a statement about the aims of science (Van Fraassen 1980, 8). More actively, I take it as a policy, or even an ideology. "Active realism," as I have termed it, is a commitment to do our utmost to learn as much as possible about reality (Chang 2012, section 4.2). One might ask: who would possibly object to such a commitment—and is a position worth defending if no one would object to it? But there are in fact a great number of people who go against active realism. Anti-realism, so understood, can take any of the following forms: seeking the comfort of affirming what we already believe, instead of subjecting our beliefs to the most informative tests; preferring the simplicity or elegance of theories for its own sake (rather than for the possible side-effect of facilitating further

* Received August 12, 2016, Accepted August 12, 2016
† Hasok Chang is the Hans Rausing Professor of History and Philosophy of Science at the University of Cambridge. He received his degrees from Caltech and Stanford, and has taught at University College London. He is the author of Is Water H2O? Evidence, Realism and Pluralism (2012), and Inventing Temperature: Measurement and Scientific Progress (2004). He is a co-founder of the Society for Philosophy of Science in Practice (SPSP), and the Committee for Integrated History and Philosophy of Science.

inquiry); insisting that having a consensus is more valuable than allowing all perspectives that can lead to new learning; making only fictional or idealized models to do a particular job at hand, rather than finding a theory that can also work in other circumstances. All of these things happen with depressing regularity in science as well as religion, politics, and everyday life. In that context, Peirce's motto takes on fresh urgency: "do not block the way of inquiry."

There is, however, a significant gap in the conception of active realism as stated above. In my previous exposition, I took "reality" as mind-independent nature, something that can resist our attempts to deal with it in some particular way that we might prefer (Chang 2012, 220). But "reality" in this sense is like the Kantian thing-in-itself, about which we can and should say nothing; it is nonsense to think that we can learn anything expressible about it. So some other notion must be worked out as to what empirical knowledge is about.

This recognition prompts a new and productive train of thought. The objects of empirical knowledge must be placed in the operable realm. Such objects are not "given," but identified through an effortful process of inquiry, through which we learn how to parse things out at the same time as learning about the parsed-out objects. This is one clear sense in which ontology is empirical.1 My position here is consonant with Ian Hacking's "entity realism," which takes our ability for successful intervention as the sign of the reality of the putative objects involved in the intervention (Hacking 1983, esp. ch. 16). And I seek to generalize Hacking's position further into a coherence theory of reality: a putative entity should be considered real if it is employed in a coherent epistemic activity that relies on its existence and its basic properties (by which we identify it).2

The key notion in the definition of "reality" just given is coherence, by which I mean a harmonious fitting-together of actions that leads to the successful achievement of one's aims, rather than mere logical consistency or any other inferential relation holding among a set of propositions. Coherence may be exhibited in something as simple as the correct coordination of bodily movements and material conditions needed in lighting a match or walking up the stairs, or something as complex as the successful integration of a range of material technologies and various abstract theories in the operation of the global positioning system (GPS). Coherence comes in degrees, with a specific scope, and it is defeasible. The same is the case for reality, if we base the notion of reality on the notion of coherence. Such a perspective exposes a deep problem with the binary thinking in Larry Laudan's pessimistic induction,

1 This is in line with the thoroughgoing empiricism expressed in Dewey (1938).
2 For an explanation of what I mean by an "epistemic activity," see Chang (2014).

and of course also in the realist arguments that he was trying to counter (Laudan 1981). Caloric, phlogiston, ether, or what have you, isn't simply either-real-or-unreal (or referring/non-referring). Caloric had a high degree of reality in Lavoisierian chemical practices, and it still is real to anyone who engages in similar practices. It is pointless and meaningless bravado to declare "but it doesn't really exist!" By that count, what does? (Laudan's main lesson stands.)

I would like to go beyond Hacking in another way, too. He places great emphasis on the existence/reality of entities, but that is not enough. We also need to think about what else we can credibly know and say about the entities. So we need to have a notion of truth, a usable one in the operable realm, to underpin our practice of making, assessing, and accepting various statements.3 I would like to propose a pragmatist coherence theory of truth, according to which a statement is true if (belief in) it is needed in a coherent epistemic activity (or system of practice). This notion of truth, like the notion of reality given above, is a matter of degrees and circumstances.

These remarks are only preliminary sketches pointing to a full-fledged account, which will be published in a book titled Realism for Realistic People.4 It will be framed, most of all, within a revitalized pragmatism, deploying a pragmatist (or operational) notion of coherence to its full potential within a conception of knowledge as ability rather than as information. Rejecting traditional notions of scientific realism does not mean abandoning the pursuit of truth and reality, but revising the philosophical concepts of truth and reality to make them meaningful in scientific and everyday practice.

Hasok Chang University of Cambridge Department of History and Philosophy of Science Free School Lane Cambridge CB2 3RH United Kingdom [email protected]

References

3 For a classic discussion in this direction, see Austin [1950] (1979).
4 There is some affinity between my line of thought and what Kosso (1998, ch. 8) has called "realistic realism". I cannot recall whether I owe the "realistic" trope to Kosso, but I would be happy for that to be the case. The same phrase is also used by Mäki (2009), with a rather different intention.

Austin, J. L. 1979. Truth. In Philosophical Papers, 3rd ed., eds. J. O. Urmson and G. J. Warnock, 117-133. Oxford: Oxford University Press. Originally published in the Proceedings of the Aristotelian Society, Supplementary Volume XXIV (1950).
Chang, Hasok. 2012. Is Water H2O? Evidence, Realism and Pluralism. Dordrecht: Springer.
Chang, Hasok. 2014. Epistemic Activities and Systems of Practice: Units of Analysis in Philosophy of Science After the Practice Turn. In Science After the Practice Turn in the Philosophy, History and Social Studies of Science, eds. Léna Soler et al., 67-79. London and Abingdon: Routledge.
Dewey, John. 1938. Logic: The Theory of Inquiry. New York: Henry Holt and Company.
Fine, Arthur. 1986. The Shaky Game: Einstein, Realism and the Quantum Theory. Chicago: University of Chicago Press.
Hacking, Ian. 1983. Representing and Intervening. Cambridge: Cambridge University Press.
Kosso, Peter. 1998. Appearance and Reality: An Introduction to the Philosophy of Physics. New York and Oxford: Oxford University Press.
Laudan, Larry. 1981. A Confutation of Convergent Realism. Philosophy of Science 48(1): 19-49.
Mäki, Uskali. 2009. Realistic Realism about Unrealistic Models. In The Oxford Handbook of Philosophy of Economics, eds. Don Ross and Harold Kincaid, 68-98. Oxford: Oxford University Press.
Van Fraassen, Bas. 1980. The Scientific Image. Oxford: Clarendon Press.

Engaging Philosophically with the History of Science: Two Challenges for Scientific Realism

Author(s): Theodore Arabatzis

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 35-37.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.27095


Focused Discussion Invited Paper

Engaging Philosophically with the History of Science: Two Challenges for Scientific Realism*

Theodore Arabatzis†

I would like to raise two challenges for scientific realists. The first is a pessimistic meta-induction (PMI), but not of the more common type, which focuses on rejected theories and abandoned entities. Rather, the PMI I have in mind departs from conceptual change, which is ubiquitous in science. Scientific concepts change over time, often to a degree that is difficult to square with the stability of their referents, a sine qua non for realists. The second challenge is to make sense of successful scientific practice that was centred on entities that have turned out to be fictitious. Let me elaborate.

To begin with challenge one, there are two versions of the PMI: one focusing on false but successful theories and another on evolving concepts and their shifting referents. The former was defended by Laudan (1981); the possibility of the latter was suggested by Putnam (e.g., 1982, 199). The recent debates on realism have focused on the former version of the PMI. In a recent authoritative overview of the realism debate (Chakravartty 2015), the latter version receives scant treatment. Needless to say, this remark is no criticism of Chakravartty's excellent survey article. Rather, it indicates a lacuna in the recent literature on scientific realism.

I think that Laudan's version of the PMI does not rest on a sufficiently wide evidential base. If one takes into account the stringent criteria of success imposed by realists (e.g., novel predictions), then that evidential base shrinks considerably. This is not to deny that there are significant cases where the ontology of successful theories (such as the phlogiston theory, the caloric theory, and the ether theory) turned out to be vacuous. However, I don't see this as an insuperable obstacle to scientific realism. Given that most, if not all, scientific realists are fallibilists these days, they presumably admit

* Received September 14, 2016. Accepted September 14, 2016.
† Theodore Arabatzis is Professor of History and Philosophy of Science at the National and Kapodistrian University of Athens. He has written on the history of modern physical sciences and on issues in general philosophy of science. He is the author of Representing Electrons (University of Chicago Press, 2006), co-editor of Kuhn's The Structure of Scientific Revolutions Revisited (Routledge, 2012), and co-editor of Relocating the History of Science (Springer, 2015).

the possibility of empirically successful theories turning out to be mistaken. What more do these well-known and much discussed cases show than the actualization of this possibility?

The other version of the PMI, though, the one indicated by Putnam, may be far more threatening to the viability of a realist approach to scientific development. If scientific concepts change beyond recognition over time, as they often do, what sense can we make of their purportedly stable referents? The neglect of this problem in the recent literature is perhaps due to a feeling that it has been resolved by the causal theory of reference (CTR) and its more refined causal-descriptivist descendants (see Psillos 2012). The CTR, however, is not problem-free (see Arabatzis 2012, 154-155). Furthermore, it is not applicable to theoretical objects (e.g., black holes) with indirect causal links, or even no causal links at all, to observable phenomena or experimentally-produced effects.

The second challenge I would like to raise for scientific realism is its, prima facie at least, incompatibility with contemporary historiographical bon sens. As I have argued elsewhere (Arabatzis 2001), realists are in a historiographically awkward predicament: they are compelled to maximize the continuity between past and contemporary science. To put it another way, they are pressed to portray past successful scientific theories as imperfect versions of their contemporary descendants. In other words, they are forced to embrace Whiggism, a historiographical stance rejected by the overwhelming majority of historians of science. The Whiggish orientation of scientific realism has two untoward consequences. First, it hampers the coveted integration of history and philosophy of science. Second, it impedes the understanding of past scientific practice that involved entities, postulated by successful scientific theories, that have turned out to be non-existent.1

Let's assume, for instance, that the ether does not exist. What then were 19th century physicists doing when they (thought they) were investigating the properties of ether and its interaction with matter? If they talked about nothing and structured their practice around a non-existent object, how was it that their investigations were so productive, leading, among other things, to the birth of microphysics (Buchwald 2000)? In those cases anti-realists seem to have an advantage over realists, because they aim at making sense of scientific practice regardless of its ability to track truth regarding the unobservable realm. And this is the terrain on which the debate on scientific realism should be played out. If we narrow our focus to issues about whether we are justified in believing

1 This is not unanimously accepted. Some philosophers of science (e.g., Chang 2012) have advocated the resurrection of long-gone entities such as phlogiston, whereas others (e.g., Psillos 1999) have argued that obsolete entities such as the ether could be identified with their contemporary counterparts, e.g., the electromagnetic field.

in successful scientific theories (and the ontologies they sanction), then the prospects of overcoming the current standoff in the realism debate, a standoff noted by several commentators (see, e.g., Forbes 2016), are rather slim. If, on the other hand, we evaluate realism and anti-realism according to their capacity to make sense of scientific practice, both past and contemporary, hopefully we'll be able to move beyond the present stalemate.

Theodore Arabatzis Department of History and Philosophy of Science National and Kapodistrian University of Athens, University Campus Ano Ilisia, 157 71 Athens Greece [email protected]

References

Arabatzis, Theodore. 2001. Can a Historian of Science be a Scientific Realist? Philosophy of Science 68(3): S531-S541.
Arabatzis, Theodore. 2012. Experimentation and the Meaning of Scientific Concepts. In Scientific Concepts and Investigative Practice, eds. U. Feest and F. Steinle, 149-166. Berlin: de Gruyter.
Buchwald, Jed Z. 2000. How the Ether Spawned the Microworld. In Biographies of Scientific Objects, ed. Lorraine Daston, 203-225. Chicago: University of Chicago Press.
Chakravartty, Anjan. 2015. Scientific Realism. In The Stanford Encyclopedia of Philosophy, Fall 2015 Edition, ed. E. N. Zalta. http://plato.stanford.edu/archives/fall2015/entries/scientific-realism.
Chang, Hasok. 2012. Is Water H2O? Evidence, Realism and Pluralism. Dordrecht: Springer.
Forbes, Curtis. 2016. A Pragmatic, Existentialist Approach to the Scientific Realism Debate. Synthese, First Online. doi:10.1007/s11229-016-1015-2.
Laudan, Larry. 1981. A Confutation of Convergent Realism. Philosophy of Science 48(1): 19-49.
Psillos, Stathis. 1999. Scientific Realism: How Science Tracks Truth. London & New York: Routledge.
Psillos, Stathis. 2012. Causal Descriptivism and the Reference of Theoretical Terms. In Perception, Realism, and the Problem of Reference, eds. A. Raftopoulos and P. Machamer, 212-238. Cambridge: Cambridge University Press.
Putnam, Hilary. 1982. Three Kinds of Scientific Realism. Philosophical Quarterly 32(128): 195-200.

Gravitational Waves and Scientific Realism

Author(s): Harry Collins

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 38-41.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.27042


Focused Discussion Invited Paper

Gravitational Waves and Scientific Realism*

Harry Collins†

I started following the attempts to detect gravitational waves (GW) using terrestrial detectors shortly after the business started in the 1960s. My fieldwork began in 1972 and I have been pursuing the subject ever since. The first paper I wrote about GW (Collins 1975) set out what I would later (Collins 1985) call "the experimenter's regress." It argued that, on its own, replicating the experiments could not prove whether some claimed phenomenon was real or not. This was because one could not tell whether the replicating experiments had been performed adequately. Because experiment is a skilful business and depends on tacit knowledge, the only reliable way of knowing whether an experiment has been properly carried out is to examine the results. But, if the right results from such an experiment are what is in question, then there is no reliable criterion and the argument about whose experiments were good and whose were poor can go on for ever. In later work I tried to show that Joe Weber's early claims were killed, not just by the negative results of others, but by the determined efforts of certain scientists to get everyone to agree about how to divide the whole set of experiments into competent and incompetent (e.g., Collins 1985).

Over the nearly half-century of terrestrial GW detection work up to 2015 there have been about half-a-dozen claims to have seen the waves with detectors more sensitive than Weber's, but all of them have led to fierce arguments and eventual consignment to the waste-bin of scientific history. I've described them in my 2004 book and shown how each of them illustrates the problem of the experimenter's regress. I've also described, in 2011 and 2013, two passages of analysis of fake data, known, respectively, as the "Equinox Event" and "Big Dog." These were "blind injections" deliberately but secretly inserted into the huge interferometers to cause the scientists to practice and rehearse their skills in preparation for the real thing. In these cases, I showed that the data by themselves were not decisive

* Received April 16, 2016. Accepted April 16, 2016.
† Harry Collins is Distinguished Research Professor and directs the Centre for the Study of Knowledge, Expertise and Science (KES) at Cardiff University. He is an elected Fellow of the British Academy. He is the author of 21 books; his 45-year project on the sociology of gravitational wave detection was completed with the publication of Gravity's Kiss in 2017.

because the analysis rested on a whole raft of philosophical assumptions or taken-for-granted procedures. The validity of these procedures could not be proved by calculation or logic, so even though the injected data should have been decisive (if fake), the process of reaching an agreement about what it meant involved social agreement.

Though in my early work I thought what I was doing was proving something about the nature of the physical world—that it was a "social construction"—by 1981 I reached the conclusion that no such thing could be proved. All I could show was that Nature, or the data that were taken to represent Nature, were not decisive in the short term so long as there were scientists ready to argue against any particular interpretation; data are "interpretatively flexible." But whether our long-term conclusions are forced on us by something like Nature's "hidden hand" we do not know. By 1981 the conclusion I had reached was simply that though we could not know whether realism was true or not, we could know that it was fatal to the social analysis of science. The question asked by the social analyst of science is: "Why do scientists believe 'p' rather than 'not-p' and how do they come to this belief?" If the trump of reality is always up the sleeve of the analyst, there is little chance that the question will be pushed to the limit, because social inquiry can be trumped anytime the analyst fancies: "Scientists came to believe this because it is true, rational, or whatever." Thus, since 1981 my position has been "methodological relativism," in which reality-trumps are not allowed. (See also Collins 2015 for a discussion of this in the context of the scientific realism debate.)

Now, the question is, how does the wonderful event of September 14, 2015 affect all of this? I have already indicated the problem by calling it "a wonderful event" rather than "the bit of interpretively flexible data that turned up on September 14," and I'll now make things worse. I learned of the event on a Monday evening from my regular scanning of the scientists' emails; this was just a few hours after it was first seen. As with most of the other scientists, I first assumed it was just another of the many false alarms which afflict such a sensitive apparatus. But by Thursday morning I had concluded it was the "real thing" and I had started to write the book I knew I was going to have to write (Collins 2017).

The real thing! How can I say such a thing? Well, the reason I have chosen to spend more than forty years immersed in a group of scientists is because I like the scientists and I love the completely crazy project: to try to spot cosmic events by recording changes down to 1/10,000 of the diameter of a proton in a four-kilometre interferometer arm—something which a large number of scientists said would never work. It was the remote chance that the impossible would be made possible before I died that kept me going, along with pleasure in the company of the honest,

larger-than-life, and sometimes slightly mad characters who thought it was worth devoting their lives to it. In other words, most of the time I was taking the role of a native and that is what made the whole thing viable. And when I saw what was happening in the first few days after September 14 I was seeing it as a native and loving it as a native—wallowing in realism—a realism that carried all the way through to the press conferences on February 11 when the secret was announced to the world: "We have detected gravitational waves; we've done it."

But the job of the sociologist, and certainly the job of the methodological relativist, is also to step back—to see the same world from the estranged perspective. Why do people, such as you, dear reader, believe any of it? Just why is this gravitational wave detection different from all other gravitational wave detections? And what use would it be to say "because this time it is real"? The answer must be in terms of why no one is doubting (I should say, no one in the mainstream is doubting). What has happened here is that, in terms of the title of my 1985 book, we have seen "the order of things" changing; from now on, when gravitational wave research is done, it will be done against a different background of taken-for-granted assumptions and it will be possible to judge the competence of the experiments by reference to their outcomes. The job of the methodological relativist is to document and try to explain that social transformation without reference to reality.

Harry Collins Cardiff University School of Social Sciences Cardiff CF10 3WT UK [email protected]

References

Collins, H. M. 1975. The Seven Sexes: A Study in the Sociology of a Phenomenon, or The Replication of Experiments in Physics. Sociology 9(2): 205-224.
Collins, H. M. 1981. What is TRASP?: The Radical Programme as a Methodological Imperative. Philosophy of the Social Sciences 11(2): 215-224.
Collins, Harry. 1985. Changing Order: Replication and Induction in Scientific Practice. Beverly Hills & London: Sage. 2nd edition 1992, Chicago: University of Chicago Press.
Collins, Harry. 2004. Gravity's Shadow: The Search for Gravitational Waves. Chicago: University of Chicago Press.


Collins, Harry. 2011. Gravity's Ghost: Scientific Discovery in the Twenty-First Century. Chicago: University of Chicago Press.
Collins, Harry. 2013. Gravity's Ghost and Big Dog: Scientific Discovery and Social Analysis in the Twenty-First Century. Chicago: University of Chicago Press.
Collins, Harry. 2015. Contingency and "The Art of the Soluble." In Science as It Could Have Been: Discussing the Contingent/Inevitable Aspects of Scientific Practices, eds. L. Soler, E. Trizio and A. Pickering, 177-186. Pittsburgh: University of Pittsburgh Press.
Collins, Harry. 2017. Gravity's Kiss: The Detection of Gravitational Waves. Cambridge, MA: MIT Press.

Motives for Research

Author(s): Arthur Fine

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 42-45.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.27048


Focused Discussion Invited Paper

Motives for Research*

Arthur Fine†,‡

Some facts on which we can agree: no epistemic argument supports a robust form of realism in general as against an equally robust form of antirealism. For example, none of “provides a better explanation for,” or “accounts better for the success of,” or “makes more intelligible than” tilts the balance in general one way or another. But are there perhaps heuristic arguments? Einstein, famously, encountered calculations by Lorentz using an auxiliary temporal variable, took them realistically, and so was led to the “proper time” of special relativity. And he did it again, turning Planck’s expedient of fictional clumps of emitted energy (which need not “really exist somewhere in nature”) into the “light quanta” that account for the photoelectric effect and that we know today as photons. In these circumstances a realist attitude seems to have motivated work that led to significant scientific progress. Exactly the opposite seems to have been true in subsequent developments concerning the quantum theory, where the positivism or instrumentalism of Bohr, Heisenberg, and Pauli marked out progressive paths as against the realist ideas of Schrödinger, de Broglie, or Einstein.1 So with respect to realism and antirealism there may be times to reap and times to sow.

Really? I don’t think so. That is, I do not think there are reliable “best practice” guides that link generic scientific tasks (build theories, measure parameters, look for novel phenomena, etc.) with meta-attitudes like realism, or instrumentalism, or empiricism. Except for the Feyerabendian “anything goes” (or, as I prefer, “anything might go”), there are no dialectical laws for scientific practice. Just as realism (along with other metaphysical attitudes) fails to be rationally required for understanding science, it also fails in a general or generic way to be required for doing science (“scientific progress”).

* Received August 2, 2016. Accepted August 2, 2016.
† Arthur Fine is Professor Emeritus of Philosophy (and Adjunct Professor of Physics and of History) at the University of Washington. His research concentrates on the history and foundations of quantum physics and on interpretive issues relating to the development of the natural and social sciences.
‡ This is the title of Einstein’s 26 April 1918 address in honor of Max Planck’s sixtieth birthday: Motive des Forschens.
1 See Ryckman (2017) for a proper account of these episodes.


Here is why. Suppose that Hendrik proposes a way to calculate the apparent shrinkage that a moving rod seems to undergo in the direction of motion. His calculation uses a certain auxiliary variable t′. Albert, struggling to take in the basis for the novel calculation, thinks (realism) that maybe t′-time with respect to the moving rod really is different from t-time on the stationary reference system. Voilà! Special relativity. But Mileva, working alongside Albert, notes that doing things Albert’s way—including how one might go further (for example, by trying to connect energy and rest mass in one nifty equation)—would be just as well motivated by the thought that t′-time is a good (useful, reliable) way of treating time with respect to the moving rod. No further attitude about the “reality” of times yields further dividends in terms of motivation. The same is true with respect to Max and Albert over the quanta of energy. Real? Useful fictions? It does not matter for doing good work.

This line of argument is much the same as the indifference argument of the Natural Ontological Attitude (NOA), which matches realist moves with antirealist moves in their endless pas de deux over how best to understand science (Fine 1986). The claim here is that indifference applies to motivation as well as to understanding. Of course Mileva does not deny that in fact Albert’s penchant for realism contributed to his pursuit of light quanta. Nor would she deny that it played a role in his later dissent from the quantum theory, where he did not pursue developments. The claim is that other attitudes could have had much the same motivational force. This is like the position of constructive empiricism with respect to explanation: if the search for explanation does generally lead to empirical adequacy, then the constructive empiricist who knows this has as much motivation to pursue explanations as does an explanation-hungry realist. And the point of the claim is to subvert any argument that says if you want progress here it would be better to be a[n] —ist.

In a thoughtful examination of these issues, Robin Hendry (2001) agrees about parity in the arguments over how “best” to understand science but denies that this parity entails indifference of motivation. Curtis Forbes (2016) makes a similar move, supporting it with a perceptive study of how different meta-attitudes affected nineteenth-century electrodynamics (in the cases of Weber, Helmholtz, and Maxwell). The parity in arguments over understanding scientific practice, they think, does not extend to motivations, because there are practices that may involve beliefs and inferences only available to one party (say realists, or instrumentalists) and hence could not be motivated otherwise. Hendry concentrates his argument on two practices: conjoining theories and accepting multiple incompatible models. The idea is that only realists will want to conjoin theories and only antirealists will want to work with incompatible models. So where these practices occur they

point differentially, either to realism or to antirealism, in terms of motivation. But do they?

Take conjunction first. The instrumentalist wants theories that work, not just in terms of predicting the phenomena (or empirical adequacy) but also in terms of other virtues, like fruitfulness, for example, in connecting usefully with other theories or models. I call theories with the desired virtues “reliable” (Fine 2001). It follows that an instrumentalist who looks to each of theories X and Y as being reliable has reason to think that connecting them will be reliable as well, and so to pursue the conjunction. As for incompatible models, the realist of course cannot accept incompatibles as true. The instrumentalist is in a similar boat, since incompatibles cannot both be reliable. But both realists and instrumentalists can expect to learn from incompatibles how to pursue their respective goals (truth, reliability) better, and meanwhile they can make the best (in their own terms) of what they have. There are no compelling arguments here, as elsewhere, that show important general differences between decent realist or antirealist attitudes, either in understanding or in motivating practice.

What then of beliefs and inferences that might make a difference? Certainly if I believe that there are nano-widgets and that they are important scientifically, I (may) have good reason to try to find and measure them. (I say “may” since I might be engaged elsewhere.) If I don’t believe in nano-anythings, you might think I have no reason to search for nano-widgets, surely not in order to measure them. But suppose I believe that searching for nano-widgets and their measures could be scientifically useful. Maybe the experimental techniques themselves would be valuable in developing new tools of research. Maybe the search seems likely to turn up important, reliable information (though probably not about nano-widgets). Maybe an excellent way to understand the big-widgets all around us is to model them using the fiction of nano-widgets, and the search will help to improve the model. John Dewey, in the course of a spirited rejection of what we call the “pessimistic meta-induction,” highlights just such considerations:

But the very putting of the question ... induces modification of existing intellectual habits, standpoints, and aims. Wrestling with the problem, there is evolution of new technique to control inquiry, there is search for new facts, institution of new types of experimentation; there is gain in the methodic control of experience. And all this is progress. (Dewey 1916, 101)2

2 Dewey continues, “It is only the worn-out cynic, the devitalized sensualist, and the fanatical dogmatist who interpret the continuous change of science as proving that, since each successive statement is wrong, the whole record is error and folly; and that the present truth is only the error not yet found out.” (Dewey 1916, 101)


In line with Dewey, I see no practice motivated by a search for truth that could not be motivated just as strongly in a quest for reliability. Parity between truth and reliability marks a permanent impasse in arguments between realists and instrumentalists. The impasse does not disappear when it comes to motivations.

Arthur Fine
University of Washington
Seattle, WA 98195-3350 USA
[email protected]

References

Dewey, John. 1916. Essays in Experimental Logic. Chicago: University of Chicago Press.
Fine, Arthur. 1986. Unnatural Attitudes: Realist and Instrumentalist Attachments to Science. Mind 95(378): 149-79.
Fine, Arthur. 2001. The Scientific Image Twenty Years Later. Philosophical Studies 106(1/2): 107-122.
Forbes, Curtis. 2016. A Pragmatic, Existentialist Approach to the Scientific Realism Debate. Synthese. Online First. DOI: 10.1007/s11229-016-1015-2.
Hendry, Robin. 2001. Are Realism and Instrumentalism Methodologically Indifferent? Philosophy of Science 68(3): S25-S37.
Ryckman, Thomas. 2017. Einstein. New York and London: Routledge.

Beyond Realism and Anti-Realism—At Last?

Author(s): Joseph Rouse

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 46-51.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.26979


Focused Discussion Invited Paper
Beyond Realism and Anti-Realism—At Last?*
Joseph Rouse†

I have long argued, beginning with my first philosophical publication 35 years ago (Rouse 1981), that what is wrong with scientific realism is not realist answers to questions to which various anti-realists give different answers. The philosophical problems arise from assumptions shared by realists and anti-realists that are needed to make the questions they are answering seem intelligible, answerable, and pressing. The realism issue is protean, however. Not only can it be formulated in multiple ways; there are also multiple ways of identifying and rejecting the problematically shared assumptions that both motivate and enable the disagreements among realists and anti-realists. In what follows I will distinguish several informative ways to identify and reject the shared assumptions that sustain the dialectic among realists and anti-realists. Each of these critical approaches correctly identifies a problem that arises in the course of working out a sensible question to which realism or an anti-realist alternative would be an appropriate kind of answer. They nevertheless differ in the depth and insight of their diagnoses of the problem. Put another way, the earlier analyses are best understood as consequences of the subsequent, more comprehensive approaches.

Arthur Fine (1986a, 1986b, 1991, 1996) has prominently advanced a first challenge to all sides of the realist debates in a series of papers advocating the “Natural Ontological Attitude,” by asking what these debates are about. For example, they might be understood as advocating alternative goals for scientific inquiry (truth, empirical adequacy, instrumental reliability, the advancement of social interests, and the like). Realists and anti-realists attribute such goals to the sciences as an interpretation that “makes better sense” of scientific practices and achievements. Fine offers a trenchant reply: “Science is not needy [for interpretation] in this way. Its history and current practice constitute a rich and meaningful setting. In that setting, questions of goals or aims or purposes occur spontaneously and locally” (1986a, 148).

* Received July 18, 2016. Accepted July 18, 2016.
† Joseph Rouse is the Hedding Professor of Moral Science in the Philosophy Department and the Science in Society Program at Wesleyan University. His research concerns the philosophy of scientific practice and its implications for philosophy more generally.


Michael Williams makes a similar argument in epistemology more generally, challenging the belief that “there is a general way of bringing together the genuine cases [of knowledge] into a coherent theoretical kind” (1991, 108-109), such that one can make a general case for realist or anti-realist interpretations of knowledge claims.

A second way to undercut the realism issue is to recognize that it presupposes a substantive commitment to a separation between mind and world, and frames the issue in terms of how epistemic “access” to the world is mediated. Anti-realism takes various forms, corresponding to the forms of epistemic mediation taken as primary: empiricists emphasize that we only encounter the world perceptually; instrumentalists highlight predictive reliability and the fallacy of affirming the consequent; conceptual schemers appeal to the conceptual underpinnings of scientific methods and the systematic inferential relations among concepts; social constructivists account for epistemic authority in terms of social interests, practices, or norms. On this formulation, realists bear the burden of proof against anti-realists, because the opacity of epistemic mediation seems the more parsimonious hypothesis. Realists in turn argue that the possibility of radical skepticism about the real confers significance upon their claims. The possibility of the pessimistic induction to the eventual falsification of all scientific theories, or that we might be brains in a vat, supposedly shows why it is important to demonstrate that at least some of our best theories really are referentially successful and/or approximately true. Realism so construed typically endorses the premises of some form of anti-realism, and then tries to argue for realism indirectly. Thus, Richard Boyd’s (1990) explanationist defense of realism starts from the theoretical mediation of all epistemic access and argues indirectly for the approximate truth of meta-methodologically reliable theories, while entity realism or structural realism seeks to exempt manipulable causal powers or “underlying” structure from the mediational opacity of other aspects of our understanding. The appropriate response to all of these arguments is instead to reject the unargued presumption of a separation of mind from world and the consequent need for mediation, an argument first cogently advanced in the introduction to Hegel’s (1977) Phenomenology of Spirit and still not adequately assimilated in much of philosophy.

A third way to dissolve the realism question highlights a problematic commitment to the independence of meaning and truth. Anti-realists are evidently committed to such independence, because they endorse the possibility of understanding what scientific claims purport to say about the world, while denying the kind of access to what the world is “really” like needed to determine whether those claims are “literally” true. We can supposedly only discern whether claims are empirically

adequate, instrumentally reliable, paradigmatically fruitful, rationally warranted, theoretically coherent, or the like. Realists nevertheless agree that understanding theoretical claims and determining whether they are correct are distinct and independent achievements. For realists, it is a significant achievement to determine, for some scientific theory or hypothesis, that this claim, with its semantic content independently fixed, is true. If the determination of the truth or falsity of a claim were entangled with the interpretation of its content, however, such that what the claim says was not determinable apart from those interactions with the world through which we assess its truth, then realists would be unable to specify the claims (i.e., the contents of those claims) about which they want to be realists. Anti-realists in turn could not pick out their preferred proximate intermediary (perceptual appearances, instrumental reliability, social practices or norms) without invoking the worldly access they deny.

Donald Davidson (1984) developed a classic criticism of this assumption and the realist and anti-realist positions that presuppose it. Davidson argued that the only way to justify an interpretation of what a claim says is to show that this interpretation maximizes the truthfulness and rationality of the entire set of beliefs and desires attributed to a speaker in conjunction with that interpretation. Otherwise, any attribution of false beliefs to the speaker would be justifiably open to a response that attributes the error to the interpretation rather than to the claims interpreted. Only against the background of extensive understanding of what is true can we also understand the objective purport and content of beliefs and utterances. Davidson rightly concluded that “[n]othing, no thing, makes our sentences or theories true: not experience, not surface irritations, not the world...” (1984, 194).

A similar line of argument informed Wilfrid Sellars’s (1997, 2007) criticisms of phenomenalism and empiricist suspicions about modal and normative claims. Just as Davidson argued that we can only understand what a speaker means against a background understanding of what is true, Sellars argued that we cannot understand claims about empirical appearances (e.g., “looking red”) unless we understand the relevant worldly properties (“red”) and their counterfactual applicability and normative significance for understanding, justification, and action. These lines of argument undercut what is at issue in debates over realism and its anti-realist alternatives. We can only understand claims about the world as part of our understanding of the world, and hence it is not intelligible to presume a grasp of the content of those claims independent of a larger pattern of interaction with and understanding of their intended objects. Understanding the world and understanding what our beliefs and utterances say about the world are not separable enterprises.

I endorse this broadly Davidsonian strategy for rejecting both realism

and its anti-realist alternatives, but the strategy has been difficult to carry through successfully. John McDowell (1994) cogently argued that Davidson’s own project undermines its motivating insight by maintaining a systematic distinction between the rational interpretation of conceptual content and causal interaction with the world in perception and action. I have argued in turn (Rouse 2002, 2015) that Brandom’s (1994), Haugeland’s (1998), and McDowell’s (1994) efforts to undertake a similar strategy in a different way nevertheless fail for parallel reasons.

Those arguments point toward a fourth way to express what is wrong with the question to which realism or some form of anti-realism is posed as an answer. The realism debates presuppose an objectionably anti-naturalist account of conceptual contentfulness and justification. Although many philosophers of science advocate realism to fulfill a commitment to naturalism, my arguments show why that aspiration is misguided. Both realism and the various anti-realisms are committed to an irreconcilably dualist rift between the normativity of meaningful and justifiable scientific claims about the world, and our causal entanglement within the world. Rouse (2002) shows how the broader debates over naturalism in philosophy, which incorporate the arguments over realism, have led to this unacceptable dualism. Realist and anti-realist interpretations of scientific claims thereby fail to meet an indispensable criterion for any adequate philosophical naturalism: they do not make intelligible how scientific understanding of the world is itself a scientifically comprehensible natural phenomenon.

Rouse (2015) then works out the basis for a more adequately naturalistic account of scientific understanding. I draw upon recent developments in evolutionary biology and philosophical work on scientific understanding in practice to display human conceptual capacities generally, and scientific understanding specifically, as forms of biological niche construction in the human lineage. This account blocks both realism and anti-realism by showing how the contentfulness of scientific claims about the world is worked out as part of ongoing interaction with our developmental and selective environment. Scientific claims and the conditions for their intelligibility are part of that environment, and only acquire meaning and justification as part of our ongoing efforts to articulate that environment conceptually from within. There is no gap between how the world appears to us and how it “really” is for realists to overcome, or for anti-realists to remain safely on the side of those appearances. Scientific understanding instead develops hard-won, partial articulations of the world. Within those conceptually articulated domains we can differentiate locally between what our theories and models say about the world, and whether what they say is correct or in need of some form of revision. Both conceptual understanding and its assessment nevertheless presuppose the kind of access to the world that anti-realists deny

and realists seek to secure. In the wake of these arguments, we should stop asking the questions to which realism or anti-realism would pose answers. Unless they can develop an adequate critical response to these arguments, realists must abandon any commitment to philosophical naturalism. They would instead share with their anti-realist opponents the need to defend their conceptions of scientific understanding with the recognition that these conceptions conflict with what the sciences have to say about our own conceptual capacities.

Joseph Rouse
Wesleyan University, Department of Philosophy
350 High Street
Middletown, CT 06459 USA
[email protected]

References

Boyd, Richard. 1990. Realism, Approximate Truth, and Philosophical Method. In Scientific Theories, ed. C. W. Savage, 355-91. Minneapolis: University of Minnesota Press.
Brandom, Robert. 1994. Making It Explicit. Cambridge, MA: Harvard University Press.
Davidson, Donald. 1984. Inquiries into Truth and Interpretation. Oxford: Oxford University Press.
Fine, Arthur. 1986a. The Shaky Game. Chicago: University of Chicago Press.
Fine, Arthur. 1986b. Unnatural Attitudes. Mind 95(378): 149-79.
Fine, Arthur. 1991. Piecemeal Realism. Philosophical Studies 61(1/2): 79-96.
Fine, Arthur. 1996. Science Made Up. In The Disunity of Science, eds. Peter Galison and David J. Stump, 231-54. Stanford: Stanford University Press.
Haugeland, John. 1998. Having Thought. Cambridge, MA: Harvard University Press.
Hegel, G. W. F. 1977. Phenomenology of Spirit. Translated by A. V. Miller. Oxford: Oxford University Press.
McDowell, John. 1994. Mind and World. Cambridge, MA: Harvard University Press.
Rouse, Joseph. 1981. Kuhn, Heidegger, and Scientific Realism. Man and World 14 (October): 269-290.
Rouse, Joseph. 2002. How Scientific Practices Matter. Chicago: University of Chicago Press.
Rouse, Joseph. 2015. Articulating the World. Chicago: University of Chicago Press.


Sellars, Wilfrid. 1997. Empiricism and the Philosophy of Mind. Cambridge, MA: Harvard University Press.
Sellars, Wilfrid. 2007. In the Space of Reasons: Selected Essays of Wilfrid Sellars. Cambridge, MA: Harvard University Press.
Williams, Michael. 1991. Unnatural Doubts. Oxford: Blackwell.

BEWARE OF Mad DOG Realist

Author(s): Alan Musgrave

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 52-64.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.27051


Focused Discussion Invited Paper
BEWARE OF Mad DOG Realist*
Alan Musgrave†

Once, in America, I got into an argument with some local antirealist philosopher, and must have said something outrageous. Whereupon Bill Lycan exclaimed, “Oh, you mad-dog realist you, I love it!” Before I left to come home to New Zealand, my friend Deborah Mayo gave me a Wild West sign as a farewell present. That sign is the title of my paper. Though I am very proud of my sign, it is a little misleading. I am not really a mad-dog realist—more like a lap-dog realist.

If science is to be believed, we are a bunch of middle-sized land mammals. We are not the biggest land mammal on the planet, or the strongest, or the swiftest—but we are the smartest. We are so smart that we can even try to figure out how the planet works. That is called “SCIENCE.” And it is one of the great achievements of humanity. The view that science seeks to understand how the world works is called scientific realism. Realism is the obvious, commonsensical view, the instinctive philosophy of most working scientists. It astonishes me that many people try to rob us of this great achievement, by going in for antirealist views that deny that science can or should seek to understand the world. I have tried to expose this act of intellectual thievery for what it is. There is an old proverb: “Set a thief to catch a thief.” It takes a philosopher to catch the antirealist philosophers out.

Trouble arises because realism comes in different versions, some more optimistic than others. Optimistic realists claim that science not only seeks to explain things, but can also know for certain when it has found a true explanation. Even more optimistic realists claim that science can achieve ultimate explanations about which no further explanatory question

* Received July 16, 2016. Accepted July 16, 2016.
† Alan Musgrave (b. 1940) has been Professor of Philosophy at the University of Otago (New Zealand) since 1970. He was educated at the London School of Economics, where his PhD was supervised by Sir Karl Popper. With Imre Lakatos he edited Criticism and the Growth of Knowledge (1970), a most influential collection of essays in twentieth-century philosophy of science. His other publications include Common Sense, Science and Scepticism (1993), Essays on Realism and Rationalism (1999), and Secular Sermons (2009). His views were the subject of a collection of essays, Rationality and Reality: Conversations with Alan Musgrave (2006).

can be asked. This is mad-dog realism proper, the view that science can achieve certainly true and ultimate explanations. Antirealists object to these claims, and throw the realist baby out with the mad-dog realist bathwater. Antirealists rightly point out that a scientific explanation can never be certain—and conclude that science can never achieve truth. But this wrongly conflates truth with certainty. Again, antirealists rightly point out that a scientific explanation can never be ultimate—and conclude that science can never really explain anything. But this wrongly conflates explanation with ultimate explanation. The question is whether we can resist these mad-dog realist conflations and work out a defensible lap-dog version of realism.

Why can science not achieve certainty? Because science is based upon observation and experiment, yet its principles transcend observation and experiment in three different ways. First, the principles are general, while the results of observation and experiment are never completely general, however numerous they may be. This is the classical problem of induction. The favourite example is that observing lots of white swans cannot prove that all swans are white, since the next swan we come across might be a black one. (As you can see from this example, most philosophers of science come from the northern hemisphere; an unkind soul once joked that having black swans in it was Australia’s chief contribution to the philosophy of science!) The second way that theory transcends observation is precision. Scientific observation or measurement is always to some degree imprecise, while some scientific theories are mathematically precise. You cannot prove a precise theory from imprecise measurements. The third way that theory transcends observation is in terms of observability itself. We can only observe observable things, while some scientific theories postulate unobservable things, events or processes. You cannot prove a theory about unobservables from observation. For these three reasons, the observational or experimental methods of science cannot achieve certainty. But for all that, a general and precise theory about unobservable things might well be true. Yes, antirealists say, but is it ever reasonable to think that a scientific theory is true? To which I reply, of course it is. I shall come back to this.

What of the conflation of explanation with ultimate explanation? Little children often play the “explanation game.” They ask a Why-question, and whatever answer the parent gives, they ask another Why-question about that answer. Whether children do this out of genuine curiosity, or just to get attention, I do not know. Back in the infancy of science Aristotle noted this potential infinite regress of explanation. We ask, “Why A?” and answer, “Because B.” But then we can ask, “Why B?” And if we answer, “Because C” we can then ask, “Why C?” ... and so on, ad infinitum. The worry is: have we really explained A by citing B, if B cries out for explanation in its turn?


Well, of course we have. As John Hospers put it:

There is an elementary but widely pervasive confusion on this point. It is said that unless an explanation has been given all the way down to the level of brute fact, no explanation has been given ... at all; e.g. unless we know why water expands on freezing ... we do not really know why our pipes burst. But surely this is not the case. Whether we know why water expands on freezing or not, we do know that it does so, and that it is because of its doing so that the pipes burst in cold weather. When we have asked why the pipes burst, the principle of the expansion of water does give an explanation. When we ask why water expands on freezing, we are asking another question. The first has been answered, whether we can answer the second or not. (Hospers 1946, 341, footnote)

Quite so. Why is there this “elementary but widely pervasive confusion”? What lies behind it is the idea that an explanation should relieve our puzzlement, set our curiosity at rest. Scientific explanations do not do this—they do not relieve puzzlement, but rather relocate it. If we focus on the puzzlement and forget what it is about, we are bound to find scientific explanations wanting. Besides, puzzlement-relief is a very subjective business. What relieves one man’s puzzlement may not relieve the next woman’s. I dare say that more puzzlement has been relieved, down the ages, by the ritual incantation “God moves in mysterious ways” than by all the teachings of science. And I dare say that if puzzlement-relief is your aim, recourse to the whiskey bottle, or to some up-to-date equivalent, would work better than the study of science.

So, a genuine explanation need not be an ultimate explanation, which sets all curiosity at rest. Besides, what would an ultimate explanation be like? When children play the “explanation game,” impatient parents often try to shut them up at some point by saying, “Because I say so.” This is never a good answer to the child’s last question. Yet both candidates for ultimate explanation resemble the impatient parent’s “Because I say so.” The secular candidate is “Because we say so” and the theological candidate is “Because God said so.” To illustrate the secular candidate, imagine the following dialogue between a small child and her parent:

Mummy, why is Uncle John unhappy?

Because he is a bachelor, and bachelors are unhappy.

But Mummy, why are bachelors unhappy?

Because they are unmarried.

But Mummy, why are bachelors unmarried?

BECAUSE WE SAY SO—that’s what the word means.

The secular candidate is Aristotle’s idea of essences or of essential predication. It makes sense to ask why bachelors are unhappy (supposing that they are). But it makes little sense to ask why bachelors are unmarried. Why not? Because we say so, because we use or define the word “bachelor” just to mean an unmarried man.

The theological candidate for ultimacy is “Because God said so.” Is this a better answer, as many people think? Darwin thought not. He drew attention to lots of strange biological phenomena that his theory of evolution could explain, about which the creationist could only say, “God said it should be so.” Darwin complained that this was just “restating the facts in dignified language” rather than explaining them (Darwin 1859, 217; see also Musgrave 2010).

Still, restating the facts in dignified language is very popular and can take you far. John Polkinghorne used to be Professor of Physics at Cambridge University. He accepted the idea that there are ultimate laws of physics, and noticed that a further question can be asked about them that physics cannot answer: “Why are the ultimate laws of physics what they are?” Physics cannot answer this question because a physical explanation of an ultimate law would traffic in deeper laws and show that it was not an ultimate law after all. Polkinghorne concluded that the only answer to the question “Why are the ultimate laws of physics what they are?” is “Because God said so.” And he switched from being Professor of Physics to being Professor of Theology. This was a good move in one respect. His argument won him the Templeton Prize in Theology, which is worth a lot more than the Nobel Prize in Physics (see Polkinghorne 1994, 328; Musgrave 2009)!

Lap-dog realists like me resist the conflations of truth with certainty and of explanation with ultimate explanation. Yet these conflations run deep. Where do they come from? Perhaps they come from a religious or theological bent of mind. Religions traffic in certainties and ultimate explanations—or pretend to. And if we expect science to rival religion in these matters, we are bound to be disappointed. We are bound to fall back on the view that science does not provide genuine explanations at all. So there is an unholy alliance (forgive the pun) between religion and antirealism about science.

This unholy alliance was first forged in the sixteenth century out of the clash between science and religion. The Copernican theory that the earth moves clashed with certain biblical passages that, literally interpreted, said or implied that the earth does not move. Cardinal Robert Bellarmine, member of the Inquisition, who was later made a saint, found a way to dissolve the contradiction. Bellarmine was an acute thinker. He saw that a false theory might make nothing but true predictions, might “save the observable phenomena” as the ancients put it. He granted that Copernican theory “saved the phenomena” but denied that this showed it to be true. He advised the Copernicans to stop saying that their theory was true, and say instead only that it saved the phenomena, that the phenomena were as if it were true. As he put it, “This has no danger in it, and it suffices for mathematicians” (Letter to Foscarini, April 1615, reprinted in de Santillana 1955, 98-100). Pope Urban VIII agreed and went further.
He challenged Galileo to show not just that the Copernican theory saved the phenomena, but that the phenomena could be saved in no other way, that an omnipotent God could not have fixed things so that it was merely as if the earth moved about the sun, that the earth must move about the sun. In other words, he challenged Galileo to show that the Copernican theory was not just true but necessarily and ultimately true. Poor Galileo could not do this. He did propose his famously mistaken theory of the tides, describing them as “physical effects whose causes can perhaps be assigned no other way” than by supposing the earth to move (Galilei 1632). But at the end of his great Dialogue on the Two Chief World Systems of 1632, Galileo gave up. His character Simplicio, so far presented as the idiot Aristotelian, reiterates the pope’s argument that God could have produced the tides in the oceans in some other way than by moving the earth, and that “it would be excessive boldness for anyone to limit and restrict the Divine power and wisdom to some particular fancy of his own.” Galileo’s spokesman Salviati is stumped by what he calls this “admirable and angelic doctrine,” and the Dialogue ends. Despite this, the Dialogue got Galileo into trouble. Bellarmine had ordered him not to teach or defend the truth of the Copernican system, and he had evidently disobeyed the order. Galileo was summoned before the Inquisition, forced to recant his teachings, placed under house arrest for the rest of his life, and forbidden to have visitors or to publish anything further.

Thus was antirealism about science invented by Bellarmine and Pope Urban VIII to dissolve the clash between Copernican science and their religion.

A more amusing example of the same line of thinking came in the nineteenth century, when geologists were amassing huge bodies of evidence for the great antiquity of the earth, and when the discovery of fossils of extinct creatures fuelled evolutionary speculations. Philip Gosse was a famous naturalist and explorer, the author of forty books, the David Attenborough of his age. He was also a biblical fundamentalist who thought that the earth and all the creatures on it were specially created a few thousand years ago. Gosse agonized for years about the clash between his science and his religion. Then he hit upon a brilliant solution. In 1857, two years before Darwin’s Origin of Species, Gosse published Omphalos: An Attempt to Untie the Geological Knot. “Omphalos” is Greek for belly button. Gosse’s book begins with an erudite

discussion of whether Adam and Eve had belly buttons. Think about it—it is a good question for a creationist! Gosse soberly concludes that Adam and Eve probably did have belly buttons. He soberly concludes, in other words, that God created them as if they had been born of women and were not the first people. It is the same, Gosse argued, with the rocks. God created them a few thousand years ago bearing all the marks of great age, including the fossils, created them as if the teachings of geology and evolution were true. Similarly, God created trees with growth rings in them, crocodiles with fully-formed teeth in their jaws, and so on. Philip Gosse’s son Edmund describes what happened:

Never was a book cast upon the waters with greater anticipations of success ... My Father lived in a fever of suspense, waiting for the tremendous issue. This “Omphalos” of his ... was to fling geology into the arms of Scripture, and make the lion eat grass with the lamb. ... He offered it, with a glowing gesture, to atheists and Christians alike. ... But, alas! Atheists and Christians alike looked at it, and laughed, and threw it away. (Gosse 1907, 105)

Why did the geologists laugh at Gosse’s hypothesis? They had not a scrap of geological evidence against it. “God created the earth as if geology were true” is evidentially equivalent to geology.

For the most extreme example of this kind of thinking, we must jump back in time to a philosopher: George Berkeley, Anglican Bishop of Cloyne. Berkeley was a more radical thinker than most antirealists. He did not just dispute the truth of some particular scientific theory. He thought that all science was false. More radically still, he thought that our commonsense belief in external objects, out of which the sciences grow, is also false. Common sense posits external objects to explain regularities in our experience. I see a tree, I shut my eyes for a moment, and when I open them again I see the tree again. Why? Common sense answers that there is a tree out there, existing independently of me, and somehow causing my tree-experiences. Berkeley says this is wrong. There is no tree. Reality is entirely spiritual. Reality consists of finite spirits, like you and me, and an infinite spirit, God. Of course, we finite spirits have tree-experiences. But our tree-experiences are not caused by trees. Rather, they are plonked directly into our minds by God. It is just that God plonks tree-experiences into our minds as if our belief in trees were true.

Berkeley’s philosophy is mad. Yet I have a love/hate relationship with it. I love it because it reduces to absurdity the antirealist lines of thinking that I hate. Science grows out of common sense, and no sharp line can be drawn between them. Thus it is hard to avoid slipping upward from realism about common sense to realism about science. But slippery slopes run both ways.

If you deny scientific realism you will find it hard to avoid slipping down into Berkeley’s denial of commonsense realism as well. So Bellarmine said that God produces astronomical phenomena as if Copernican theory were true, and Gosse said that God produces geological phenomena as if geology were true, and Berkeley went the whole hog and said that God produces all phenomena as if common sense and science were true. It is no accident that in all these cases God gets into the picture. Bellarmine, Berkeley, and Gosse invented antirealism about science to dissolve clashes between science and religion.

But the religious dimension is not essential. Instead of saying that God produces phenomena as if some theory were true, we can say that the phenomena just are as if that theory were true. So, we can form a theological surrealist transform of any theory T by saying, “God produces phenomena as if T were true” (TG), and we can form a secular surrealist transform of it by saying, “The phenomena are as if T were true” (T*). Here “surrealism” is short for “surrogate realism.” A theory and its surrealist transforms are empirically or observationally equivalent. No observation or experiment can decide between them. If we are strict empiricists for whom only evidence should determine theory-choice, there is nothing to choose between them and they are equally good theories.

But we should not be strict empiricists. There is a world of difference between T, TG, and T* from an explanatory point of view. We may assume that the scientific theory T we started with explains its phenomena somehow. The theological surrealist transform of T (TG) gives an all-purpose theological explanation of those same phenomena, namely “God said so.” The secular surrealist transform of T (T*) gives no explanation of the phenomena at all. Better the all-purpose theological explanation than no explanation at all. The trouble is that we poor humans cannot understand that explanation and are not meant to understand it. God plonks tree-experiences into our minds as if there were trees. How does She do that? How does divine causality operate? We have no idea and we are not meant to have any idea. We are just meant to comfort ourselves with the thought that God is omnipotent and can do anything.

Many thought God the weak link in Berkeley’s system and removed Her from it. Berkeley minus God is secular surrealism, or as it is more usually called, “phenomenalism.” This view is remarkably popular among philosophers—and scientists, too, for that matter. The great philosopher-physicist Ernst Mach agreed with Berkeley that the universe consists entirely of sensations. Mach had no explanation for why sensations of trees occur in a regular fashion. Trees do not explain it, and neither does God. Regularities in our experience are for Mach the ultimate inexplicable laws of nature. Phenomenalism also became the rage among philosophers obsessed with

problems about meaning. In the twentieth century all the best journals of so-called “analytic philosophy” were full of learned articles devoted to the project of “translating” statements about external objects into statements about actual and possible sense-experiences. I once wrote that Berkeley denied the existence of external objects. The reviewer of my book in Mind, the leading British philosophy journal, described this as an “egregious error” (Dancy 1994, 215). No, Berkeley did not deny the existence of trees, he just told us what we “really mean” when we assert their existence. What we “really mean” when we say that the tree continues to exist when we shut our eyes is that we will have more tree-experiences if we open our eyes. Well, that is not what I mean when I say that the tree continues to exist when I shut my eyes.

Getting back to antirealism about science, Bellarmine’s view became remarkably popular. Its leading defender in the nineteenth century was the great philosopher-scientist Pierre Duhem, who, like his hero Bellarmine, was a devout Catholic. Its leading contemporary advocate is Bas van Fraassen, who as it happens is a late convert to Catholicism. Van Fraassen calls his position “constructive empiricism.” A scientific theory is an intellectual tool or instrument for making predictions—or as van Fraassen’s slogan puts it, “The name of the [scientific] game is saving the phenomena” (van Fraassen 1980, 93). A theory is a perfectly good theory if it makes nothing but true predictions, if it is empirically adequate. Of course, a false theory might be empirically adequate. Never mind: “a theory need not be true to be good,” and science should aim for empirical adequacy rather than truth (van Fraassen 1980, 10). Science should never claim that a theory is true, only that it is empirically adequate, that the phenomena are as if it were true. We should be strict empiricists and admit that there is nothing to choose between empirically equivalent theories.

The famous economist Milton Friedman defended the same view in his seminal piece, “The Methodology of Positive Economics.” Friedman agreed that a scientific hypothesis need not be true to be good. Indeed, he seemed to go further and claim that the less true a hypothesis is, the better:

Truly important and significant hypotheses will be found to have “assumptions” that are widely inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions. (Friedman 1953, 14; see also Musgrave 1981)

Nancy Cartwright said similar things about the laws of physics in her famous book How the Laws of Physics Lie: “The fundamental laws of physics do not describe true facts about reality. The fundamental laws of physics ... are simply false” (Cartwright 1983, 54-55; see also Musgrave 1995).

Then there is Kyle Stanford, the recently elected President of the Philosophy of Science Association. He made his name by resurrecting Pope Urban VIII’s argument, rechristening it “the argument from the unthought-of alternative,” and claiming that it is fatal to scientific realism (Stanford 2006). Pope Urban VIII does not figure in Stanford’s extensive bibliographies—but I dare say that he is resting safe in heaven without the benefit of a footnote from Professor Stanford. The pope’s argument is fatal to mad-dog dogmatic realism; it is not fatal to the lap-dog critical realism that I defend.

I could multiply further examples—but time is short. Instead I will next argue that antirealism is a human chauvinistic philosophy that is at odds with some of the basic teachings of science. If science is to be believed, what human beings happen to be able to observe is an outcome of our particular evolutionary history. Other critters have different sensory systems and can observe things that we cannot. Even among humans, what can be observed varies; after all, some of us are short-sighted or colour-blind. Moreover, what we can observe shifts over time and with technology. People can see things wearing their glasses that they cannot see with the naked eye. Once we allow glasses in, why not telescopes or microscopes or electron microscopes, or any of the many other fancy detection devices that scientists have invented? The observable/unobservable distinction is vague, species-specific, and shifting. And yet, idealists and antirealists give it crucial philosophical significance. The idealists or positivists, following Berkeley, say that the only things that exist are those that humans happen to be able to observe. Why should we accept that? Dogs and cats can hear or smell things we cannot. Should we deny that ultraviolet light exists, just because we cannot see it (though bees can)? Should we deny that atoms exist, just because they are too small for us to see with the naked eye?

Van Fraassen insists that he is no positivist: he concedes that unobservables exist. Yet he gives the observable/unobservable distinction crucial epistemological significance. We humans should only think true statements about what we can observe. We should not think true any statement about things we cannot observe. Such statements should only be accepted as empirically adequate, not as true. Van Fraassen also says, rightly, that it is up to science to tell us what is observable by humans and what is not. But now consider a scientific statement of the form “So-and-so’s are unobservable by humans.” This is not a statement about observables. So the consistent constructive empiricist cannot think it true. The consistent constructive empiricist cannot think it true that anything is unobservable by humans, contrary to what van Fraassen maintained. Since I pointed out this contradiction, a literature has grown up trying to extricate constructive empiricism from it—but I do not have time to canvass that now.

Instead, let us go back to the beginning. I have said, more than once,

that if science is to be believed, such and such is the case. But is science to be believed? Ought we to believe things that are uncertain, things that we cannot prove to be true? Well, of course we ought. We do it all the time, and we are perfectly reasonable to do so.

Notoriously, the term “belief” is ambiguous. It can refer to the content of the belief, the proposition believed, as when we say that two different people may have one and the same belief. Or it can refer to the mental act or state of believing something. This ambiguity carries over to the question of whether a belief is reasonable or justified. When we speak of justifying or giving a reason for a belief, do we mean justifying or giving a reason for the belief-content or for the belief-act? Obviously, we mean the latter. After all, one person might have a good reason for believing something, and somebody else might believe the same thing for no good reason at all. We deem the first belief (believing) reasonable and the second belief (believing) unreasonable. Yet the belief (belief-content) is exactly the same.

Despite these platitudes, a malign assumption dominates discussions of these matters. The assumption is that a reason for believing a proposition must be a reason for the proposition believed. This assumption seems self-evident. After all, to believe something is to think it true or more likely true than false. So a reason for believing something must show that what is believed is true or more likely true than false—must it not? Still, the obvious question arises of what a reason for a proposition might be. The answer is equally obvious: logic tells us that. A conclusive reason for a proposition is another proposition that logically entails it. And an inconclusive reason for a proposition is another proposition that entails that it is more likely true than false. (We need an inductive logic to make good this second idea.)

Suppose we accept that logic tells us what reasons for propositions are. Then we will end up with what I call logomania, the view that only logical reasoning can provide us with a reason to believe anything. And logomania, combined with a familiar skeptical point, will result in total irrationalism, the view that no belief (believing) is reasonable. The mere existence of a proposition that entails something that you believe cannot provide you with a reason for your belief. Obviously, you must believe that other proposition. But inferring something from something else that you do not reasonably believe cannot provide you with a reason for your belief either. Obviously, you must reasonably believe that other proposition. So we end up with the logomaniac view that the only reason for believing something is that you have inferred it from some other belief for which you have reason. But now, as skeptics down the ages have tirelessly pointed out, an infinite regress of reasons looms. Since nobody can complete an infinite chain of reasons for what they believe, everybody must start from non-inferential

beliefs. But logomania means that non-inferential beliefs are unreasonable. From this it follows that all beliefs are unreasonable.

If we are to avoid total irrationalism, we must reject logomania. We must think that some non-inferential beliefs are reasonable beliefs. And we must think that the reasons for them are reasons for the believings, not for the propositions believed. The two prime sources of non-inferential beliefs are, of course, sense-experience and testimony. Plain folk, asked why they believe things, often invoke sense-experience or testimony as the answer. “Why do you believe that the cat is on the mat?” “Because I see her there.” “Why do you believe that Everest is the tallest mountain?” “Because Granny told me so.” The answers here invoke the causes of, and the reasons for, acts of believing. Of course, for a logomaniac there are no reasons here at all; seeing the cat or listening to Granny are not propositions, and so cannot be reasons for propositions. But logomania is mistaken, as we have seen.

What goes for beliefs acquired from sense-experience and testimony may go for other beliefs as well. They, too, may be non-inferential, in the sense that they are not brought about by inferring them from other beliefs. And they, too, may be reasonable beliefs, but the reasons for them are reasons for the believings, not for the propositions believed. That a hypothesis has best withstood serious criticism is a reason for believing it. But it is not a reason for the hypothesis itself; it does not show that the hypothesis is true or more likely true than not. That is how Karl Popper solved the problem of induction, and claimed that inductive logic is a myth. Again, that a hypothesis provides the best available explanation of some phenomena is a reason for believing it, but it is not a reason for the hypothesis itself. Critics of Popper, and of inference to the best explanation, all presuppose logomania—or so I have argued.

Can it be so simple? A hypothesis might survive a critical discussion, or provide the best explanation of something, and yet still be false. Can it ever be reasonable to believe a falsehood, as I seem to be saying? Well, of course it can. To be sure, if we find out that what we reasonably believe is false, it is no longer reasonable to believe it. But what we do and should say in such cases is that what we reasonably believed was wrong—not that we were wrong or unreasonable to have believed it.

I began this abstruse discussion by asking, “Is science ever to be believed?” My answer should by now be obvious. Of course it is. In the sciences we have our best epistemic engine. It is not an infallible engine. Its best-tested results may yet be overthrown. Still, we ought to believe them, tentatively, just as we ought to believe some basic teachings of common sense, out of which the sciences have grown. Away with all that antirealist philosophy! Oops—perhaps I am a mad-dog realist after all.


Alan Musgrave Department of Philosophy University of Otago PO Box 56 Dunedin 9054 New Zealand [email protected]

References

Cartwright, Nancy. 1983. How the Laws of Physics Lie. Oxford: Oxford University Press.
Dancy, Jonathan. 1994. Review of Common Sense, Science and Scepticism: A Historical Introduction to the Theory of Knowledge, by Alan Musgrave. Mind 103(410): 214-216.
Darwin, Charles. 1859. On the Origin of Species by Means of Natural Selection. 1st ed. London: John Murray.
de Santillana, Giorgio. 1955. The Crime of Galileo. London: Heinemann.
Friedman, Milton. 1953. The Methodology of Positive Economics. In Essays in Positive Economics, 3-43. Chicago: University of Chicago Press.
Galilei, Galileo. 1632. Dialogue Concerning the Two Chief World Systems—Ptolemaic and Copernican. Translated by Stillman Drake. Berkeley and Los Angeles: University of California Press, 1962.
Gosse, Edmund. 1907. Father and Son. London: William Heinemann. Reprint, London: Penguin Books, 1989.
Hospers, John. 1946. On Explanation. Journal of Philosophy 43(13): 337-356.
Musgrave, Alan E. 1981. Unreal Assumptions in Economic Theory. Kyklos 34: 377-387. Reprinted in Bruce Caldwell, ed. 1984. Appraisal and Criticism in Economics: A Book of Readings, 234-244. London: Allen & Unwin; and in Musgrave 1999, 158-161.
Musgrave, Alan E. 1995. Realism and Idealization (Metaphysical Objections to Scientific Realism). In The Problem of Rationality in Science and Its Philosophy: On Popper vs. Polanyi, the Polish Conferences 1988-89, ed. J. Misiek, 143-66. Dordrecht and Boston: Kluwer Academic Publishers. Reprinted in Musgrave 1999, 131-53.
Musgrave, Alan E. 1999. Essays on Realism and Rationalism. Amsterdam: Rodopi.
Musgrave, Alan E. 2009. Are Theological Notions Explanatory? In Secular Sermons: Essays on Science and Philosophy, 197-201. Dunedin: Otago University Press. Originally published in 1993.
Musgrave, Alan E. 2010. Darwin's Life and Method. In Aspects of Darwin: A New Zealand Celebration, ed. D. Galloway and J. Timmins, 20-35. Dunedin: Friends of the Knox College Library.
Polkinghorne, John. 1994. Theological Notions of Creativity and Divine Causality. In Science and Theology: Questions at the Interface, eds. M. Rae, H. Regan and J. Stenhouse, 225-237. Edinburgh: T & T Clark.
Stanford, P. K. 2006. Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives. Oxford: Oxford University Press.
Van Fraassen, Bas C. 1980. The Scientific Image. Oxford: Clarendon Press.

A Dilemma for the Scientific Realist

Author(s): Howard Sankey

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 65-67.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.26352


Focused Discussion Invited Paper
A Dilemma for the Scientific Realist*
Howard Sankey†

Contemporary discussion of scientific realism is fuelled by debate between scientific realists and advocates of various forms of anti-realism. Arguments for and against scientific realism have led to vigorous debate and the development of compromise positions. Yet an unresolved tension remains at the heart of the realist position. I wish to draw attention to a problem that divides scientific realists among themselves. The problem stems from the relation between science and common sense.

As traditionally understood, scientific realism is a view about the aim of science. The aim of science is to discover the truth about the world. Scientific progress consists in progress toward that aim. Progress is either increase in the stock of truths known about the world or convergence on a true theory of the world. Truth is correspondence with reality. A claim about the world is true just in case the world is the way it is said to be. The truth sought by science is not restricted to truth at the empirical level of what is observable by the human senses unaided. Science seeks and succeeds in discovering truths about theoretical entities, properties and states of affairs that are unable to be detected by means of observation alone. The world in which these entities, properties, and states of affairs are found is an objective reality that exists independently of human thought, language, or conceptual activity.

Scientific realism may be distinguished from common-sense realism about the items of ordinary experience. According to common-sense realism, the ordinary items with which we interact on a daily basis are real. They exist independently of human cognitive activity. We arrive at knowledge of such things by means of our senses. Sometimes our senses lead us astray. But for the most part they are reliable. Some ordinary things are artifacts made by humans. Others occur naturally. Items of ordinary common sense

* Received February 3, 2016. Accepted February 3, 2016.
† Howard Sankey is Associate Professor of Philosophy in the School of Historical and Philosophical Studies at the University of Melbourne. He works in epistemology and philosophy of science, and has published books and articles on such topics as incommensurability, rational theory-choice, epistemic relativism and scientific realism. His current work relates to the nature of perceptual warrant, and the role that considerations about perception may play in defence of both common-sense realism and scientific realism.

do not depend for their continued existence on human mental activity. Whereas scientific realism is opposed to anti-realism about theoretical entities, common-sense realism stands opposed to skepticism about the external world and idealist denials of the mind-independent status of ordinary objects.

What is the relation between scientific realism and common-sense realism? They may seem to go hand in hand. Yet a moment's reflection reveals the situation is more complicated. Some argue that there is conflict between science and common sense. Indeed, some take common sense to be an outmoded theory passed down by our primitive forebears. On this view, science shows common sense to be mistaken. With the advance of science, common sense is rejected. As we enter the brave new world of modern science, we bid farewell to the erroneous beliefs of our ancestors.

The idea of a conflict between science and common sense acquires special significance in the context of scientific realism. To see this, suppose that there is indeed a conflict between what science tells us and the dictates of common sense. To use Eddington's example, suppose science tells us that the "substantial" table of common sense is "nearly all empty space" rather than solid matter (Eddington 1933, xii). A realist stance advises us to take what science tells us as the truth about the world. But what if science conflicts with common sense? Both cannot be true. If we accept what science tells us, and science conflicts with common sense, we must reject common sense as mistaken. In this way, a realist stance toward science leads to the overthrow of common sense. Common-sense realism awaits a similar fate.

Some scientific realists are eliminativists about common sense. They will welcome this result. But I think it spells trouble. The trouble is due to the fact that observation provides the evidential basis for science. The empirical evidence on which science is based is observational, deriving either from immediate sense perception or from instrumentation. But observation is part of common sense. Observation by means of the senses is the primary means by which we obtain knowledge of the ordinary objects of common sense. If common sense is to be rejected, observation must be rejected as well. Without common sense, the evidential basis for science disappears. We would have no basis to accept science in the first place. Worse: if we have no basis to accept science, we have no basis to reject common sense. So we must accept common sense instead of science. To reject common sense on the basis of science is self-defeating.

This leads to the following dilemma. On the one hand, we may both accept scientific realism and agree that there is a conflict between science and common sense. If we do this, we remove the evidential basis for science and have no reason to accept science in the first place. On the other hand, we may accept scientific realism and endorse common sense. If we do this,

we must reject the conflict between science and common sense. But if we seek to reconcile science with common sense, we must show that the conflict between science and common sense is an illusion.

The first option requires the scientific realist to develop an account of the evidential basis of science in which ordinary experience plays no role. I see little meaningful prospect for this. I prefer instead to explore the second option. Is there any way to reconcile science and common sense?

In fact, the ordinary conception of common sense is ambiguous. There is the practical skill of the craftsman or technician. There are widely held beliefs found throughout a culture in a historical period. There are the rudimentary sense-based beliefs about our immediate surroundings on which everyday practical actions are based. The ordinary conception of common sense captures all three of these things. To resolve the dilemma we must focus on the rudimentary form of common sense. One may possess common sense without possessing technical skill or craft. The widely held beliefs of a culture may be eliminated with the advance of science. But sense-based beliefs involved in practical interaction with our immediate environment have a more solid basis. The advance of science may lead to the overthrow of the widely held beliefs of particular cultures. But beliefs closely integrated with everyday practical action are less likely to be rejected. I call this form of common sense "basic common sense." For the most part, it survives the advance of science. It provides the evidential basis on which science is founded. Scientific realism and basic common sense go hand in hand.1

Howard Sankey University of Melbourne Parkville, Victoria Australia 3010 [email protected]

References

Eddington, Arthur. 1933. The Nature of the Physical World. Cambridge: Cambridge University Press.
Sankey, Howard. 2014. Scientific Realism and Basic Common Sense. Kairos 10: 11-23.

1 For details, see Sankey (2014).

Tolstoy's Argument: Realism and the History of Science

Author(s): Stathis Psillos

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 68-77.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.28059


Focused Discussion Invited Paper
Tolstoy's Argument: Realism and the History of Science*
Stathis Psillos†

I.

In his intervention in the "bankruptcy of science debate," which raged in Paris at the turn of the twentieth century, Leo Tolstoy was one of the first to use the historical record of science as a weapon against current science.1 Among his various observations in his essay "The Non-Acting," published in French in August 1893, the following stands out:

Lastly, does not each year produce its new scientific discoveries, which after astonishing the boobies of the whole world and bringing fame and fortune to the inventors, are eventually admitted to be ridiculous mistakes even by those who promulgated them? (...) Unless then our century forms an exception (which is a supposition we have no right to make), it needs no great boldness to conclude by analogy that among the kinds of knowledge occupying the attention of our learned men and called science, there must necessarily be some which will be regarded by our descendants much as we now regard the rhetoric of the ancients and the scholasticism of the Middle Ages. (1904, 105)

* Received August 31, 2016. Accepted August 31, 2016.
† Stathis Psillos is Professor of Philosophy of Science and Metaphysics at the University of Athens, Greece and a member of the Rotman Institute of Philosophy at the University of Western Ontario, Canada. He is the author or editor of seven books and of more than 130 papers and reviews in learned journals and edited collections, mainly on scientific realism, causation, explanation and the history of philosophy of science. He is a member of the Academy of Europe and of the International Academy of Philosophy of Science.
1 I tell the philosophical story of the bankruptcy debate in my "Revisiting 'the Bankruptcy of Science' Debate" talk delivered at the Rotman Institute of Philosophy, University of Western Ontario, 24 January 2014; available at https://www.youtube.com/watch?v=zwEYZNKeCpQ


Tolstoy couldn’t put this history-fed pessimism more clearly or elegantly. Given the past record of failures of scientific theories, which emerged triumphantly only to be abandoned at a later stage, we are justified to expect “by analogy” that at least some of the presently accepted theories will have the same fate, unless we are entitled to suppose that currently accepted theories “form an exception,” a supposition that “we have no right to make.” In his preface to the Russian translation of the English essayist Edward Carpenter’s “Modern Science: A Criticism” (published in 1885), Tolstoy made clear that his main concern is with a conception of science that takes it to aim to explain “things near and important to us by things more remote and indifferent” (1904, 220).

II.

Note the sophistication of Tolstoy's argument. It is not inductive. It does not conclude that all current scientific theories will be abandoned; nor that most of them will be abandoned; not even that it is more likely than not that all or most of them will be abandoned. Its conclusion is modest: some of the presently accepted theories will have the fate of those past theories that once dominated the scene but subsequently were abandoned. This conclusion is grounded on an analogy; and it is as strong as the basis for the analogy. Hence, it is subject to the proviso that the current ways to do science "form no exception." Because of its modesty, the argument is very pressing. Unless some kind of privilege is granted to current science or unless there are ways to identify the current culprits—those current theories which will have the fate of past abandoned ones—Tolstoy's point is compelling: we cannot simply assume that as science grows, we get to know more about the world.

III.

Tolstoy was very much among the "people of the world" whom Henri Poincaré had in mind when he said, in his keynote address to the 1900 International Congress of Physics:

The people of the world [les gens du monde] are struck to see how ephemeral scientific theories are. After some years of prosperity, they see them successively abandoned; they see ruins accumulated on ruins; they predict that the theories in vogue today will in a short time succumb in their turn, and they conclude that they are absolutely in vain. This is what they call the bankruptcy of science. (1900, 14)

But he added, "Their scepticism is superficial; they understand neither the aim nor the role of scientific theories; without this they would understand that ruins can still be good for something." Poincaré's considered reply was that the history of science shows that there is continuity at the level of mathematical equations, which he took to express theoretical relations among things. Far from being bankrupt, science offers some knowledge of the world, which is certified by the historical record of diachronically invariant mathematical equations, and hence of theoretical relations. The history of the development of physics, Poincaré stressed, shows that "new relations are continually being discovered between objects which seemed destined to remain forever unconnected" (1900, 23). This kind of response answers Tolstoy's pessimism by presenting a strategy that neutralises his argument. The answer is: look at what is retained when theories change; these invariant elements capture what we can legitimately describe as scientific knowledge of the world.

IV.

For various reasons that had to do with Poincaré's relationism, he thought that the neutralising strategy was bound to vindicate only knowledge of the relational structure of the world and not of the intrinsic properties of things—the "things themselves" (choses elles-mêmes) as he put it (see Psillos 2014). Ludwig Boltzmann did not carry this philosophical baggage. He was committed to atomism and thought that the atomic theory of matter was, by and large, correct (a view that Poincaré came to accept shortly before he died based on the theoretical and experimental work of Jean Perrin on the molecular basis of Brownian motion—see Psillos 2011). It's unlikely that Boltzmann knew about Tolstoy and the "bankruptcy of science debate" in Paris, but he certainly knew of the "bankruptcy of materialism" debate that had been initiated by Wilhelm Ostwald's paper, "Die Überwindung des Wissenschaftlichen Materialismus," which was delivered in 1895 at the annual conference of the Society of German Naturalists and Physicians in Lübeck. Boltzmann was the respondent. Ostwald's paper was quickly translated into French (Ostwald 1895) and English (Ostwald 1896).2 Part of Ostwald's argument against the "mechanics of atoms" was historical: all attempts so far to offer a mechanical account of optical phenomena had failed (1896, 593). But what bothered Boltzmann more was the view of the so-called "phenomenologists" (who, as he noted, included

2 Ostwald’s piece was linked to the “bankruptcy of science” debate by the French physicist Leon “Marcel” Brillouin (1854-1948), who published a pointed reply to Ostwald in the very same year in the journal Revue Générale des Sciences Pures et Appliquées, entitled “Pour la Matiere.” His piece started with the observation, “After the bankruptcy of science, the bankruptcy of atomism!”

the early Max Planck). They, according to Boltzmann, were opposed to the explanation of the visible in terms of the invisible. According to the phenomenologists, the aim of science was to "write down for every group of phenomena the equations by means of which their behaviour could be quantitatively calculated" (Boltzmann 1901b, 249). The theoretical hypotheses from which the equations had been deduced were taken to be the scaffolding that was thrown away after the right equations had been arrived at. For phenomenologists, then, explanatory hypotheses of the visible in terms of the invisible were neither unnecessary nor useless. Rather, they had a heuristic value only: they lead to stable (differential) equations that describe the behaviour of the phenomena as they are experienced. According to Boltzmann, a key motivation for this attitude toward explanatory scientific theories was the "historical principle," viz., that hypotheses are essentially insecure because they tend to be abandoned and replaced by other, "totally different" ones. As he put it:

...frequently opinions which are held in the highest esteem have been supplanted within a very short space of time by totally different theories; nay, even as St. Remigius the heathens, so now they [the phenomenologists] exhorted the theoretical physicists to consign to the flames the idols that but a moment previously they had worshipped. (1901b, 252-253)

Against this “historical principle,” which was a bit more radical than Tolstoy’s, Boltzmann argued that despite the presence of “revolutions” in science, there is enough continuity in theory change to warrant the claim that some “achievements may possibly remain the possession of science for all time, though in a modified and perfected form” (1901b, 253).

V.

So Boltzmann too, independently of Poincaré, argued that the answer to the historical challenge is to look for continuity in theory change. But unlike Poincaré, Boltzmann did not restrict the criterion of invariance to relations only. In fact, if the historical principle is correct at all, it cuts also against the equations of the phenomenologists. For unless these very equations remain invariant through theory change, there should be no warrant for taking them to be accurate descriptions of worldly relations. Boltzmann went on to make a seemingly puzzling point:

Indeed, by the very historical principle in question a definitive victory for the energeticians and phenomenologists would seem to be impossible, since their defeat would be immediately required by the fact of their success. (1901b, 253)

To see what he has in mind, let us stress his further point that the sole way that phenomenologists had to arrive at the desired differential equations was by relying on the basic assumptions of the atomic theory. These equations, Boltzmann noted, are “totally devoid of meaning without the assumption of a very large number of individual entities,” that is, without assuming that matter is not continuous. And he added:

An unthinking use of mathematical symbols only could ever have led us to separate differential equations from atomistic conceptions. As soon as it is clearly seen that the phenomenologists, under the veil of their differential equations, also proceed from atomistic entities, which they are obliged to conceive differently for every group of phenomena and as endowed now with these and now with those complicated properties, the need of a simplified and uniform atomistic doctrine will soon be felt again. (1901b, 252)

In other words, the very construction of the differential equations of the phenomenologists requires commitment to atomistic assumptions. Hence, the phenomenologists are not merely disingenuous when they jettison the atomistic assumptions after the relevant differential equations have been developed. As Boltzmann noted in his seemingly puzzling remark, their move is self-undermining in light of the historical principle that they themselves rely on to motivate abandoning the atomistic hypothesis and to defend the withdrawal to the level of mathematical equations. The reason is simple: their success (viz., that they have the right equations) would lead to their defeat since the very theory that led to this success would fall foul of the historical principle: it would have to be abandoned. That’s exactly what Boltzmann means when he says that the phenomenologists’ “defeat would be immediately required by the fact of their success.”

VI.

The point here is not to classify Boltzmann as a realist—though I think he was. The point is that Boltzmann saw clearly that the historical challenge—Tolstoy's argument, the phenomenologists' principle—is not compelling, though it is illuminating, provided that parts of past and abandoned theories have been shown to be "stable and established" (1901b, 250). But it is significant that he also made clear that though showing continuity in theory change is important and indispensable in rebutting the historical principle, it is folly to think that this continuity is, or can only be, at the level of mathematical equations. More about nature can be known than mathematically-expressed relations among magnitudes.


Poincaré was, at least before Perrin's work on Brownian motion, reluctant to accept that more of nature can be known. In the 1900 address we referred to above, he illustrated his relationist account of invariance in theory change by referring to the accommodation of Fresnel's laws within Maxwell's theory. As is well-known, Fresnel's laws relating the amplitudes of reflected rays vis-à-vis the amplitudes of incident rays at the interface of two media were retained within Maxwell's theory of electromagnetism, although, in this transition, the interpretation of the amplitudes changed dramatically. The fact of retention was for Poincaré evidence that Fresnel's theory was not a mere "practical recipe" for prediction. More specifically, it was evidence that Fresnel got some relational facts about light right, where these facts were expressed in the relevant mathematical equations. Poincaré noted:

These equations express relations, and if the equations remain true it is because these relations retain their reality. They teach us, now as then, that there is such and such a relation between some thing and some other thing; only this something formerly we called motion; we now call it electric current. But these appellations were only images substituted for the real objects which nature will eternally hide from us. (1900, 15)

It is significant to note that for Poincaré the order of dependence between the worldly relations and mathematical equations is from the former to the latter. It is because real relations remain invariant that they are represented by invariant mathematical equations in different theories. The retention of mathematical equations in theory change is certainly a sign for an underlying invariant natural relation. But this should not obscure the fact that the invariance of the equation is explained by the invariance of the natural relation.

VII.

But is it right to say that only relational facts about light got retained in the transition from Fresnel to Maxwell? The Fresnel case was a matter of national pride for the French. Already in 1895, in a reply to Ostwald's "irresponsibly" titled essay, Marie Alfred Cornu, vice president of l'Académie des Sciences, emphatically noted, "Thus, according to Mr. Ostwald, nothing remains of the work of Fresnel, of this admirable theory of the light waves, the influence of which was widespread and fecund for three quarters of a century" (1895, 1031). The readers of the Revue (which was partly a popular journal), Cornu noted, will think that Fresnel's theory was "mediocre" since "it was buried without a noise" by the electromagnetic theory.


Cornu went on to point out that Fresnel’s law was still alive and was retained in Maxwell’s theory because it got right two important facts about light, viz., that optical vibrations are wave propagations and that this propagation is transversal. It is exactly these two features that were captured by Maxwell’s “electric waves” (1895, 1031). This is something that I too argued for (independently of Cornu, I must say) in Psillos (1995). The key point, I think, is that what counts is invariance in theory change, and that invariance in theory change is not merely relational or structural. Differently put, what’s important is that Fresnel correctly identified some properties of light vis-à-vis Maxwell’s theory, and not whether these properties were relational or not.

VIII.

Why, one might wonder, should we take seriously the invariant elements in theory change? What do they tell us about the world and why? Poincaré was fully alive to this problem and I think he had the right answer: there is no God's eye point of view; we work from within the scientific image of the world but we still want to find out to what extent it has latched onto the world. Tolstoy's argument suggests a historical reason to doubt that we have reason to believe that scientific theories have latched onto the world. Invariance in theory change neutralises this reason. But one might still wonder: why should we think that some parts of the theory have latched onto the world in the first place? In his reply to the extreme anti-realism of Édouard Le Roy, Poincaré rightly thought that the correct attitude toward science should be such that theories are taken to be neither dreams nor fictions (1902, 290). The fact that they are not dreams guards against idealism; the fact that they are not fictions guards against fictionalism. The invariance in theory change separates theories from dreams, since the transmissibility of theoretical elements that survive theory change guarantees their inter-subjectivity. Still, the retained parts of theories could fail to represent anything real; they could be fictions! That's precisely why Poincaré went beyond invariance and added another condition to his defence of science. I have called it correspondence (though I do not want it to be confused with the correspondence theory of truth). The idea is that the elements of theories that survive theory change should be such that there is reason to think that they have latched onto reality.3 Though I will not argue for this here in any detail, this condition of correspondence captures an early version of the so-called "no miracles argument," which Poincaré put in terms of how unlikely it is that a theory is radically false and yet yields successful predictions.

3 For a detailed analysis of this, see Psillos (2014, 129-131).

Science makes better than chance predictions ("the scientist is less mistaken than a prophet who should predict at random") and this suggests that science "is not without value as a means for knowledge" (1902, 265). I am not claiming that Poincaré was a scientific realist, though, as I noted earlier, he progressively became more convinced of the reality of unobservable entities such as atoms (see Psillos 2011; also Poincaré 1913, 90). But I am claiming that he rightly took it that a successful defence of science as an enterprise that delivers at least some knowledge of the unobservable world requires two kinds of argument: an historical argument that blocks Tolstoy's argument, and a conceptual one linking the success of theories, especially in yielding novel predictions, to their having in some sense or other latched onto the world.

IX.

As noted already, Poincaré, at least initially, thought that this kind of conceptual argument warrants only the conclusion that empirically successful theories have latched onto the relational structure of the world. We saw Cornu making the point (that I too defended extensively in Psillos 1999, chapter 7) that more is actually warranted. Boltzmann too made this point in a rather elegant way. Unlike Poincaré, Boltzmann didn't live long enough to see the triumph of atomism after Perrin. When he was defending atomism, he was fully aware of the anomalies that the atomic hypothesis faced (e.g., the specific heats anomaly) and he was also aware that molecules must have had an internal structure. So he knew very well that the atomic hypothesis was not "the final word." Yet, as he stressed in "The Relations of Applied Mathematics" (1906, 599), the atomic conception of matter has given "a better explanation of the previously known facts, it inspired new experiments and permitted the prediction of unknown phenomena." Besides, it has provided a unified framework for the study of all forms of matter (cf. 1901a, 73, 76). These might not be conclusive arguments for the reality of atoms. But they are certainly good and cogent arguments. Moreover, they suggest the right kind of defence of a realist view of science: in defending realism, we can and should go beyond the claim of invariance in theory change and offer some positive explanatory reasons for taking at least some current theories as not being radically false.

X. Let’s go back to where we started: Tolstoy’s argument. Can it be turned into a positive argument for realism? Assume that there is no privilege, that is (though this certainly can and should be contested), that current science

is no different than past science when it comes to methods and reliability. Assume, with Poincaré, that "science has already lived long enough for us to be able to find out by asking its history whether the edifices it builds stand the test of time, or whether they are only ephemeral constructions" (1902, 292). It is certainly the case, as both Poincaré and Boltzmann noted, that there is a non-trivial pattern of retention in theory change: elements of past theories have been retained in subsequent theories and are part of the current scientific image. We may conclude then "by analogy," using Tolstoy's words, that "among the kinds of knowledge occupying the attention of our learned men [and women] and called science, there must necessarily be some which will be regarded by our descendants much as we now regard" atomism, Newton's law of gravity, Maxwell's equations, Dalton's laws, and many other components of past theories that are still with us as part and parcel of the scientific image. This is an optimistic lesson coming from the history of theory change in science. It might be modest, but it is robust enough to be realist.

XI.

I will close with two questions that Boltzmann raised:

Is it possible that the conviction will ever arise that certain representations are per se exempt from displacement by simpler and more comprehensive ones, that they are “true”? Or is that perhaps the best conception of the future, to imagine something of which one has absolutely no conception? (1902, 256)

His reply was typically modest: "These are, indeed, interesting questions. One regrets almost that one must pass away before their decision. O arrogant mortal! Thy destiny is to exult in the contemplation of the surging conflict!" Boltzmann took his own life on September 6, 1906. A few years later the "surging conflict" in which he took sides in favour of atomism led to the vindication of atomism. He did not live to see it. Still, his questions remain very interesting.

Stathis Psillos Department of Philosophy and History of Science University of Athens University Campus Athens 15771 Greece [email protected]


References

Boltzmann, Ludwig. 1901a. On the Necessity of Atomic Theories in Physics. The Monist 12(1): 65-79.
Boltzmann, Ludwig. 1901b. The Recent Development of Method in Theoretical Physics. The Monist 11(2): 226-257.
Boltzmann, Ludwig. 1906. The Relations of Applied Mathematics. In International Congress of Arts and Science, vol. 2, ed. H. Rogers, 591-603. London and New York: University Alliance.
Cornu, Alfred. 1895. Quelques Mots de Réponse A "La Déroute de l'Atomisme Contemporain". Revue Générale des Sciences Pures et Appliquées 6(23): 1030-1031.
Ostwald, Wilhelm. 1895. La Déroute de l'Atomisme Contemporain. Revue Générale des Sciences Pures et Appliquées 21: 953-958.
Ostwald, Wilhelm. 1896. The Failure of Scientific Materialism. Popular Science Monthly 48: 589-601.
Poincaré, Henri. 1900. Sur les Rapports de la Physique Expérimentale et de la Physique Mathématique. In Rapports Présentés au Congrès International de Physique, vol. 1, 1-29. Paris: Gauthier-Villars.
Poincaré, Henri. 1902. Sur la Valeur Objective de la Science. Revue de Métaphysique et de Morale 10(3): 263-293.
Psillos, Stathis. 1995. Is Structural Realism the Best of Both Worlds? Dialectica 49(1): 15-46.
Psillos, Stathis. 1999. Scientific Realism: How Science Tracks Truth. London and New York: Routledge.
Psillos, Stathis. 2011. Moving Molecules Above the Scientific Horizon: On Perrin's Case for Realism. Journal for General Philosophy of Science 42(2): 339-363.
Psillos, Stathis. 2014. Conventions and Relations in Poincaré's Philosophy of Science. Methode-Analytic Perspectives 3(4): 98-140.
Tolstoy, Leo. 1904. Essays & Letters, trans. Aylmer Maude. New York: Funk and Wagnalls Company.

A Fond Farewell to "Approximate Truth"?

Author(s): P. Kyle Stanford

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 78-81.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.28057


Focused Discussion Invited Paper
A Fond Farewell to "Approximate Truth"?*
P. Kyle Stanford†

Rather than making a prediction for the future of the scientific realism debate, I would like to propose a substantial revision of the way that debate is presently conducted. Most commonly, the scientific realism debate is characterized as dividing those who do and those who do not think that the striking empirical and practical successes of at least our best scientific theories indicate with high probability that those theories are "approximately true." But I want to suggest that this characterization of the debate has far outlived its usefulness. Not only does it obscure the central differences between two profoundly different types of contemporary scientific realists, but even more importantly it serves to disguise the most substantial points of actual disagreement between these two kinds of realists and those who instead think the historical record of scientific inquiry itself reveals that such realism is untenable in either form.

In earlier iterations of the debate, the language of approximate truth played a crucial role in protecting scientific realism from invidious oversimplification. Scientific realists never meant to claim, of course, that even the most successful contemporary scientific theories were correct in every detail or in every assertion they made. The attribution of merely approximate truth recognized that even our best theories are surely wrong in some of their details and that many further surprises, corrections, and unexpected discoveries still remain in store. But those whom I have recently (Stanford 2015) called classical or "Catastrophist" scientific realists insist no less clearly that the most important, central, and fundamental claims of our best scientific theories simply describe how things actually stand in otherwise inaccessible domains of nature and that we should therefore expect those claims to be validated by and to persist in some recognizable form throughout the course of all further scientific inquiry. That is, such Catastrophist realists deny that the most successful contemporary scientific theories will ultimately share the same fates as successful predecessors like

* Received October 26, 2016. Accepted October 26, 2016. † P. Kyle Stanford is Professor and Chair of the Department of Logic and Philosophy of Science at the University of California, Irvine and the author of Exceeding our Grasp: Science, History, and the Problem of Unconceived Alternatives (2006). He is a frequent contributor to the literature concerning scientific realism and instrumentalism.

classical mechanics, the caloric theory of heat, phlogistic chemistry, the wave theory of light, and many other examples in which central, important, and/or fundamental claims of successful scientific theories were indeed overturned in the course of further inquiry.

Of course, accumulating historical evidence has made it increasingly difficult to argue that there are any fundamental or categorical differences between the sorts of empirical and practical successes achieved by past scientific theories and those of the present day. Accordingly, many latter-day scientific realists have come to reject the Catastrophist realist's exceptionalism concerning contemporary scientific theories, allowing instead that the future of science will be characterized by conceptual revolutions and theoretical transformations just as profound as those that characterize its past, and that many of even the most central and fundamental claims of contemporary theoretical orthodoxy will ultimately be overturned as scientific inquiry continues. But such "Uniformitarian" scientific realists do not see this concession as undermining the claim that those same theories are nonetheless "approximately true." After all, there are many substantial points of continuity, overlap, structural similarity, and other detailed relationships between successful past scientific theories and those of the present day. Uniformitarian realists suggest that there will be similarly substantive continuity and points of connection between our own theories and their historical successors that reflect some systematic connection or relationship between the conceptual apparatus of those theories and how things actually stand in the otherwise inaccessible domains of nature they seek to describe, even if many of their central and fundamental claims are nonetheless ultimately overturned. Indeed, most Uniformitarian realists go on to argue that the historical evidence itself puts us in a position to actually identify the particular parts, claims, or features of our own scientific theories that are responsible for their successes and should therefore be expected to persist throughout the course of further inquiry, whether these are thought to be the theory's claims about the "structure" of nature (Worrall 1989), its "working posits" (Kitcher 1993), its "core causal descriptions" of hypothesized entities (Psillos 1999), or something else altogether.

It would seem, then, that Catastrophist and Uniformitarian scientific realists are united in attributing "approximate truth" to our best scientific theories only because they have sharply diverging substantive conceptions of what such approximate truth requires or involves. To see just how divergent these conceptions of approximate truth really are, notice that Catastrophists contrast the approximate truth they attribute to contemporary theories with the fates of past theories like classical mechanics or the caloric theory of heat, while those very same past successful theories serve as paradigmatic examples of the much weaker Uniformitarian conception of approximate truth.


Framing the debate as one that concerns the approximate truth of our best scientific theories not only disguises these profound differences between Catastrophist and Uniformitarian realists, but also obscures the most important points of genuine disagreement between such realists and those who believe instead that the historical record shows scientific realism to be untenable in either form. Contemporary historicist critics of scientific realism do not argue that there will be no important points of continuity or systematic connections either between contemporary scientific theories and their successors or between those theories and the otherwise inaccessible natural domains they seek to describe. Their claim is simply that we should nonetheless expect many of even the most central and fundamental claims of contemporary scientific theories to be abandoned in the course of further inquiry (just as they were in the case of many successful past theories), and this claim is one that at least Uniformitarian realists seem happy to accept. Such critics diverge from Uniformitarian realists, however, in rejecting the further claim that the available evidence puts us in a position to reliably specify just which parts, claims, or features of our best theories accurately describe the world and should therefore be expected to persist in some recognizable form throughout the course of further scientific inquiry. Such historicist critics of scientific realism do not expect us to be any more successful than scientists of the past have turned out to be in predicting which parts, claims, or features of their own theories accurately capture how things actually stand in the otherwise inaccessible domains of nature they seek to describe or the important points of continuity and systematic relationships that will hold between their own theories and those still to come.

My plea, then, is that we simply stop thinking, talking, and writing about the scientific realism debate as being concerned with the approximate truth of our best scientific theories. What actually divides the various parties in today's scientific realism dispute is not whether they embrace the polysemous verbal formula that our best scientific theories are at least approximately true, but instead their competing commitments regarding the extent, form, and predictability of the theoretical continuity we should expect between contemporary scientific theories and those that will be embraced by future scientists and scientific communities. Catastrophist scientific realists doubt that the future of science will include further profound theoretical and conceptual revolutions of the sort that characterize much of its past. Uniformitarian realists concede that it will, but insist that we are nonetheless in a position to pick out the particular parts, claims, or features of our own scientific theories that are responsible for their successes and will therefore survive indefinitely throughout such further theoretical and conceptual revolutions. And historicist opponents of scientific realism think not only that the future of science will be characterized by such further

transformations and revolutions, but also that we ourselves are simply not in a position to reliably or justifiably predict which parts, claims, or features of our best scientific theories are responsible for their successes and will therefore be recognizably preserved throughout the course of all further inquiry. None of these disagreements seem especially well characterized as concerning whether our most successful contemporary scientific theories are or are not "approximately true."

P. Kyle Stanford Department of Logic and Philosophy of Science University of California, Irvine Irvine, CA 92697 USA [email protected]

References

Kitcher, Philip. 1993. The Advancement of Science. New York: Oxford University Press.
Psillos, Stathis. 1999. Scientific Realism: How Science Tracks Truth. New York: Routledge.
Stanford, P. Kyle. 2015. Catastrophism, Uniformitarianism, and a Realism Debate That Makes a Difference. Philosophy of Science 82(5): 867-878.
Worrall, John. 1989. Structural Realism: The Best of Both Worlds. Dialectica 43(1/2): 99-124.

Why the Realism Debate Matters for Science Policy: The Case of the Human Brain Project

Author(s): Jamie Shaw

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 82-98.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.27760


Peer-Reviewed
Why the Realism Debate Matters for Science Policy: The Case of the Human Brain Project*,†
Jamie Shaw‡

I. Abstract

There has been a great deal of skepticism about the value of the realism/anti-realism debate. More specifically, many have argued that plausible formulations of realism and anti-realism do not differ substantially in any way (Fine 1986; Stein 1989; Blackburn 2002). In this paper, I argue against this trend by demonstrating how a hypothetical resolution of the debate, through deeper engagement with the historical record, has important implications for our criteria of theory pursuit and science policy. I do this by revisiting Arthur Fine's "small handful" argument for realism and by showing how the debate centers on whether continuity (either ontological or structural) should be an indicator for the future fruitfulness of a theory. I then demonstrate how these debates work in practice by considering the case of the Human Brain Project. I close by considering some potential practical considerations for formulating meta-inductions. By doing this, I contribute three insights to the current debate: 1) I demonstrate how the realism/anti-realism debate is a substantive debate, 2) I connect debates about realism/anti-realism to debates about theory choice and pursuit, and 3) I show the practical significance of meta-inductions.

* Received July 3, 2016. Accepted January 25, 2017. † This paper is an abbreviated and updated version of my Master’s thesis, supervised by Sergio Sismondo. Many thanks to him, as well as Josh Mozersky, Tim Juvshik, David Pitcher, Torin Doppelt, and Kyley Ewing for their helpful comments and suggestions on the thesis. I also greatly appreciate the careful feedback I received from Stathis Psillos, Kathleen Okruhlik, Robert DiSalle, and many anonymous referees on later drafts of the paper. I also had extremely fruitful discussions with Kyle Stanford on the broad arc of the paper. ‡ Jamie Shaw is a PhD candidate in the philosophy department of the University of Western Ontario and a resident member of the Rotman Institute of Philosophy. His research is focused on Feyerabend’s conception of theoretical and methodological pluralism, theory pursuit, and structural features of scientific communities.


II. Preamble

David Hume, remarking on the enduring stand-off between compatibilists and determinists, states that if "a controversy [remains] long kept on foot, and remains still undecided, we may presume that there is some ambiguity in the expression, and that the disputants affix different ideas to the terms employed in the controversy" (Hume 1902, 81). This sentiment has been echoed by many in relation to the realism/anti-realism debate. Hasok Chang, for instance, writes that "[d]ebates about realism can be frustrating. It is easy to get a sense that one does not know what all the fuss is about" (Chang 2001, 5). Given this frustration, it can seem quite tempting to give up on the debate altogether.

In this paper, I argue that the resolution of the realism/anti-realism debate has significant implications for our criteria for theory-choice and thus for science policy.1 More specifically, I argue that continuity or discontinuity can be used to inductively justify the pursuit (or non-pursuit) of scientific theories.2 By doing this I hope to contribute three insights to the discussion: I provide a novel defense of the realism/anti-realism debate from skeptical or deflationary arguments, I connect the hypothetical resolution of the debate to discussions about theory choice and theory pursuit, and I demonstrate the practical importance of meta-inductions.

The structure of this paper is as follows. After providing a brief overview of the debate and skeptical arguments which seek to deflate the debate, I motivate our continued engagement in the realism/anti-realism debate by revisiting Arthur Fine's "small handful" argument for realism and relate it to discussions of theory-choice. I then provide a survey of a variety of realist and anti-realist positions and relate them back to the small handful argument. This demonstrates how our adoption of a realist or anti-realist position has normative implications for scientific practices. Following this, I provide the case study of the Human Brain Project in which scientists used meta-inductions to justify their attitudes toward the future prospects of the project. This shows the practical significance of meta-inductions. I conclude by briefly positing a few considerations for future meta-inductions.

III. Some Background on the Debate

Since Putnam (1978) and Boyd (1983), many have conceived of realism as an empirical thesis about the history of science. More specifically, they provide an abductive argument from the success of science to the (approximate) truth of our best current scientific theories. Putnam's original

1 For this paper, "realism" means the explanationist defense of realism.
2 For the remainder of this paper, "continuity" and "discontinuity" are meant as neutral terms (i.e., they do not refer specifically to ontological continuity, structural continuity, etc.).

thesis argues that the history of science is cumulative in that earlier successful scientific theories were retained as limiting cases of our current best theories. Many since have followed suit and argued that, assuming realism is a coherent position (i.e., that abductive reasoning isn't problematically circular),3 it requires the historical continuity between successful theories to be an accurate description of the history of science. Laudan's (1981) pessimistic meta-induction (PMI) and his anti-realist followers (most notably, Kyle Stanford [2003, 2006]) provide historical counterexamples to the continuity thesis to demonstrate that realism is either false or should be qualified.4 Given that the correctness of these theses can be provided by the historical record, if we assume that interpreting the history of science can be done objectively,5 we should have a straightforward, albeit complicated, answer to our inquiry.

A part of what is difficult about assessing the realism/anti-realism debate is the breadth of the subject matter. Realism, in its earliest formulations, was a global thesis about the entire history of science. Similarly, Laudan makes the rather bold claim that "for every highly successful theory in the past of science which we now believe to be a genuinely referring theory, one could find half a dozen once successful theories which we now regard as substantially non-referring" (Laudan 1981, 35). As Lewis notes:

Given that past theories are not automatically successful, the only way to ascertain whether the history of science supports convergent realism or undermines it would be to conduct a thorough survey of past theories, true and false, successful and unsuccessful. A moment's reflection on the difficulties of such a survey perhaps indicates why nothing like it has been attempted. (Lewis 2001, 379)

Because of this, contemporary realists tend to make more qualified claims.6 However, they have qualified this global thesis in two distinct ways.

3 See Psillos (1999, ch. 10) for a defence of abductive reasoning. I also will not consider the objections that realists commit the "base rate fallacy" (Magnus and Callender 2004) or the "turnover fallacy" (Lange 2002). I think both of these objections have been answered definitively by Saatsi (2005).
4 Stanford's use of the PMI is unique in that it combines historical meta-inductions with the problem of unconceived alternatives. I will restrict myself to traditional conceptions of the PMI in this paper.
5 This is an extremely under-discussed assumption of the debate. See Lakatos (1970) for a criticism of it.
6 Some, such as Hardin and Rosenberg (1982) and Leplin (1997), retain an extremely "thin" conception of realism that allows for substantive discontinuities. I will not consider these positions further since I accept Stanford's claim that this strategy dilutes realism into a trivial position (Stanford 2003, 566).

IV. Motivating the Realism/Anti-Realism Debate

Regardless of the historical exegesis in which a philosopher of science engages, many commentators on this debate suspect that there is nothing worth debating about. The most famous instance of this is Arthur Fine's "Natural Ontological Attitude" (NOA), which offers a deflationary alternative to realism and anti-realism: science provides "homely truths" grounded in the evidence available to scientists at the time, and those truths are potentially defeasible (Fine 1986, 153).7 Since anti-realists such as Laudan, Stanford, and van Fraassen do not deny the existence of the external world (i.e., the metaphysical thesis of realism) and accept some weak version of the semantic thesis, which states that theoretical terms refer to something in the world, and since realists do not deny some weak version of empiricism, proponents of both positions can subscribe to NOA.8 What realism seeks to add to NOA, Fine argues, is some superfluous notion of "approximate truth" that exists in addition to instrumental success, which amounts to mere "foot stomping" insistence that theoretical entities are "really real." Since there is no substantive difference between "approximate truth" and "empirical success," realism and anti-realism dissolve into NOA.

Proponents of the NOA are not the only philosophers who seek to dissolve the debate. Kukla, for example, writes, "there are no scientific practice arguments on the table that support either side of the debate" (Kukla 1994, 955). Blackburn claims that realism wavers between being "indisputably true" and "obviously false" depending on its interpretation (Blackburn 2002, 112). Howard Stein claims that there is no notable difference between plausible formulations of instrumentalism and realism (Stein 1989, 47). Finally, Maddy argues that we shouldn't have to "add extra-scientific standards of justification to our repertoire" (Maddy 2001, 47-48). This increasing push for a deflationary take on the debate has left many searching for other means of justifying realism (e.g., Chang 2012), or simply abandoning the attempt to justify realism abductively altogether. I want to resist this deflationary movement and demonstrate how the resolution of the debate has implications for science policy and funding distribution.

7 Further proponents of the NOA include Rouse (1988), Brandon (1997), and Sismondo and Chrisman (2001).
8 I am implicitly using Psillos' (2000) characterization of realism as the joint holding of the semantic, metaphysical, and epistemic theses.


V. The Small Handful Argument and Theory-Choice

An underappreciated consequence of realism that Fine considers is what he calls the "small handful argument." He puts the argument like this:

Suppose that the already existing theories are themselves approximately true descriptions of the domain under consideration. Then surely it is reasonable to restrict one's search for successor theories to those whose ontologies and laws resemble what we already have, especially where what we already have is well-confirmed. And if these earlier theories were approximately true, then so will be such conservative successors. Hence, such successors will be good predictive instruments; that is, they will be successful in their own right. (Fine 1984, 87)

This can be put another way: since realists argue that approximate truth can be inferred from empirical success, they are committed to the view that approximately true theories will be successful in the future. This means that theories explaining some class of phenomena should be continuous with previous theories within that domain (though see the caveats below). Therefore, continuity becomes a sufficient condition for pursuing a theory.9 This provides a crucial difference between approximate truth and empirical success: approximate truth entails the continued success of the theory, whereas mere empirical success does not. If some explanatory defense of realism is true, then, we should add "continuity" to the list of factors considered in choosing between rival theories. Historically, there has been a great deal of emphasis on synchronic virtues such as simplicity, accuracy, explanatory scope and depth, and consistency (see Mackonis 2013). However, diachronic virtues such as fruitfulness are rarely discussed.10 If realism is correct, then continuity provides a key indicator that the theories pursued are likely to continue to be successful. By contraposition, we are also able to infer that we should not pursue theories that contradict or are incommensurable with established and well-confirmed theories. This conflicts with notable views such as that of Feyerabend, who argues that the proliferation of incommensurable alternatives is an essential feature of scientific progress (Feyerabend 1975, 28). However, if Laudan's anti-realism is true, continuity is either irrelevant or negligible for our criteria of theory choice.11 Revolutions, in Kuhn's sense, are allowable or even desirable events in scientific development on the anti-realist account. Our conditional acceptance of realism or anti-realism, then, has important implications for theory choice and pursuit.

It is important to clarify a few things here. First, the resolution of the realism/anti-realism debate does not univocally determine the ontological conditions for pursuit, for two primary reasons. If some kind of realism is correct, ontologies that are discontinuous with previous ontologies (within the same domain) may still be pursued for heuristic reasons (e.g., they aid in the formulation of more acute criticisms,12 they suggest new experimental methods, they provide a useful pedagogical contrast, etc.). Second, the ontological commitments of idealizations may be incompatible but still jointly explanatory. Take, for example, Sandra Mitchell's example of studying social insect colonies: models make incompatible ontological assumptions about genetic diversity and yet can, in specific cases, jointly provide a unified true explanation of individual colonies (Mitchell 2002). Having incompatible ontological commitments within the broader domain of social insect colonization is necessary for true individual explanations. In both of these cases, we pursue ontologically diverse theories within a domain even if realism (of some kind) is correct. Therefore, even if realism is true, ontological pluralism is not refuted. Finally, as will be pointed out later (see n. 20 below), methodological decisions impact theory pursuit as well. In order for a meta-induction to be useful, past instances must be of the same kind as current ones. It is quite possible that two theories are ontologically continuous but new evidence suggests adopting some ontologically incompatible theory. Meta-inductions alone do not determine theory pursuit, but they can, ceteris paribus, be a sufficient (but not necessary) condition for it.

9 I am using the term "pursuing" in Laudan's (1978) sense, in which pursuing a theory means developing its theoretical apparatus, applying it to new phenomena, and so forth. However, Laudan does not go into any detail about what the rational conditions of pursuit are. Whitt (1990, 1992), filling in this blank, argues that mathematical analogies to previous theories are a criterion for theory pursuit.
10 Kuhn (1977) lists fruitfulness as a criterion of theory choice. However, he provides no analysis of it and simply states that it "deserves more emphasis than it has yet [or since] received" (Kuhn 1977, 322). An exception to this oversight is found in the aforementioned literature on theory pursuit.
11 If Laudan is correct that six out of every seven theories are discontinuous across theory change ("I daresay that for every highly successful theory in the past of science which we now believe to be a genuinely referring theory, one could find a half dozen once successful theories which we now regard as substantially non-referring" [Laudan 1981, 35]), then ontological continuity is not even a reliable indicator of theory promise.
12 See Feyerabend (1962) on this issue. See also section 3.2 of Shaw (2016) for a brief list of pragmatic factors that often partially determine funding and an argument for pluralism in funding distribution.

VI. Realisms and Inductive Bases

Before illustrating how meta-inductions are important in practice, I want to make some brief remarks about the different formulations of realism and their resultant claims about continuity. Here, I will survey three prominent positions: entity realism, structural realism, and traditional realism. Entity realism, as outlined and defended by Hacking (1983), Cartwright (1983), and Giere (1999), commits itself to the existence of entities in contrast to the truth of theories (or laws). If we can consistently and reliably intervene and interact with the entities handled in laboratories, we can be realists about those entities. This is because experimentation, Hacking argues, operates largely independently of theoretical considerations. This, when combined with an adherence to a Kripke/Putnam causal theory of reference, supports the thesis that we are referring to the same entities across theory change.13 In contrast to this, structural realism, as defended by Worrall (1989, 1994), Ladyman (1998), and Zahar (2007), holds that realism applies only to the structures of scientific theories, understood as a theory's formalisms.14 Worrall's structural realism grounds its optimism in the case study of the transition between Fresnel's luminiferous ether and Maxwell's electrodynamics. Since his landmark paper, more case studies have been recruited to the structural realist cause.15 Finally, traditional realism, as defended by Psillos (1994, 1996, 1999, 2000), Kitcher (1993), and Chakravartty (2007), provides a more general defense of realism. Here, traditional realists argue that those posits that were essential to the success of a past theory are preserved in subsequent theories.16 I will not articulate or defend any of these positions here, but it is worth pausing to consider the implications that adopting one of them would have.

Clearly the kind of continuity we are seeking will differ depending on which position is adopted. If we are structural realists, we will pursue theories that contain the structures (however conceived) that previous theories had. If we are traditional realists, we will seek theories that are consistent with the essential posits of previous theories. For entity realists, future knowledge of how to manipulate or intervene with entities mustn't conflict with previous experimental knowledge. However, there is more to say about what the proper inductive base of these positions is (i.e., which kinds of historical case studies could confirm or disconfirm the view in question). If they are intended as global views, like Worrall's or Hacking's, then the inductive base is the whole of science.17 However, the inductive base could instead be localized to a particular discipline or domain, in which case it would be qualified appropriately. In addition to settling what the inductive base is, one must determine the threshold that must be passed and the probability method to be employed. Clearly a single counterexample, if there are dozens of genuine examples, should not discredit a view from contention as a tool for theory choice. What the exact threshold is, therefore, must be explicitly stated.18 The historical record will be the ultimate arbiter of which position is correct, under what conditions, and to what extent. Which threshold and which probability method we use depend on the goals and contingencies of a given case. In the following section, I will illustrate the importance of meta-inductions in practice with the case study of the Human Brain Project.

13 See Hacking (1983) and Devitt (1984) for examples of the entity realist thesis, and see Busch (2006) for a counterexample.
14 Structure is a notoriously difficult term to explicate precisely. See Frigg and Votsis (2011) for an overview of the different formulations.
15 See French and Ladyman (2003) for an example of structural continuity from quantum theory. For counterexamples to structural realism, see Newman (2005) and Rivadulla (2010).
16 Psillos makes a distinction between "idle" and "essential" posits of a theory, which is largely analogous to Kitcher's distinction between "presuppositional" and "working" posits.
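To see concretely how an inductive base, a threshold, and a choice of probability method interact, consider the following toy sketch. It is purely illustrative and not anything the paper itself proposes: the encoding of case studies as retention verdicts and all of the numbers are invented, and the Bayesian variant (cf. n. 18) is only one of many possible methods.

    # Toy meta-induction over a hypothetical inductive base.
    # All data below are invented for illustration only.

    # 1 = a past theory whose relevant content (e.g., its structure or
    # essential posits) was retained across theory change; 0 = a discontinuity.
    inductive_base = [1, 1, 0, 1, 1, 1, 0, 1]

    threshold = 0.7  # the explicitly stated threshold the view must pass

    successes = sum(inductive_base)
    n = len(inductive_base)

    # Frequency method: the raw proportion of continuous cases.
    frequency = successes / n

    # One Bayesian method: posterior mean of the continuity rate under a
    # uniform Beta(1, 1) prior (Laplace's rule of succession).
    bayesian = (successes + 1) / (n + 2)

    for label, p in [("frequency", frequency), ("Bayesian", bayesian)]:
        verdict = "passes" if p >= threshold else "fails"
        print(f"{label} estimate {p:.2f} {verdict} the threshold")

On these made-up numbers the two methods happen to agree, but with small inductive bases they can diverge, which is one more reason why the choice of method, like the choice of threshold, must be made explicit.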

VII. Case Study: The Human Brain Project

Meta-inductions are used quite frequently to justify the potential value of scientific proposals (though they are often deployed quite informally). In this section, I provide one such instance in which meta-inductions have actively been employed. By doing this, I will illustrate the importance of meta-inductions in practice and how they may be formulated.

The Human Brain Project (HBP) is a large collaborative project that aims to model approximately 1% of the human brain using supercomputers. After being announced in early 2013, the ten-year program, which incorporates over eighty partner institutions and 256 researchers (Kenall 2014), has accrued approximately 1.2 billion Euros19 in funding and seeks to "gain profound insights into what makes us human, develop new treatments for brain disease and build revolutionary new computing technologies" (Horrigan 2013).

17 It is worth mentioning that realists often restrict their scope to "mature" scientific theories, though they are quite vague about what counts as "mature." Worrall (1989), for example, dismisses phlogiston as an immature theory. However, phlogiston theory had been pursued for over a hundred years and had made numerous predictions, such as the production of water and the creation of metals by heating combinations of phlogiston-producing substances and some calxes, among many others (see Chang 2010). This suggests that phlogiston theory should be identified as a mature theory by any reasonable standard.
18 I will leave it open as to whether this probability is determined by a Bayesian or non-Bayesian methodology.

Additionally, the HBP aims to systematize peer-reviewed articles in a uniform manner in order, as project figurehead Henry Markram has put it, to integrate all knowledge about the brain, from the structures of ion channels in neural cell membranes to the cognitive mechanisms underlying conscious decision-making, into a single computational framework (Waldrop 2012). However, this project has met a great deal of resistance, with over 280 neuroscientists boycotting HBP funds (Curtis 2014). Many neuroscientists have argued that this project will be unable to attain the goals it has set out to achieve and that the mind-brain sciences should not be systematized at this early point in their development. This has resulted in a strong disagreement about whether the HBP will be successful or not.20

Computational modelling of the brain has steadily accumulated the ability to represent and model neuron behaviour effectively.21 Much of Markram's own work, for example, exemplifies successful computational representations of neurotransmitter release probabilities (Senn et al. 2001), generic neural microcircuits (Maass et al. 2002), short-term synaptic plasticity (Richardson et al. 2005), and many other phenomena. More to the point, the HBP explicitly aims to build upon the Blue Brain Project, which successfully modeled approximately thirty thousand neurons of the neocortical column in mice. The challenges the HBP expects to meet involve creating larger and more powerful supercomputers to model approximately four million neurons and one billion synapses.22 While this would require supercomputers able to run 10^18 operations per second, as well as the invention of new virtual technologies (e.g., virtual fMRIs), Markram is confident that this increase in computing power will be possible by 2023, given the consistent trend of computing power doubling every eighteen months (Waldrop 2012).

19 There are also private revenue streams coming from companies such as Hewlett-Packard, GlaxoSmithKline, and Olympus, among others.
20 It is worth noting that the disagreement between scientists is not purely a result of analyzing historical trajectories. Methodological debates play a key role as well. For example, Robert Epstein argues that computational approaches cannot explain behaviour in neonates or pre-linguistic children, or synaptic plasticity (Epstein 2016). Others argue that computational approaches are superior to neurophysiological approaches since they can provide clinically useful biomarkers for neurodegenerative diseases (see Rose 2014). Meta-inductions alone do not justify one approach over the other, but they still provide meaningful arguments for being optimistic or skeptical about the HBP.
21 While there are many formulations of different computational approaches in the mind-brain sciences, the computational approach I am referencing in this paper is the specific one endorsed by Markram.
22 This would be a modest first step towards modeling the human brain, which contains approximately one hundred billion neurons and one hundred trillion synapses.
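As an aside, the timeline in this projection can be checked with back-of-envelope arithmetic. The sketch below is purely illustrative; the baseline of roughly 10^16 operations per second for the fastest supercomputers circa 2013 is my assumption, not a figure given in the paper:

    import math

    # Back-of-envelope check of the doubling projection cited above.
    baseline_year, baseline_ops = 2013, 1e16  # assumed petascale baseline
    target_ops = 1e18                 # exascale figure cited in the text
    doubling_period_years = 1.5       # "doubling every eighteen months"

    doublings = math.log2(target_ops / baseline_ops)
    year = baseline_year + doublings * doubling_period_years
    print(f"{doublings:.1f} doublings, reached around {year:.0f}")
    # -> 6.6 doublings, reached around 2023

Under that assumed baseline, the extrapolated trend lands almost exactly on Markram's stated date of 2023.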

Markram is optimistic in two respects. First, he is optimistic that computing power will continue to increase as it has done historically (Markram 2006). Here, the relevant induction tracks not ontological or structural continuity but the growth of technological capacities. Second, he is optimistic that computational theories of the brain will be successful because they have been so in the past. Markram's strategy of reverse-engineering (i.e., using supercomputers to run statistical simulations to predict how neuron clusters will combine, and testing these simulations against "real data" from biology) involves various assumptions about the nature of the brain.23 Markram uses past successes of his version of computationalism, including his own aforementioned studies, the Blue Brain wet labs, Steve Furber's work on asynchronous circuit designs (Furber and Sparsø 2002), and Karlheinz Meier's work on the physical grounding of information processing, to justify a belief in its continued success on larger scales.24 This claim can be reconstructed in a straightforward manner: because the ontology presupposed by computationalism has been empirically successful, it must be an approximately true representation of the underlying principles governing neural morphology and architecture. This historical series of successes suggests to Markram, and to proponents of the HBP, that "there is no fundamental obstacle to modelling the brain [computationally] and it is therefore likely that we will have detailed models of mammalian brains, including that of man, in the near future" (Markram 2006, 159).

Many neuroscientists, on the other hand, do not share in this enthusiasm. Rodney Douglas, co-director of the Institute of Neuroinformatics, argues, as recounted by Waldrop (2012), that neuroscience is not ready to be unified under one set of paradigmatic assumptions. Douglas argues that contemporary neuroscience requires variance—huge amounts of disagreement and multiple competing accounts of particular domains. At least tentatively, the mind-brain sciences should proceed in a pluralistic fashion, with many different ontologies, since there is not enough knowledge to justify one paradigm over others.

23 See Maass, Markram, and Natschläger (2002) for a more detailed description of this.
24 Markram writes, for instance, "We know that [computational] rules exist because we discovered some of them while laying the groundwork for the HBP" (Markram 2012, 52). He also implies that the fact that computational models of individual neurons used to be three-year PhD projects, while larger brain circuits can now be modeled far more easily, suggests that "[i]t was clear that more ambitious goals would soon become achievable" (Markram 2012, 52).

For example, brain imaging studies require the assumption that brain areas (or clusters of voxels) are real and non-reducible to neurons and synapses; modelling in cognitive science assumes that neural networks similarly exist; and even within computational approaches there are disagreements between connectionist and classical notions of information (i.e., the latter sees information as discrete whereas the former sees information as emergent from node activations), and so on. Put another way, many incompatible ontologies have provided explanations and predictions, and have aided in developing useful technologies, and there is no reason (at this point in history) to expect one ontology to ultimately be "correct." Given the immaturity of the mind-brain sciences, Douglas argues that "we need as many different people expressing as many different ideas as possible" (Waldrop 2012), and that this becomes increasingly difficult when funds and intellectual effort are diverted away from methodologically and ontologically isolated efforts to explore sporadic and remote corners of the brain. Douglas's claims are implicitly grounded in the history of the contemporary mind-brain sciences. Many resist Markram's reductionist approach to understanding the brain (i.e., beginning at ion channels and having the gaps in knowledge filled in by incoming data from the HBP) and instead use more approximate and abstract models of basic biological functions to explore higher-level brain functions (Waldrop 2012).25 Realism, therefore, is unwarranted given this history of ontological fragmentation. This makes attempts at constructing a unified theory of the brain premature for the time being.26

Both computer scientists and neuroscientists thus use the history of their own subfields to ground their realist optimism and anti-realist pessimism, respectively. The growth of computer modeling has been cumulative, and is therefore likely to continue being cumulative,27 while the growth of knowledge in neuroscience has been ontologically non-convergent. While I do not want to side with either side of the debate, I will conclude by remarking on these uses of historical inductions to justify attitudes towards the potential fruitfulness of the HBP.

25 This is not the only method. Some computational neuroscientists use models of individual neurons to explain higher-level brain functions as well (Waldrop 2012).
26 It is important to underline that Douglas (and others) are not criticising the HBP for the mere fact that it is unificationist, but because it is unificationist in a way that diverts resources from, and downsizes, alternative approaches. Furthermore, while they are not cited by the detractors of the HBP, there may be further inductive reasons to be skeptical of the HBP's promise to find the physiological basis of some brain diseases (specifically Alzheimer's), given the failures of many previous attempts (e.g., understanding illnesses through disruption in blood circulation, focal sepsis, overstimulation of nerves, and genetic markers). As Owen Whooley puts it, "the history of the search for ... the underlying mechanisms of mental disorders is one of cyclical patterns of enthusiastic optimism and fatalistic pessimism, but ultimately a story of failure" (Whooley 2014, 14). See also Grob 1998.
27 Though it has been cumulative, it has not been convergent, since the growth of knowledge in computer modeling has not "withstood" any substantial theory change.


VIII. Reflections on Meta-Inductions and Science Policy

The computer scientists' meta-induction is optimistic: it expects the HBP to be successful, given the cumulative growth of knowledge in computational models of the brain. The neuroscientific meta-induction, on the other hand, is pessimistic, given the ontologically fragmented growth of knowledge in neuroscience. What is interesting to note here is that different disciplines studying the same domain (i.e., human and mammalian brains) can exhibit both realist and anti-realist trajectories. If we only analyzed the growth of knowledge in computational approaches, we would be realists about their ontology. This would make the pursuit of conflicting approaches in the mind/brain sciences, which are currently making novel predictions and are successful in their own right, unwarranted. This is clearly problematic. A lesson, then, for general meta-inductions is that disciplines studying the same domain must not have differing ontological or structural commitments if we are to be fully confident in a theory's potential success.

Another thing to note is what kind of realism/anti-realism is relevant to the case at hand. Entity realists and traditional realists can make sense of the HBP case in their own ways. However, structural realists would have little to say about many of the mind/brain studies, which make fewer structural commitments than mathematical physics.28 There are simply very few instances of continuity that the structural realist can point to, making a structural realist analysis of the HBP case relatively uninformative.29 Unless one wants to state that current neuroscience is too "immature" to warrant a structural realist analysis, we may generalize this conclusion to state that certain theses are going to be more applicable in particular theoretical contexts.

Finally, both meta-inductions use very short periods of time: the computer science meta-induction begins after the late 1990s, when substantive computer modelling of the brain began, and the neuroscientists refer back no earlier than the 1960s, when many contemporary neuroscientific institutions began.30

28 In low-level neurobiology, there are many chemical and electrical laws, but these are, strictly speaking, laws of chemistry and electrophysics. There are fewer mathematical formalisms used in cognitive science or psychology.
29 It should be noted that this depends on what is meant by "structure." Here, I am using Worrall's notion of structure as a set of mathematical formalisms that make no reference to the kinds of entities that constitute the relata. However, some argue that all properties are inherently relational, making structural realism viable again for the HBP case (the so-called "upward path"). I am unconvinced that this watered-down structural realism is not simply a repetition of traditional realism, but I will not argue this here (see Psillos 2009 for an argument to this effect).

Neither position refers back to Pascal's calculator or to trepanning in ancient Egypt for support. If the timeframe of either meta-induction had been expanded to this extent, it would have diluted the force of both arguments. Since the HBP is a ten-year project, it may be pointless to speculate on what the state of the mind/brain sciences will be, say, hundreds of years from now. Perhaps, then, meta-inductions should be limited in historical scope, depending on what kind of projections we are attempting to make. I don't think any of these suggestions are definitive. But they do suggest different ways that meta-inductions could be formulated. More practice-oriented analyses of meta-inductions can reveal greater details about what the history of science can contribute towards science policy.

30 For example, the International Brain Research Organization was founded in 1960, the International Society for Neurochemistry in 1963, and the European Brain and Behaviour Society in 1968.

IX. Concluding Remarks

While the division between computer scientists and neuroscientists is not a strict or rigid divide between realism and anti-realism, the two sides make polar-opposite predictions about the fruitfulness of the HBP. A hypothetical resolution, or even an approximate resolution, of the realism/anti-realism debate would have allowed for a more informed policy decision and could have improved those meta-inductions used by computer scientists and neuroscientists. I believe that this pragmatic value, in combination with our advanced knowledge about theory choice and theory pursuit, justifies our continued engagement with the historical record for meta-inductive purposes.

Jamie Shaw
The University of Western Ontario
1151 Richmond Street, London, ON N6A 3K7
[email protected]

References

Blackburn, Simon. 2002. Realism: Deconstructing the Debate. Ratio 15(2): 111-133.
Boyd, Richard. 1983. On the Current Status of the Issue of Scientific Realism. In Methodology, Epistemology, and Philosophy of Science: Essays in Honour of Wolfgang Stegmüller on the Occasion of his 60th Birthday, eds. W. Essler, C. Hempel, and H. Putnam, 45-90. New York: Springer.
Brandon, E. P. 1997. California Unnatural: On Fine's Natural Ontological Attitude. The Philosophical Quarterly 47(187): 232-235.


Busch, J. 2006. Entity Realism Meets the Pessimistic Meta-Induction Argument: The World is not Enough. SATS 7(2): 106-126.
Cartwright, Nancy. 1983. How the Laws of Physics Lie. Oxford: Oxford University Press.
Chakravartty, Anjan. 2007. A Metaphysics for Scientific Realism: Knowing the Unobservable. Cambridge: Cambridge University Press.
Chakravartty, Anjan. 2011. Scientific Realism. The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta. https://plato.stanford.edu/archives/win2016/entries/scientific-realism/. Accessed July 2016.
Chang, Hasok. 2001. How To Take Realism Beyond Foot-Stomping. Philosophy 76(1): 5-30.
Chang, Hasok. 2010. The Hidden History of Phlogiston. HYLE—International Journal for Philosophy of Chemistry 16(2): 47-79.
Chang, Hasok. 2012. Is Water H2O? Evidence, Realism and Pluralism. New York: Springer.
Curtis, Malcolm. 2014. Scientists Threaten Boycott of Brain Project. The Local. July 8. Accessed July 2016. http://www.thelocal.ch/20140708/scientists-threaten-boycott-of-brain-project.
Devitt, Michael. 1984. Realism and Truth. Princeton: Princeton University Press.
Epstein, Robert. 2016. The Empty Brain: Your Brain Does Not Process Information, Retrieve Knowledge or Store Memories. In Short: Your Brain Is Not a Computer. Aeon. July 8. Accessed July 2016. https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer.
Feyerabend, Paul. 1962. Explanation, Reduction and Empiricism. In Scientific Explanation, Space and Time, Minnesota Studies in the Philosophy of Science, vol. 3, eds. Herbert Feigl and Grover Maxwell, 28-97. Minneapolis: University of Minnesota Press.
Feyerabend, Paul. 1975. Against Method: Outline of an Anarchistic Theory of Knowledge. London: Verso Books.
Fine, Arthur. 1984. The Natural Ontological Attitude. In Scientific Realism, ed. Jarrett Leplin, 83-107. Berkeley: University of California Press.
Fine, Arthur. 1986. Unnatural Attitudes: Realist and Instrumentalist Attachments to Science. Mind 95(378): 149-179.
French, S., and J. Ladyman. 2003. Remodelling Structural Realism: Quantum Physics and the Metaphysics of Structure. Synthese 136(1): 31-56.
Frigg, R., and I. Votsis. 2011. Everything You Always Wanted to Know About Structural Realism but Were Afraid to Ask. European Journal for Philosophy of Science 1(2): 227-276.
Furber, S., and J. Sparsø. 2002. Principles of Asynchronous Circuit Design: A Systems Perspective. Boston: Kluwer Academic Publishers.
Giere, Ronald. 1999. Science Without Laws. Chicago: University of Chicago Press.
Grob, Gerald. 1998. Psychiatry's Holy Grail: The Search for Mechanisms of Mental Diseases. Bulletin of the History of Medicine 72: 189-219.


Hacking, Ian. 1983. Representing and Intervening. Cambridge: Cambridge University Press.
Hardin, C., and A. Rosenberg. 1982. In Defense of Convergent Realism. Philosophy of Science 49(4): 604-615.
Honigsbaum, Mark. 2013. Human Brain Project: Henry Markram Plans to Spend €1bn Building a Perfect Model of the Human Brain. The Observer. October 13. Accessed 2016. http://www.theguardian.com/science/2013/oct/15/human-brain-project-henry-markram.
Horrigan, David. 2013. The Human Brain Project Official Webpage. Accessed 2016. https://www.humanbrainproject.eu/hbp-summit-2013-overview.
Hume, David. 1902. An Enquiry Concerning Human Understanding, ed. L. Selby-Bigge. Oxford: Clarendon Press.
Kenall, Amye. 2014. Building the Brain: The Human Brain Project and the New Supercomputer. BioMed Central. July 8. Accessed 2016. http://blogs.biomedcentral.com/bmcblog/2014/07/08/building-the-brain-the-human-brain-project-and-the-new-supercomputer/.
Kitcher, Philip. 1993. The Advancement of Science. New York: Oxford University Press.
Kuhn, Thomas. 1977. The Essential Tension: Selected Studies in Scientific Tradition and Change. Chicago: University of Chicago Press.
Kukla, André. 1994. Scientific Realism, Scientific Practice, and the Natural Ontological Attitude. British Journal for the Philosophy of Science 45(4): 955-975.
Ladyman, James. 1998. What is Structural Realism? Studies in History and Philosophy of Science 29(3): 409-424.
Lakatos, Imre. 1970. History of Science and its Rational Reconstructions. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1970: 91-136.
Lange, Marc. 2002. Baseball, Pessimistic Inductions and the Turnover Fallacy. Analysis 62(4): 281-285.
Laudan, Larry. 1978. Progress and its Problems: Towards a Theory of Scientific Growth. Berkeley: University of California Press.
Laudan, Larry. 1981. A Confutation of Convergent Realism. Philosophy of Science 48(1): 19-49.
Lewis, Peter. 2001. Why the Pessimistic Induction is a Fallacy. Synthese 129(3): 371-380.
Maass, W., H. Markram, and T. Natschläger. 2002. Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations. Neural Computation 14(11): 2531-2560.
Maass, W., T. Natschläger, and H. Markram. 2002. A Model for Real-Time Computation in Generic Neural Microcircuits. NIPS 14: 15-24.
Mackonis, Adolfas. 2013. Inference to the Best Explanation, Coherence and Other Explanatory Virtues. Synthese 190(6): 975-995.


Maddy, Penelope. 2001. Naturalism: Friends and Foes. Noûs 35(s15): 37-67.
Magnus, P. D., and C. Callender. 2004. Realist Ennui and the Base Rate Fallacy. Philosophy of Science 71(3): 320-338.
Markram, Henry. 2006. The Blue Brain Project. Nature Reviews Neuroscience 7(2): 153-160.
Markram, Henry. 2012. The Human Brain Project. Scientific American 306(6): 50-55.
Mitchell, Sandra. 2002. Integrative Pluralism. Biology and Philosophy 17(1): 55-70.
Newman, Mark. 2005. Realism as an Answer to the Pessimistic Meta-Induction. Philosophy of Science 72(5): 1373-1384.
Nola, Robert. 2008. The Optimistic Meta-Induction and Ontological Continuity: The Case of the Electron. In Rethinking Scientific Change and Theory Comparison, eds. L. Soler, H. Sankey, and P. Hoyningen-Huene, 159-202. Dordrecht: Springer.
Psillos, Stathis. 1994. A Philosophical Study of the Transition from the Caloric Theory of Heat to Thermodynamics. Studies in History and Philosophy of Science 25(2): 159-190.
Psillos, Stathis. 1996. Scientific Realism and the "Pessimistic Induction". Philosophy of Science 63: 306-314.
Psillos, Stathis. 1999. Scientific Realism: How Science Tracks Truth. London: Routledge.
Psillos, Stathis. 2000. The Present State of the Scientific Realism Debate. British Journal for the Philosophy of Science 51(4): 705-728.
Psillos, Stathis. 2009. The Structure, the Whole Structure and Nothing But the Structure? In Knowing the Structure of Nature: Essays on Realism and Explanation, 136-146. London: Palgrave Macmillan.
Putnam, Hilary. 1978. Meaning and the Moral Sciences. Boston: Routledge & Kegan Paul.
Richardson, M., O. Melamed, G. Silberberg, W. Gerstner, and H. Markram. 2005. Short-Term Synaptic Plasticity Orchestrates the Response of Pyramidal Cells and Interneurons to Population Bursts. Journal of Computational Neuroscience 18(3): 323-331.
Rivadulla, Andrés. 2010. Two Dogmas of Structural Realism: A Confirmation of a Philosophical Death Foretold. Crítica: Revista Hispanoamericana de Filosofía 42(124): 3-29.
Rose, Nikolas. 2014. The Human Brain Project: Social and Ethical Challenges. Neuron 82(6): 1212-1215.
Rouse, Joseph. 1988. Arguing for the Natural Ontological Attitude. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1988(1): 294-301.
Saatsi, Juha. 2005. On the Pessimistic Induction and Two Fallacies. Philosophy of Science 72(5): 1088-1098.
Senn, W., H. Markram, and M. Tsodyks. 2001. An Algorithm for Modifying Neurotransmitter Release Probability Based on Pre- and Post-Synaptic Spike Timing. Neural Computation 13(1): 35-67.


Shaw, Jamie. 2016. Pluralism as a Means of Organizing Research: How Funding Structures Can Combat Disciplinary Fragmentation. Western Journal of Graduate Research 13: 11-17.
Sismondo, S., and N. Chrisman. 2001. Deflationary Metaphysics and the Natures of Maps. Philosophy of Science 68(3): S38-S49.
Stanford, Kyle. 2003. Pyrrhic Victories for Scientific Realism. The Journal of Philosophy 100(11): 553-572.
Stanford, Kyle. 2006. Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives. Oxford: Oxford University Press.
Stein, Howard. 1989. Yes, but... Some Skeptical Remarks on Realism and Anti-Realism. Dialectica 43(1-2): 47-65.
Waldrop, Mitchell. 2012. Computer Modelling: Brain In A Box. Nature. February 22. Accessed 2016. http://www.nature.com/news/computer-modelling-brain-in-a-box-1.10066.
Whitt, Laura. 1990. Theory Pursuit: Between Discovery and Acceptance. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1990(1): 467-483.
Whitt, Laura. 1992. Indices of Theory Promise. Philosophy of Science 59(4): 612-634.
Whooley, Owen. 2014. Nosological Reflections: The Failure of DSM-5, the Emergence of RDoC, and the Decontextualization of Mental Distress. Society and Mental Health 1-19.
Worrall, John. 1989. Structural Realism: The Best of Both Worlds? Dialectica 43(1-2): 99-124.
Zahar, Elie. 2007. Why Science Needs Metaphysics: A Plea for Structural Realism. Chicago: Open Court Publishing Company.

Scientific Realism Again

Author(s): James Ladyman

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 99-107.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.


Focused Discussion Invited Paper

Scientific Realism Again*

James Ladyman†

The debate about scientific realism has been pronounced dead many times only to come back to life in new form. The success of atomism in early-twentieth-century physics and chemistry was also a triumph for scientific realism, but quantum physics then revitalized empiricism and instrumentalism. The subsequent restoration of scientific realism in the work of Herbert Feigl and others was followed by Thomas Kuhn's ideas of paradigms and revolutions, which inspired many forms of antirealism about science. Many philosophers of science subsequently produced sophisticated defences of scientific realism that took account of the Duhem-Quine problem and the theory-ladenness of observation. Hilary Putnam coined the no-miracles argument, and renewed ideas of natural kinds. Scientific realism became the basis for naturalistic theories in epistemology, and in the philosophy of mind. By the time of Bas van Fraassen's seminal The Scientific Image (1980), scientific realism was not only dominant again in philosophy of science, but also foundational to the work of many other analytic philosophers (including in the growing community of metaphysicians, notably David Armstrong and David Lewis).

Constructive empiricism is a form of antirealism that does not deny scientific knowledge, progress, or the rationality of theory choice in general. For van Fraassen, the problem with scientific realism is not its commitment to the cumulativity and reliability of science, but rather that it adds metaphysics to empirical knowledge for no empirical gain. He argues that speculation about the unobservable world is of purely pragmatic value, to be embraced for its usefulness and dropped when it becomes idle. If theoretical science is regarded as modelling systems, rather than writing the book of the world, then believing in hidden causes and mechanisms beyond the phenomena, and regarding some empirical regularities and laws of nature as natural necessities of some kind, are not required. Van Fraassen's subsequent work made clear that he regards scientific realism as a rational position for those inclined to it, and seeks only to defend the reasonableness of agnosticism about unobservables. Strictly speaking, constructive empiricism is an axiology, not an epistemology, since it is first and foremost the view that the aim of science is to arrive at empirically adequate theories.

* Received August 3, 2017. Accepted August 3, 2017.
† James Ladyman is Professor of Philosophy at the University of Bristol.

Debate continues about this axiology (Churchland and Hooker 1985), about the meta-epistemological voluntarism within which it is framed (van Fraassen 1989; 2002), and about the theory of scientific representation developed by van Fraassen (2008).

The present paper concerns how scientific realism is formulated and defended. It is argued that van Fraassen is fundamentally right that scientific realism requires metaphysics in general, and modality in particular. This is because of several relationships that raise problems for the ontology of scientific realism, namely those between: scientific realism and common sense realism; past and current theories; the sciences of different scales; and the ontologies of the special sciences and fundamental physics. These problems are related. It is argued below that ontic structural realism, in the form of the real-patterns account of ontology, offers a unified solution to them all (or at least that it is required to do so, if it is to make good on the promise of naturalised metaphysics).1

First there is more to note about the recent history of the realism debate. Among the replies to van Fraassen's The Scientific Image in the collection Images of Science (1985), several themes recur:

(1) Realists object to any epistemological or ontological significance being given to the line between the observable and the unobservable.

(2) Realists argue that the fallible methods, in particular inference to the best explanation, used in everyday life to arrive at belief in observables that have not been observed, and to make inductive generalisations, are on a par with those used to arrive at beliefs in science involving unobservables.

(3) Realists think that the burden of proof is on the antirealist to show that there is a reason to doubt the first-order methods of science that recommend belief in entities such as black holes and electrons.

In respect of (3) above, note that while the no-miracles argument was dubbed the "ultimate" argument for scientific realism by Alan Musgrave (1988), scientific realists such as Alexander Bird (2018) regard it as unnecessary, given the reliability of first-order inference to the best explanation in science. Whether or not this is granted, there is no scientific realism debate at all if everything is left to the first-order questions that scientists debate, such as whether a particular particle or species exists. The idea that there is a wholesale versus a retail way of thinking about scientific realism (Magnus and Callender 2004) is unhelpful, because the issue is essentially a global or second-order one, not a local or first-order one, in the first place. If one is content to let debates within the sciences decide what unobservable entities exist, and to take their apparent ontological commitments at face value, then one is a scientific realist.

1 This paper builds on, and defends, the view of Ladyman and Ross (2007), and Ladyman and Ross (2013). See also Ladyman (2017).

A serious problem for scientific realism that is not addressed by (1) and (2) above, and that provides the anti-realist with a response to (3), is pressed in Larry Laudan's work (1977, 126; 1984), which argues from the history of theory change in science, and makes an empirical challenge to scientific realism that echoes Henri Poincaré (1952, 160), Ernst Mach (1911, 17), and Hilary Putnam (1978, 25). It is this argument, namely the "pessimistic meta-induction," that motivates John Worrall's structural realism (1989). In its crude form it says that since most, if not all, past theories have turned out to be false, we should expect our current theories to turn out to be false. However, Laudan's ultimate argument from theory change against scientific realism is not really an induction of any kind, but a reductio. No attempt at producing a large inductive base needs to be made; rather, one or two cases are argued to be counterexamples to the realist thesis that novel predictive success can only be explained by successful reference of key theoretical terms (see, for example, Ladyman 2002 and Psillos 1999).

There is much discussion about theories of reference in the literature; however, whether or not some particular theoretical term refers is a red herring for realists attempting to account for many cases of theory change. The term "ether" is now regarded as not referring by most commentators on the scientific realism debate, but there is no particular reason why it was abandoned by science, as it could have been retained to describe the electromagnetic field. After all, the term "atom" has been retained to refer to divisible entities that are not purely or even largely mechanical in their interactions (see Stein 1989). Arguably the ether has much more in common with the field than atomic physics does with the atoms of atomism. Regardless of which terms refer, the question is whether it is plausible to say that the metaphysics and ontology of science is true or approximately true. The problem is that there are many cases in which successive theories use the same terms, such as "space," "time," "gravity," "mass," and "particle," but the claims made about what there is in the world, and especially about the metaphysics of those theories, are very different. Nonetheless, as Heinz Post (1971) observed, the well-confirmed laws of past theories are limiting cases of the laws of successor theories (his General Correspondence Principle). Hence, it is not just the empirical content or phenomenological laws of past theories that are retained, but also the law-like relations they posit. The laws take mathematical form, and there are special cases, such as that of Fresnel's equations, where the very same equations are reinterpreted in terms of different entities. However, structural realism does not require that all mathematical (or any other kind of) structure be retained on theory change. If it did, it would be refuted by the fact that the mathematics of Newton is different from that of Einstein.

The point is that while the ontological status of the relevant entities may be very different in the different theories, the relationship between the structures of the theories is readily represented, and indeed in many cases it can be studied in depth by investigation of the relevant mathematical structures. Ladyman and Ross (2007) argue that mathematical representation is ineliminable in much of science, and take this to be key to the formulation and defence of ontic structural realism. Note, however, that this does not mean that this kind of structural realism only applies to mathematicized theories. Even though there is no such thing as phlogiston, the tables of affinity and antipathy of phlogistic chemistry express real patterns that we now express in terms of reducing and oxidizing power (see Ladyman 2011).

In many cases the strongly empirically successful theories of the past are also the theories presently used in the relevant domain. For example, Newtonian mechanics, ray and wave optics, classical electrodynamics, and classical statistical mechanics are all still used, although of course in every case there is a superior fundamental theory. The ontology of such theories remains "effective," in the sense that the entities it posits are part of empirically successful descriptions and models. For example, in the BCS theory of superconductivity, the material in question is treated as a lattice of sites at which there may or may not be pairs of electrons, and interactions between the latter are mediated by "phonons," which are treated as if they were genuine particles within the model. Phonons correspond to vibrations in the system, and they are not taken as fundamental, but they are real enough for scientific practice in the relevant domain. Similarly, in particle physics there are effective theories, with their effective ontologies, that only apply at certain emergent scales of description. (See Ladyman [2015] for more on the idea that particles are effective individuals in physics.) Indeed, the supposedly abandoned ontology of classical physics, in the form of point particles obeying Newtonian dynamics, is still a major object of study in current physics. Entities that are now regarded as emergent are also often the entities of past theories, and are always merely effective with respect to more fundamental descriptions. For example, solids and liquids, as well as magnetic and electric fields, are part of the effective ontology of condensed matter physics, even though none of these things is part of the ontology of quantum field theory. In this sense, much of physics is like special science. The special sciences in general deal with a plethora of emergent entities, properties, and processes. Scientific realism as such has no account of the relationship between ontologies, nor of the relationship between causation and law, at different levels and in different sciences, and this is problematic for its formulation and defence.


Eddington's famous two tables example poses the problem of reconciling the ontologies of atomic physics and everyday material objects. Arguments for realism based on causation and intervention in theories dealing with emergent phenomena only establish effective ontology at best. Many philosophers, scientific realists among them, argue for the elimination of all such entities in favour of the fundamental physical stuff. It is often argued that such emergent entities lack any genuine causal powers, these all necessarily residing at the fundamental physical level, and so are merely devices to be adopted for pragmatic reasons. Hence, the antirealist can argue that the scientific realist must grant that taking a pragmatic but non-realist attitude to reference to theoretical entities is reasonable, because many scientific realists take that attitude themselves to large parts of science. (This argument is also made by Paul Teller [forthcoming].) More generally, scientific realism can hardly be defended as an extension of everyday ontology and reasoning to the unobservable, as in (1) and (2) above, if everyday objects are supposed to be eliminated in favour of fundamental physical entities. Furthermore, scientific realism cannot be defended by appeal to the first-order practices of science, as in (3) above, if the latter can be taken as delivering ontological commitments that are ultimately to be repudiated as mere epistemologically useful fictions. It is ironic that scientific realism taken to extremes, in the form of the view that only the fundamental physical stuff is real, is now the major form of instrumentalism about much of the ontology of the special sciences.

There is a conflict between fundamentalism and realism about the ontologies of the special sciences. However, it is not necessary to advocate disunity or a patchwork view to reject fundamentalism and reductionism, and to maintain realism at the level of the special sciences. Ladyman and Ross (2007) defend the latter's "rainforest realism" by conjoining it with ontic structural realism in the philosophy of physics, adopting scientific realism about effective ontology, and modifying the theory of real patterns to provide a criterion of existence. A real pattern is, very roughly, something that makes for a simplified description relative to some background ontology. (The real-patterns account thus unifies entity realism with structural realism.) For example, a wave on the beach is a real pattern to a surfer, or a lifeguard, because it is taken as the basis for prediction and explanation. Waves are very ephemeral real patterns, like currents and tides, but rocks and sandbanks are more durable. In physics, quasi-particles like phonons are taken to exist when their half-life is effectively infinite relative to the scale of the interactions that are being studied. In general, Ladyman and Ross argue that ontology is scale-relative, in the sense that different energy levels and regimes, as well as different length and time scales, feature different emergent structures of causation and law. Real patterns are entities of whatever ontological category that feature (non-redundantly) in projectible regularities. (David Wallace [2015] also argues for the real-patterns account of effective emergent entities such as quasi-particles.)
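The gloss of a real pattern as a "simplified description relative to some background ontology" can be given a minimal computational illustration. The sketch below is mine, not Ladyman and Ross's own formalism; it follows Dennett's original compressibility gloss on real patterns, on which their account builds, and the data are invented solely for the contrast:

    import random
    import zlib

    # A projectible regularity admits a drastically shortened description;
    # noise does not.
    random.seed(0)
    patterned = b"day-night " * 400
    noise = bytes(random.randrange(256) for _ in range(len(patterned)))

    for label, data in [("patterned", patterned), ("noise", noise)]:
        ratio = len(zlib.compress(data)) / len(data)
        print(f"{label}: compresses to {ratio:.0%} of its original size")

On this toy criterion, only the compressible stream, the one that supports prediction and simplified description, is a candidate for ontological standing.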

The above account could be interpreted as an account of the pragmatics of an ultimately epistemological kind of emergence, but if there is not, or might not be, a fundamental level of reality, nor ultimate individuals of which everything else is made, then all real patterns are on a par. The real-patterns account also explains why we do not abandon old ontologies in much of scientific practice: the strongly empirically successful theories of the past had many laws that were correct in their domains of applicability. Since everyday objects and their properties are projectible in various ways, the real-patterns account is also applicable to common-sense realism, and so solves the problem of Eddington's two tables. The table is a real pattern at macroscopic scales, but at the microscopic scale it dissolves into molecules that are bound together by electromagnetic potentials. There is a rough correspondence between the everyday object and the bound state of the particles that compose it, but there is not even token identity between them, since they have different modal properties—the table would exist even if some of the relevant particles did not, but the bound state would be different—and because they have different persistence conditions—the table could have a leg replaced, but the bound state would not survive such an operation. The problems of vagueness of composition and identity over time, as well as of generation and corruption, all apply to both special-science objects and everyday ones. The real-patterns account of ontology offers a unified solution to these problems in all cases. Composition is often dynamical, especially in science, but the time scale of the interactions of the parts is very short compared to the time scale characteristic of the whole (see Ladyman 2017). Generation and corruption are not events at the level of the parts, but real patterns can indeed be created and destroyed by changes in the behaviour of other real patterns, and they can also persist over long time scales relative to the scale of the interactions of their parts.

In any case, the scientific realist must give some account of the relationship between ontology at different scales and in different sciences. Eliminativism about emergent ontology makes scientific realism a form of antirealism about most, if not all, actual science, and undercuts the arguments for realism (as in [1] and [2] above). Reductionism is arguably not plausible and is certainly not popular among scientific realists. Pluralism is the option many scientific realists take. For example, Teller (forthcoming) thinks in terms of a multitude of simplified models of a much more complex reality, each of which gives a partially but not completely correct picture. Such models, though not completely correct, capture aspects of the modal structure of the world.

For him, and many others, there is no one right form of description. Pluralists are right that there are multiple models, often at different scales and in different regimes. Notwithstanding this, however, there is often one theory choice that is right—oxygen over phlogiston being a good example. Moreover, as Ladyman and Ross (2007) argue, pluralism does not do justice to the unity of science, nor does it take account of the special status of physics. The need for theories to be compatible where they overlap is a methodologically productive driver of scientific advancement, suggestive of a non-pluralist metaphysics. The real-patterns account thus permits an understanding of ontology as scale-relative that is compatible both with the unity of science and with a very weak form of physicalism (though not with most forms of physicalism).

Modality is the key to the real-patterns account of ontology, which harmonizes entity realism and ontic structural realism, because featuring in projectible models and/or statements is taken to be the criterion of reality. The real-patterns account also explains why focusing on issues of reference across theory change is a red herring for the realist: reference can always be secured to some extent whenever there are real patterns that are carried over as approximations. The issue remains as to whether effective ontology deserves to be called ontology at all, however, as it is arguably just an epistemological gloss in lieu of a full description. Weak forms of emergence allow that emergent entities are epistemologically and semantically irreducible, but take it that strong emergence, and full ontological status, would be ruled out by any claim that genuine causal power resides only at the fundamental level. However, if the latter claim is false, then the modal structure at all scales could be considered real. Explicating how that is possible requires further elaboration, but given that it presents the most promising way to resolve many of the issues facing scientific realism, I suggest that the present focus of the scientific realism debate should be on providing such an account of the relationship between the modal structures found in scientific theories at different ontological scales.

There are different ways this might be done. For both French and Ladyman, ontic structural realism has been from the outset about modal structure (see French and Ladyman 2003). However, French is an eliminativist about all ontology except the structures of physics, while the real-patterns account is ecumenical. On the other hand, alternatives to structuralism involving theories of dispositions, essences, and powers have been developed in conjunction with the defence of scientific realism by Alexander Bird, Anjan Chakravartty, and Brian Ellis. At the very least, this all seems to lend credence to van Fraassen’s claim that scientific realism is essentially a metaphysical position of some kind. While Barry Loewer, David Papineau, and Stathis Psillos are Humeans and deny that scientific realism requires any notion of natural necessity, Berenstain and Ladyman (2012) argue that the arguments for scientific realism are undercut without it.

This metaphysical issue is taken as the fundamental one for the realism debate by van Fraassen, and he is right: metaphysics in general, and modal metaphysics in particular, are the crux of scientific realism.

James Ladyman
Department of Philosophy
University of Bristol
[email protected]

References

Berenstain, Nora, and James Ladyman. 2012. Ontic Structural Realism and Modality. In Structural Realism: Structure, Object and Causality, eds. Elaine Landry and Dean Rickles, 149-168. Dordrecht: Springer.
Bird, Alexander. 2018. Scientific Realism and Epistemology. In The Routledge Handbook of Scientific Realism, ed. Juha Saatsi. Abingdon: Routledge.
Churchland, Paul, and Clifford Hooker, eds. 1985. Images of Science: Essays on Realism and Empiricism, with a Reply from Bas C. van Fraassen. Chicago: University of Chicago Press.
French, Steven, and James Ladyman. 2003. Remodelling Structural Realism: Quantum Physics and the Metaphysics of Structure. Synthese 136(1): 31-56.
Ladyman, James. 2002. Understanding Philosophy of Science. London: Routledge.
Ladyman, James, and Don Ross. 2007. Every Thing Must Go: Metaphysics Naturalised. Oxford: Oxford University Press.
Ladyman, James. 2011. Structural Realism versus Standard Scientific Realism: The Case of Phlogiston and Dephlogisticated Air. Synthese 180(2): 87-101.
Ladyman, James, and Don Ross. 2013. The World in the Data. In Scientific Metaphysics, eds. Harold Kincaid, James Ladyman, and Don Ross, 108-151. Oxford: Oxford University Press.
Ladyman, James. 2015. Are There Individuals in Physics, and If So, What Are They? In Individuals across the Sciences, eds. Alexandre Guay and Thomas Pradeu, 193-206. Oxford: Oxford University Press.
Ladyman, James. 2017. An Apology for Naturalised Metaphysics. In Metaphysics and the Philosophy of Science, eds. Matthew Slater and Zanja Yudell, 141-162. Oxford: Oxford University Press.
Laudan, Larry. 1977. Progress and Its Problems: Toward a Theory of Scientific Growth. London: Routledge and Kegan Paul.
Laudan, Larry. 1984. Science and Values: The Aims of Science and Their Role in Scientific Debate. Berkeley: University of California Press.
Mach, Ernst. 1911. History and Root of the Principle of Conservation of Energy. Chicago: Open Court.

Magnus, P.D., and Craig Callender. 2004. Realist Ennui and the Base Rate Fallacy. Philosophy of Science 71(3): 320-338.
Musgrave, Alan. 1988. The Ultimate Argument for Scientific Realism. In Relativism and Realism in Sciences, ed. Robert Nola, 229-252. Dordrecht: Kluwer.
Poincaré, Henri. 1952. Science and Hypothesis. New York: Dover.
Post, Heinz. 1971. Correspondence, Invariance and Heuristics: In Praise of Conservative Induction. Studies in History and Philosophy of Science 2(3): 213-255.
Psillos, Stathis. 1999. Scientific Realism: How Science Tracks Truth. London: Routledge.
Putnam, Hilary. 1978. Meaning and the Moral Sciences. London: Routledge.
Stein, Howard. 1989. Yes, but... Some Skeptical Remarks on Realism and Anti-Realism. Dialectica 43(1): 47-65.
Teller, Paul. Forthcoming. Present volume.
Van Fraassen, Bas. 1980. The Scientific Image. Oxford: Clarendon Press.
Van Fraassen, Bas. 1989. Laws and Symmetry. Oxford: Oxford University Press.
Van Fraassen, Bas. 2002. The Empirical Stance. New Haven: Yale University Press.
Van Fraassen, Bas. 2008. Scientific Representation: Paradoxes of Perspective. Oxford: Oxford University Press.
Wallace, David. 2015. The Emergent Multiverse. Oxford: Oxford University Press.
Worrall, John. 1989. Structural Realism: The Best of Both Worlds? Dialectica 43(1-2): 99-124.

Scientific Realism and the History of Chemistry

Author(s): Robin Findlay Hendry

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 108-117.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.28062


Focused Discussion Invited Paper

Scientific Realism and the History of Chemistry

Robin Findlay Hendry*

I. Ephemeral Science?

Do scientific theories build cumulatively on the achievements of their predecessors, or do they sweep them away in favour of new theoretical visions? Philosophers are sometimes accused—by scientists, and by philosophers—of generating pseudoquestions by imposing their own abstractions on science. The question of theoretical cumulativity is not like that, for it has been raised by people who are not (or not primarily) philosophers. Here for instance is Henri Poincaré on the “bankruptcy of science”:

The ephemeral nature of scientific theories takes by surprise the man of the world. Their brief period of prosperity ended, he sees them abandoned one after another; he sees ruins piled upon ruins; he predicts that the theories in fashion to-day will in a short time succumb in their turn, and he concludes that they are absolutely in vain. This is what he calls the bankruptcy of science. (Poincaré 1905, 178)

It is well known among philosophers that Poincaré, having identified that pessimistic line of thought, went on to reject it, at least partially. It is not so well known among philosophers that some thirty years later,1 in his Presidential Address to the Chemical Society (later to become the Royal Society of Chemistry), Nevil Sidgwick sketched a much more robust defence of cumulativity in the development of theories of chemical structure:

* Robin Hendry is Professor of Philosophy at Durham University, where he has worked for a long time. His chief research interests are in the history and philosophy of chemistry and its relationship to physics.
1 I do not know whether Poincaré’s musings had any influence on Sidgwick’s, whether direct or indirect.


There are two ideas about the progress of science which are widely prevalent, and which appear to me to be both false and pernicious. The first is the notion that when a new discovery is made, it shows the previous conceptions on the subject to be untrue; from which unscientific people draw the very natural conclusion that if to-day’s scientific ideas show those of yesterday to be wrong, we need not trouble about them, because they may themselves be shown to be wrong to-morrow. It is difficult to imagine any statement more opposed to the facts. The progress of knowledge does indeed correct certain details in our ideas, but the main result of every great discovery is to establish the existing doctrines on a firmer foundation, and give them a deeper meaning. (Sidgwick 1936, 533)

Leaving aside the second of his pernicious falsehoods, Sidgwick’s conclusion is forthright:

I hope I have said enough to show that the modern development of the structural theory, far from destroying the older doctrine, has given it a longer and a fuller life; and further that the tendency of modern research is not to contract the scope of its material, but on the contrary to call in to its assistance an increasingly wide range of properties, and to bring to bear on its problems the results of every kind of physical and chemical investigation. (Sidgwick 1936, 538)

Sidgwick’s thoughts suggest a bold historiographical hypothesis, whose relevance to the current debate on scientific realism should be obvious: that theories of structure and composition have, since the 1860s, developed through a series of conservative extensions, despite some apparently radical theoretical and conceptual change during this time. A conservative extension of a theory is one where its inferential content before the extension (i.e., that which determines its explanatory and predictive power) is a proper subset of its inferential content afterwards.2 This will happen, for instance, when a theory is extended or reinterpreted so as to allow inferences to be made about a new range of phenomena.

2 Mauricio Suárez (2004) presents a deflationary account of representation: inferential use exhausts what can usefully be said about representation. Whether or not one agrees with that, inference is certainly a very useful way to track the interpretation of a representation by its users.
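To fix ideas about conservative extension, here is a toy rendering (my illustration, with invented example inferences, not Hendry’s formalism): model a theory’s inferential content as a set of licensed inferences and check that the pre-extension set is a proper subset of the post-extension one.

```python
# Toy model (illustrative inferences invented for this sketch): a conservative
# extension licenses everything the old theory licensed, and more.
inferences_1860s = {
    "butane (C4H10) has exactly two isomers",           # bond topology alone
    "in nitric acid, one O is bonded to both N and H",  # Frankland's example
}
# Stereochemistry embeds the same topologies in space, adding inferences
# about spatial arrangement without retracting any of the old ones.
inferences_1870s = inferences_1860s | {
    "CHFClBr has two mirror-image isomers",             # needs 3-D structure
}
assert inferences_1860s < inferences_1870s  # proper subset: conservative
```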


II. Stability and structure

Of course our bold historiographical claim is really a sequence of hypotheses concerning the retention of specific structural features attributed by chemists to molecules. Four key developments suggest themselves for further study: the emergence of structural formulae to account for isomerism in the 1860s; the extension of structure into three dimensions in the 1870s; integration with physical theories and experimental methods in the early twentieth century; and the advent of quantum mechanics from the 1930s onwards.

Bond structure in organic chemistry (1860s)

As a body of theory, molecular structure presents a number of apparent difficulties for historians and philosophers of science, posed by its presentation and by its interpretation. These difficulties can be addressed if we pay careful attention to the inferential uses to which the various visual representations are put, and which reflect sophisticated interpretative stances that the chemists took toward their representational schemes.

Presentation: In organic chemistry, structures were represented using a diverse range of representational media, including August Kekulé’s sausage formulae, A.W. Hofmann’s “glyptic formulae” and the chemical graphs of Alexander Crum Brown and Edward Frankland (for examples see Rocke 2010, ch. 5). Moreover the visual nature of these representations (as opposed to, say, mathematical equations) can make it difficult to identify the predictive and explanatory content of the theory. Nevertheless, underlying this diversity of presentation was a relatively unified body of theory, based on the core idea of fixed atomic valences generating a fixed number of ways a given collection of atoms of different kinds can be connected together via bonds (see Rocke 1984, 2010).

Interpretation: The theory was developed amid considerable controversy concerning the reality of atoms (see Brock and Knight 1967), and whether hypotheses about structure at the atomic scale could have any legitimate role in chemistry. The resulting interpretative caution among the chemists of the time has been understood by historians and philosophers as a form of instrumentalism. It was that, but the label could obscure finer interpretative distinctions to be observed in the way the theories were being used. Like many chemists at the time, Frankland was careful to point out that structural formulae were intended “to represent neither the shape of the molecules, nor the relative position of the constituent atoms” (see Biggs et al. 1976, 59). William Brock and David Knight have described Frankland as one of the “moderate conventionalists” about structural formulae, people who are “prepared to employ them as useful fictions” (1967, 21). Elsewhere (2010) I have argued that Frankland’s stance also reflects an awareness that the use of structural formulae in chemical explanations required inferences to be made only concerning the order of the connections between the atoms as they affect the identity or diversity of different substances. Although the representations carry excess content, Frankland would make inferences only concerning the bond topology:

The lines connecting the different atoms of a compound, and which might with equal propriety be drawn in any other direction, provided they connected together the same elements, serve only to show the definite disposal of the bonds: thus the formula for nitric acid indicates that two of the three constituent atoms of oxygen are combined with nitrogen alone, whilst the third oxygen atom is combined both with nitrogen and hydrogen. (Frankland, quoted in Biggs et al. 1976, 59)

The shapes of the molecules and the relative positions of the atoms played no more part in the representation than the particular colours chosen to represent the different kinds of atoms in Hofmann’s glyptic formulae. All of this reflects the conventionalist’s awareness that a given “fiction” will be useful for some things and not for others.

Stereochemistry (1870s)

The explanatory scope of the structural formulae was soon extended so that inferences could be made about the relative positions of the atoms in a molecule. In 1874, Jacobus van ’t Hoff explained why there are two isomers of compounds in which four different groups are attached to a single carbon atom by supposing that the valences are arranged tetrahedrally (with the two isomers conceived of as mirror images of each other). Adolf von Baeyer explained the instability and reactivity of some organic compounds by reference to strain in their molecules, which meant their distortion away from some preferred structure (see Ramberg 2003, chs. 3 and 4). These stereochemical theories were intrinsically spatial, because their explanatory power depended precisely on their describing the arrangement of atoms in space. The extension of the theory did not require a wholesale revision of the structures that had previously been assigned to organic substances: they were simply embedded in space, allowing new classes of inference to be made. Hence stereochemistry constituted a conservative extension of the earlier structural theory of the 1860s.

Structure in motion (1920s onwards)

The structural formulae of the nineteenth century are sometimes described as static. This can be misleading. Nineteenth-century chemists clearly entertained the thought that matter must be dynamic at the atomic scale, but they had no means to make reliable inferences about any such motions. Hence they did not make any. The structural formulae may have been static, but they were not interpreted as representing molecules as static.

This caution began to change in the twentieth century. In 1911, Nils Bjerrum applied the “old” quantum theory to the motions of molecules and their interaction with radiation, potentially explaining their spectroscopic behaviour (see Assmus 1992). From the 1920s onwards, Christopher Ingold and others applied G.N. Lewis’s understanding of the covalent bond as deriving from the sharing of electrons to give detailed insight into reaction mechanisms. These explained how, when chemical reactions occur, the structure of the reagents transforms into the structure of the products (see Brock 1992, ch. 14; Goodwin 2007). These developments were based on conservative extensions of the structures that emerged in the nineteenth century, allowing inferences to be made about their motions, and providing explanations of various spectroscopic and kinetic phenomena that had not previously been within the scope of structural theory.

Quantum chemistry

What prospects are there for extending this argument into the era of quantum mechanics? This might seem like a hopeless task: surely the quantum-mechanical molecule is radically different from the classical one! But this is too quick. Firstly, “classical” chemical structures are not classical in any way that has a direct connection with physics. They involve no application of (for instance) classical mechanics or Boltzmann statistics. Secondly, when quantum mechanics did come to be applied to molecules in the 1920s and 30s, the resultant Schrödinger equations were insoluble. To make progress, chemists and physicists developed semi-empirical methods that amounted to retaining classical structure by simple inclusion. This is why Linus Pauling, for instance, regarded quantum chemistry as a synthesis of “classical” structure and quantum mechanics (for details see Hendry 2010).

By the 1980s, the development of density-functional theory allowed theoretical chemists to compute accurate electron-density distributions for molecules, which were used by Richard Bader (1990) to define structures which could be interpreted as corresponding to the traditional atoms and bond topologies of molecules. The details are quite interesting: for a given molecule, use its electron-density distribution to define “bond paths” between atoms. These bond paths generate a “molecular graph” for the molecule. In many cases the resulting molecular graph is strikingly close to the classical structure for the molecule.3

As Bader puts it, “The recovery of a chemical structure in terms of a property of the system’s charge density is a most remarkable and important result” (1990, 33). But the correspondence between bond path and chemical bond is not perfect. The main problems concern repulsive (rather than attractive) interactions between neighbouring atoms in a molecule. Bader’s algorithm finds bond paths corresponding to these repulsive interactions, even though chemists would not normally regard the mutually repelling pairs of atoms as bonded to each other. One response is to defend these “bonds” as bonds (see Bader 2006), which looks like bullet-biting. Too much of that will undermine the revisionary analysis one is seeking to defend.
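To give a feel for the kind of procedure involved, here is a toy sketch (my own construction, not Bader’s algorithm as implemented in quantum-chemistry software): a model density built from two Gaussian “atoms,” with the point between them where the gradient vanishes classified by the signs of the Hessian’s eigenvalues; the (3,-1) signature—one positive and two negative eigenvalues—is what counts as a bond critical point on a bond path.

```python
import numpy as np

NUCLEI = [np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]  # toy diatomic

def rho(r):
    """Toy electron density: a spherical Gaussian centred on each nucleus."""
    return sum(np.exp(-np.sum((r - R) ** 2)) for R in NUCLEI)

def hessian(r, h=1e-4):
    """Second derivatives of the density by central finite differences."""
    H = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            ei, ej = np.eye(3)[i] * h, np.eye(3)[j] * h
            H[i, j] = (rho(r + ei + ej) - rho(r + ei - ej)
                       - rho(r - ei + ej) + rho(r - ei - ej)) / (4 * h * h)
    return H

# The density dips between the nuclei; its minimum along the internuclear
# axis is the candidate critical point (the midpoint, by symmetry).
xs = np.linspace(-1, 1, 2001)
x_cp = xs[np.argmin([rho(np.array([x, 0.0, 0.0])) for x in xs])]
eigs = np.linalg.eigvalsh(hessian(np.array([x_cp, 0.0, 0.0])))
print(x_cp)           # ~0.0: midway between the two "atoms"
print(np.sign(eigs))  # [-1, -1, 1]: a (3,-1) bond critical point
```

Connecting such critical points to nuclei by paths of steepest ascent in the density is what yields the bond paths, and hence the molecular graph, for a real molecule.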

III. Structure and Scientific Realism

I will conclude with brief discussions of three ways in which I think the historical development of structural theory can inform scientific realism.

Pessimistic induction

The development of structural theory from the 1860s to the 1920s would seem to provide a refreshing contrast to the historical case studies usually cited as grounds for pessimistic induction. Frankland was able to identify which features of his representations should be taken seriously, and more than 150 years later these very features are preserved. Perhaps structure in chemistry supports, rather than confutes, the cumulativity of science.

The arrival of quantum chemistry provides a more complex example. In the early years, the best explanation of the retention of classical structure in the quantum-mechanical molecular models is no doubt chemists’ wish to find it there, and their resulting inclusion of it “by hand.” No realist inference seems required. On the other hand, Bader’s results seem to show something significant, but just what they show depends on the details of his algorithm. How loose or strict are the constraints on defining molecular graphs? How many independent parameters are there? How significant, from an ontological point of view, is electron density, the quantity in terms of which the molecular graphs are defined? How troubling are the false positives, the bonds found by the algorithm where no chemist would put them? On all these counts except the last, I would argue that Bader’s algorithm scores highly.

3 Bader (1990, 72-3) provides molecular graphs for an impressive range of aliphatic hydrocarbons.


Unconceived alternatives

Structural theory is also relevant to Kyle Stanford’s skeptical view of eliminative inference (2006), in two ways: firstly, van ’t Hoff arrived at his version of structural theory via eliminative reasoning; secondly, eliminative reasoning in modern chemistry works within a framework of structural possibility defined by this theory. How did van ’t Hoff arrive at the tetrahedron? Following Johannes Wislicenus, he argued first that it was possible to account for the observed number and variety of the isomers of certain organic substances only by taking into account the arrangement of atoms in space. He then defended a tetrahedral geometry for the carbon atom by rejecting a square planar arrangement: if carbon’s geometry were square planar, there would be more isomers of substituted methane than are observed.4 Assuming a tetrahedral arrangement, in contrast, would be in accord with the observed number of isomers (see Brock 1992, 260).

The structural theory developed in the 1870s still provides the space of theoretical possibility within which the normal science of structural thinking is pursued. Roald Hoffmann (1995, ch. 29) provides a beautiful example of explicitly eliminative reasoning in his discussion of how H. Okabe and J.R. McNesby used isotopic labelling to eliminate two out of three possible mechanisms for the photolysis of ethane to ethylene. While Stanford seeks to circumscribe the skeptical problem that unconceived alternatives pose for eliminative inference (2006, 32), here we have everyday scientific inferences that could be expected to be reliable just in case we can trust a broad framework of theory that was itself arrived at by elimination. Is Hoffmann allowed his inference?

What is also striking is the extent to which, between the 1860s and the 1950s, molecular structure became much less remote from experience. In the 1860s, structures were regarded as highly speculative, like string theory today. Rightly so: the sole basis for attributing a structure to a substance was the inferences this would allow one to make about the chemical reactions it would undergo. Through the successive introduction of X-ray diffraction, infrared spectroscopy, and nuclear magnetic resonance spectroscopy (and more importantly their mutual reinforcement), molecular structures became the objects of (indirect) observation. But if eliminative reasoning is so unreliable, this should be surprising. How lucky for us that those Victorian chemists were able to conceive of the theoretical possibilities that would underwrite a hundred years of conservative extension, and would be vindicated by indirect observation via physical methods whose existence was unimaginable at the time. Or perhaps van ’t Hoff’s reasoning was reliable.

4 For instance, disubstituted methane (of the form CH2X2) would have two isomers if it were square planar, whereas only one could be found.

It is plausible that eliminative reasoning is reliable to different extents in different contexts, but the question is, what makes the difference? Of its very nature, structural and geometrical possibility may just have been more amenable to eliminative reasoning as wielded by a nineteenth-century chemist than the spaces of theoretical possibility appearing in Stanford’s own examples (early thinking about mechanisms of biological heritability). Or maybe epistemic luck plays a role: the historically contingent development of organic chemistry during the nineteenth century delivers to van ’t Hoff as a plausible assumption the very premise that would render an eliminative inference valid.
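Footnote 4’s arithmetic can be checked by brute force. The following sketch (mine, for illustration) places two H and two X on methane’s four substituent sites and counts the arrangements that remain distinct once any two related by a spatial rotation of the molecule are identified:

```python
# Count CH2X2 isomers for two candidate geometries by quotienting the six
# placements of (H, H, X, X) by the molecule's rotations in space.
from itertools import permutations

SQUARE = [  # the 8 spatial rotations of a square (including flips in 3-D)
    (0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2),
    (3, 2, 1, 0), (0, 3, 2, 1), (1, 0, 3, 2), (2, 1, 0, 3),
]
TETRAHEDRON = [  # the 12 proper rotations of a regular tetrahedron
    (0, 1, 2, 3), (0, 2, 3, 1), (0, 3, 1, 2), (1, 0, 3, 2),
    (1, 2, 0, 3), (1, 3, 2, 0), (2, 0, 1, 3), (2, 1, 3, 0),
    (2, 3, 0, 1), (3, 0, 2, 1), (3, 1, 0, 2), (3, 2, 1, 0),
]

def count_isomers(group):
    orbits = set()
    for lab in set(permutations("HHXX")):
        # Canonical form: lexicographically least image under the group.
        orbits.add(min(tuple(lab[i] for i in g) for g in group))
    return len(orbits)

print(count_isomers(SQUARE))       # 2: the cis and trans arrangements
print(count_isomers(TETRAHEDRON))  # 1: matching observation, as van 't Hoff argued
```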

The ontological reliability of the special sciences

It is common among both philosophers and scientists to entertain the possibility that classical molecular structure is but a figment, a heuristic device that worked well enough for the nineteenth century but must be discarded today, at least as describing a real feature of the world. The basis for the rejection is its supposed incompatibility with quantum mechanics, which presents us with a microscopic world of Heraclitean strangeness that cannot accommodate the balls and sticks of molecular structure. I think this is a misinterpretation both of molecular structure and of quantum mechanics (at least as that theory is applied by physicists), but it seems additionally unfair given just how long structural theory has been around.

Here is a line of thought to counter this epistemic injustice: if structural thinking in chemistry really is as cumulative as it seems,5 this would also make it an interesting contrast with physics, within which philosophers have sometimes struggled to identify continuity across theory change. Chemistry interprets its theories in ways that render them solid enough to constitute a lasting foundation for later achievements, unlike its neighbouring discipline, now revealed as a flighty follower of ontological fashion. Maybe the difference is in the interpretative caution. To dispassionate observers, some parts of physics (and the philosophy of physics) seem to get lost in a thicket of speculative metaphysical hypotheses generated by taking far too seriously abstract mathematical features of the dynamics of fundamental theories (multiverses, wavefunctions in 3N-dimensional configuration space replacing 3-dimensional space, etc.), even though the truth or falsity of those speculative hypotheses could make no difference to the reasoning that makes those theories testable.6 Chemistry has long regarded itself as cautious and pragmatic, and chooses to be serious only about the parts of its theories with which it can make inferences. Perhaps philosophers should also be wary of regarding physics as the only “ontological science”, and should be cautious about overwriting the results of the special sciences with (possibly highly revisionary) reinterpretations based on the latest speculative interpretation of a physical theory. The problem is not with physics or physical theory, but with where we focus our ontological attention: we should not prioritize the axioms of a theory over its applications, when it is in the latter that all the empirical support is earned.

5 I have not even mentioned the remarkable stability of compositional claims about compound substances since the chemical revolution. Nor have I discussed in any detail how commensuration with physics genuinely deepened structural explanation in chemistry, as Sidgwick argued.
6 I was once told by a philosopher of physics that I could stop worrying about the problem of reducing molecular structure to physics if only I could embrace the multiverse.

Robin Findlay Hendry
Department of Philosophy
Durham University
50 Old Elvet
Durham DH1 3HN
United Kingdom
[email protected]

References

Assmus, Alexi. 1992. The Molecular Tradition in Early Quantum Theory. Historical Studies in the Physical and Biological Sciences 22(2): 209-231.
Bader, Richard F.W. 1990. Atoms in Molecules: A Quantum Theory. Oxford: Oxford University Press.
Bader, Richard F.W. 2006. Pauli Repulsions Exist Only in the Eye of the Beholder. Chemistry: A European Journal 12(10): 2896-2901.
Biggs, Norman L., E. Keith Lloyd and Robin J. Wilson. 1976. Graph Theory 1736-1936. Oxford: Clarendon Press.
Brock, William H. 1992. The Fontana History of Chemistry. London: Fontana Press.
Brock, William H. and David M. Knight. 1967. The Atomic Debates. In The Atomic Debates: Brodie and the Rejection of Atomic Theory, Three Studies, ed. W.H. Brock, 1-30. Leicester: Leicester University Press.
Goodwin, W.M. 2007. Scientific Understanding After the Ingold Revolution in Organic Chemistry. Philosophy of Science 74(3): 386-408.
Hendry, Robin Findlay. 2010. The Chemical Bond: Structure, Energy and Explanation. In EPSA Philosophical Issues in the Sciences: Launch of the European Philosophy of Science Association, eds. Mauro Dorato, Miklós Rédei and Mauricio Suárez, 117-127. Berlin: Springer.
Hoffmann, Roald. 1995. The Same and Not the Same. New York: Columbia University Press.
Poincaré, Henri. 1905. Science and Hypothesis. London: Walter Scott.


Ramberg, Peter. 2003. Chemical Structure, Spatial Arrangement: The Early History of Stereochemistry, 1874-1914. Aldershot: Ashgate.
Rocke, Alan J. 1984. Chemical Atomism in the Nineteenth Century: From Dalton to Cannizzaro. Columbus: Ohio State University Press.
Rocke, Alan J. 2010. Image and Reality: Kekulé, Kopp and the Scientific Imagination. Chicago: University of Chicago Press.
Sidgwick, N.V. 1936. Structural Chemistry. Journal of the Chemical Society 149: 533-538.
Stanford, P. Kyle. 2006. Exceeding Our Grasp: Science, History and the Problem of Unconceived Alternatives. Oxford: Oxford University Press.
Suárez, Mauricio. 2004. An Inferential Conception of Scientific Representation. Philosophy of Science 71(5): 767-779.

Quo Vadis Selective Scientific Realism?

Author(s): Peter Vickers

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 118-121.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.28056


Focused Discussion Invited Paper

Quo Vadis Selective Scientific Realism?

Peter Vickers*

Where is selective scientific realism going? A very large number of historical challenges are out there in the literature (see e.g., the list of twenty cases in Vickers 2013), apparently showing in concrete terms how scientific success actually can be born of hypotheses that are not even approximately true (on any reasonable account of “approximate truth”). In the years since Laudan’s “Confutation of Convergent Realism” (1981) the scientific realist has introduced a whole host of criteria to try to dodge the “counterexamples,” or at least to reduce the number of them. But when one looks at the historical challenges, and especially the newer ones that are sensitive to contemporary selective realism, one might feel that the realist is on the back foot (to put it mildly). Indeed, many in the community tend to think that the historical challenges are overwhelming, such that the realist will only manage to dodge them if she turns realism into an empty position, or if she adds auxiliaries in an ad hoc manner to save what is in fact a degenerating research program.

By contrast, my current opinion is that the selective realist is in a strong position vis-à-vis the historical challenges. Certainly the realist needs to invoke some careful criteria for realist commitment, and various nuances concerning the nature of her epistemic commitment, and this may raise the worry of “death by a thousand qualifications.” But the concern is unfounded: the qualifications are all independently motivated, and indeed necessary given the philosophical complexity. Qualifications are to be welcomed here; often the truth is far from simple!

To illustrate, let’s start with a list of some of the most serious historical challenges that feature in the contemporary literature:

1. Successful explanations and predictions based on the caloric theory of heat.

2. Successful explanations and predictions based on the phlogiston theory of combustion.

* Peter Vickers is Associate Professor and Reader in the Department of Philosophy at the University of Durham, UK. He is the author of Understanding Inconsistent Science (OUP, 2013).


3. Successful explanations and predictions based on the Fresnel “luminiferous ether” theory of light.

4. Successful explanations and predictions based on Kirchhoff’s theory of the diffraction of light.

5. Successful explanations and superb predictions based on Sommerfeld’s 1916 theory of the hydrogen atom.

6. Dirac’s highly significant prediction of the positron based on the “Dirac sea” theory of negative energy electrons.

These examples are all chosen because they were extremely successful, so the realist cannot hope to answer them by raising the bar for the success necessary for a realist commitment. But what of making a distinction between the “working parts” of the theory, which are confirmed by the success because of the special role they play in bringing about the success, and the “idle wheels” of the theory, which are not so confirmed? This selective approach to confirmation is independently motivated, has wide appeal, and has been applied to at least some of the cases (1)-(6) to answer the historical threat. Just to give three relatively recent examples, Votsis and Schurz (2012) apply this approach to the caloric theory, Ladyman (2011) applies it to the phlogiston theory, and Cordero (2011) applies it to Fresnel’s theory of light. Granted, in each of these cases the application of the selective strategy varies. But this should cause no concern if we accept a “neutral” approach to the type of theoretical entity that can be confirmed (see Peters 2014): in some contexts it might be right that only abstract “structure” is confirmed, but in other contexts laws, or entities, or some mixture of things might be confirmed. What really matters is that the realist has good reasons for “selecting” the parts that are confirmed, reasons that are completely independent of the fact that those parts were (at least approximately) retained in the successor theory/theories. Of course, contemporary realists are sensitive to this requirement, and argue that they do indeed have independent reasons.

What of cases (4), (5), and (6)? Case (4) initially seemed to me to be a serious challenge to the realist, but after further investigation (see Vickers 2016) I have come around to the idea that the realist has a good response here (based, again, on thinking about what is really doing the work to bring about the success). Similarly case (5), on first view, seems extremely challenging to the realist (see Vickers 2012) and I have seen this case put forward at conferences and workshops as the knock-down counterexample to the realist’s success-to-truth inference. However, it turns out that physicists were themselves long perplexed by this case, and one physicist in particular worked to resolve the puzzle. His conclusion—comparing the old Sommerfeld derivation of the success to the contemporary derivation—is that “The underlying symmetry of the problem intervenes in a most remarkable and essential way so as to produce the closest possible correspondence between the two (suitably formulated) calculations” (Biedenharn 1983, 14, original emphasis). There is some philosophical work to be done here, but this strongly suggests just the sort of structural correspondence between the old and the new theory that would be expected by a structural realist. That leaves case (6), and, to my knowledge, there is currently no realist response to the challenge.
However, the challenge has been put to the realist only quite recently (Pashby 2012), so it is a bit too soon to make a confident judgement. And in addition, this is only one case. Even non-realists such as Chang insist that “[o]ne case, of course, does not have much force” (2003, 910). And one might well wonder whether it would be a really serious challenge to the realist even if there are two or three “counterexamples” to the claim that the working parts of a sufficiently successful theory must be (at least approximately) true. In particular it is crucial to stress that no contemporary realist actually insists that such working parts must be true; instead the claim is that it is a good bet, and “counterexamples” are likely to be rare. It is acknowledged that realists have a challenge cashing out the probability judgement they are making here (e.g., Magnus and Callender 2004), but the basic intuition seems to make sense and realist responses to the “base rate” objection are emerging (e.g., Henderson 2017).

Where does this leave us? I am suggesting that cases (1)-(6) are the six most serious historical challenges to scientific realism in the literature, and that contemporary realists currently have answers to five of them that look promising, or at the very least are worth pursuing. So if the historical challenge is supposed to be a pessimistic induction, the realist is in a strong position since the inductive base looks rather weak. But if it is not an induction, then it seems to miss the mark, since the realist proposes a defeasible success-to-truth inference that will survive one or two historical counter-instances. For me, then, contemporary selective scientific realism is a progressive research program, and this is a progressive and fascinating debate. Wherever it is going, it is going somewhere important.
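As a coda to the base-rate point above, a toy Bayesian calculation (the numbers are illustrative assumptions of mine, not anyone’s estimates) shows what is at stake: the strength of the success-to-truth inference depends not just on how well novel success tracks truth, but on the base rate of approximately true theories among the candidates.

```python
# Toy base-rate calculation with invented numbers: P(true | novel success).
def p_true_given_success(base_rate, p_success_if_true, p_success_if_false):
    """Bayes' theorem for the success-to-truth inference."""
    joint_true = p_success_if_true * base_rate
    joint_false = p_success_if_false * (1 - base_rate)
    return joint_true / (joint_true + joint_false)

# Assume novel success is likely for true theories (0.9) and rare for false
# ones (0.05); only the base rate of true theories varies.
for base_rate in (0.5, 0.1, 0.01):
    print(base_rate, round(p_true_given_success(base_rate, 0.9, 0.05), 3))
# 0.5  -> 0.947: success is strong evidence of truth
# 0.1  -> 0.667: noticeably weaker
# 0.01 -> 0.154: if true theories are rare, most successes would mislead
```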

Peter Vickers
Department of Philosophy
Durham University
51 Old Elvet, Durham DH1 3HN
United Kingdom
[email protected]


References

Biedenharn, L.C. 1983. The “Sommerfeld Puzzle” Revisited and Resolved. Foundations of Physics 13(1): 13-34.
Chang, Hasok. 2003. Preservative Realism and Its Discontents: Revisiting Caloric. Philosophy of Science 70(5): 902-912.
Cordero, Alberto. 2011. Scientific Realism and the Divide et Impera Strategy: The Ether Saga Revisited. Philosophy of Science 78(5): 1120-1130.
Henderson, Leah. 2017. The No Miracles Argument and the Base Rate Fallacy. Synthese 194(4): 1295-1302.
Ladyman, James. 2011. Structural Realism versus Standard Scientific Realism: The Case of Phlogiston and Dephlogisticated Air. Synthese 180(2): 87-101.
Laudan, Larry. 1981. A Confutation of Convergent Realism. Philosophy of Science 48(1): 19-48.
Magnus, P.D., and Craig Callender. 2004. Realist Ennui and the Base Rate Fallacy. Philosophy of Science 71(3): 320-338.
Pashby, Thomas. 2012. Dirac’s Prediction of the Positron: A Case Study for the Current Scientific Realism Debate. Perspectives on Science 20(4): 440-475.
Peters, Dean. 2014. What Elements of Successful Scientific Theories Are the Correct Targets for “Selective” Scientific Realism? Philosophy of Science 81(3): 377-397.
Vickers, Peter. 2012. Historical Magic in Old Quantum Theory? European Journal for Philosophy of Science 2(1): 1-19.
Vickers, Peter. 2013. A Confrontation of Convergent Realism. Philosophy of Science 80(2): 189-211.
Vickers, Peter. 2016. Why Kirchhoff’s Approximation Works. In Gustav Robert Kirchhoff’s Treatise “On the Theory of Light Rays” (1882): English Translation, Analysis, and Commentary, eds. K. Hentschel and Ning Yan Zhu, 125-142. Singapore: World Scientific.
Votsis, Ioannis, and Gerhard Schurz. 2012. A Frame-Theoretic Analysis of Two Rival Conceptions of Heat. Studies in History and Philosophy of Science 43(1): 105-114.

How Deployment Realism Withstands Doppelt’s Criticisms

Author(s): Mario Alai

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 122-135.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.27046


Peer-Reviewed

How Deployment Realism Withstands Doppelt’s Criticisms*

Mario Alai†

Currently one of the most plausible versions of scientific realism is “Deployment” (or “Partial”, or “Conservative”) Realism, based on various contributions in the recent literature (especially Kitcher 1993), and worked out as a unitary account in Psillos (1999). According to this version we can believe that theories are at least partly true (because that is the best explanation for their predictive success—especially novel predictions), and discarded theories which had novel predictive success had nonetheless some true parts: those necessary to derive their novel predictions. In fact, it has been argued that the partial truth of theories (including the discarded ones) is the only non-miraculous explanation of their success.1

According to Doppelt (2005, 2007) this account cannot withstand the antirealist objections based on the “pessimistic meta-induction” and Laudan’s historical counterexamples. Moreover, it is incomplete, as it purports to explain the predictive success of theories, but overlooks the necessity of also explaining their explanatory success. Accordingly, he proposes a new version of realism, presented as the best explanation for both predictive and explanatory success, and committed only to the truth of the best current theories, not that of the discarded ones (Doppelt 2007, 2011, 2013, 2014). I have argued elsewhere (Alai 2016a) that Doppelt’s “Best Theory Realism” is not really a viable option, for it can explain neither the success of past theories nor their failures; moreover, it is rather implausible, and actually the easiest prey of the pessimistic meta-induction argument.

Here instead I argue for the following claims: (a) Doppelt has not shown that Deployment Realism as it stands cannot solve the problems raised by the history of science; (b) explaining explanatory success does not add much to the explanation of novel predictive success; (c) Doppelt is right that truth is not a sufficient explanans, but for different reasons than he thinks, and this does not refute Deployment Realism, but helps to detail it better. In a more explicit formulation, the realist Inference to the Best Explanation (IBE) concludes not only that theories are true, but also that the scientific method and scientists are reliable, nature is orderly and simple, and background theories are approximately true.

* Received June 16, 2016. Accepted September 22, 2016.
† Mario Alai holds a Laurea from the University of Bologna, a Perfezionamento from the University of Urbino, a Laudatur from the University of Helsinki, a PhD from the University of Maryland and one from the University of Florence, and is currently associate professor of Theoretical Philosophy at the University of Urbino.
1 See Putnam (1975a, 73). In Alai (2014c) I argue that there cannot be an equally viable antirealist explanation.


I. Can Deployment Realism resist the pessimistic induction and meta-modus tollens?

The “no miracle” argument, according to which the (approximate) truth of theories is the only plausible explanation of their empirical success (and in particular of their novel predictions), was criticized by Laudan (1981). More recently Lyons (2002) has raised against it what he calls the “pessimistic meta-modus tollens”: some theories now known to be radically false made successful predictions, even novel ones; therefore the predictive success of current theories is no reason to believe they are true. To this Doppelt adds a particularized version of the “pessimistic meta-induction”: “Furthermore, if past successful theories are false, it is likely that current successful theories are false, indeed, wholly false ... because in all probability the entities to which they refer do not exist, just as in the past cases” (2005, 1077).2

To these criticisms Psillos has three replies: (1) the true predictions of some discarded theories were not novel, so they could be explained by a posteriori accommodation, without appeal to truth: realism is committed only to the truth of theories with novel predictions. (2) Some discarded theories had novel success, but we can still recognize them as partially true: the assumptions that were essential in deriving their novel predictions are still part of our theories. (3) A suitable causal theory of reference shows that some successful but discarded theories actually referred to the same entities as currently held theories. Doppelt sees all of these three replies as unsuccessful.

Novelty

Against Psillos’s claim (1) that only novel predictions warrant belief in the truth of theories Doppelt has three objections. The first is that the confirmation of a theory by data cannot depend on the contingent fact of when those data were discovered, or whether the theory was advanced before or after their discovery.

2 My emphases: the reference to successful theories distinguishes this version of the pessimistic induction from the more general one. An example of the original formulation of the pessimistic meta-induction is Putnam (1978, 25).

It is more important that there is “a wide range of different kinds of evidence,” or that the theory has “a broader explanatory scope” (2005, 1079-1080). Now, this is a well-known objection used by deductivists to deny the advantage of novelty against predictivists. The latter, however, have convincingly replied that what is relevant to confirmation is not temporal novelty, but use novelty: i.e., what matters is not that the data predicted by theory T were unknown before it, but that they were not used (or more precisely not used essentially) in building T (Worrall 1985, 319; 2005, 819; Leplin 1997, chs. 2, 3). Of course temporal novelty implies essential-use novelty (for if a datum was not known before T, it obviously was not used in building T); but only essential-use novelty is required to bar all possible explanations except truth, so only essential-use novelty warrants belief in the truth of theories.3 Therefore the only criterion of truth for realists should be essential-use novelty, and Doppelt’s criticism of historical novelty doesn’t apply to this criterion. Actually, it can be shown that the requirement of essential-use novelty is equivalent to the explanation of disparate bodies of evidence, i.e., to broad explanatory scope, which Doppelt rightly takes as strong confirmation of theories. In fact, saying that datum D (even if used) was inessential in the construction of theory T is saying that T was already plausible independently of D, i.e., that T was already the best possible explanation of a sufficiently large body of evidence quite different from D; and this, in turn, is saying that T is the best possible explanation of a wide range of different phenomena (Alai 2013). So, it is quite safe, for Psillos, to use novelty (understood as essential-use novelty) as a criterion for deciding which parts of theories realists should consider to be true.4

Doppelt’s second objection to the novelty criterion is that if a scientific realist is a naturalist, as are many supporters of the “no miracle” argument, she should treat scientific realism itself as she treats scientific theories. But scientific realism does not make novel predictions. So, to be consistent, realists should not require novel predictions from theories (2005, 1080). However, first of all, this argument applies only to naturalists, and one can be a scientific realist, even a supporter of the “no miracle” argument, without being a naturalist. Secondly, it might be pointed out that scientific realism actually does make at least one prediction: that science will go on being successful, nay even more successful, in the future.

3 See Alai (2013). For an overview of the literature on novelty see Gardner (1982, 1), Maher (1988, 273), Barnes (2008, 1-26), etc.
4 Lyons (2002) objected to Psillos that in a number of historical cases some false claims were essentially involved in successful novel predictions, so novel success is not evidence even for the truth of particular claims. I have replied to these objections in Alai (2014b).

Of course, strictly speaking this is not a novel prediction, since although it predicts something new (future success) it is nonetheless of the same type as something (past success) already known and used in arriving at scientific realism itself.5 Furthermore, a realist can be a naturalist even without holding that realism is a scientific theory, but rather simply by holding that realism can be supported by a typically scientific inferential pattern (the IBE), based on an empirical fact—the success of science (see Alai 2012). Even Boyd and Putnam claimed that realism is science-like, not that it is a scientific theory: “philosophy is itself a sort of empirical science” (Boyd 1984, 65; my italics) and “in one way of conceiving it, realism is an empirical theory” (Putnam 1978, 12; my italics). Hence, the relation between scientific realism and a scientific theory can be seen more as an important analogy than as an identity; as such, it leaves room for both positive analogies (similarities) and negative analogies (differences). For instance, scientific theories describe facts about particular domains of the natural or social world; scientific realism, instead, hardly describes anything—or if it does, it describes an utterly general relation (truth, i.e., correspondence) obtaining between scientific theories and entities of any domain of natural or social reality. But it is easy to see that something really new can be discovered only about particular natural or social domains, not about most general facts such as those that are dealt with by scientific realism or philosophy in general. So, novel predictions need not be among the similarities between scientific realism and scientific theories. Therefore, pace Doppelt, from the fact that the former does not make novel predictions, it doesn’t follow that the latter are not required to make them.

Doppelt’s third and most important objection is that for Psillos (1999, 171) theories should not just predict in the sense of entailing the data, but explain them, in the sense of offering an account that is simple, complete, intuitively plausible, consistent with background beliefs, embedding the entailed datum in a wide picture, etc. (2005, 1080-1081, passim). One reason, Doppelt points out, is that this provides the only defence of realism against the empirical underdetermination argument: thousands of false theories may imply the same body of data, but (supposedly) just one can explain it (i.e., imply it in such a virtuous way) and so be a candidate for belief (2005, 1083). So, he argues, what realism should explain is not only predictive success (i.e., why the theory entails novel data), but also explanatory success (i.e., why it has the theoretical virtues of simplicity, completeness, consilience, broad scope, plausibility, consistency with background knowledge, etc.) (2005, 1081).

5 In Alai (2013, § 3.3) I explained why data that are new but of the same type as those used to build the theory cannot count as novel to the ends of realist commitment.


However, some of the successful-but-false theories cited by Laudan, and rejected by Psillos because they lacked novel predictions, no doubt possessed the mentioned theoretical virtues. (Doppelt mentions Le Sage and Hartley’s contact-action gravitational ether theory, but the phlogiston and caloric theories and others might also be cited.) In this sense, they had explanatory success. Hence, since for realists truth is the best explanation of success, they should be committed to the truth of these theories (2005, 1081; 2007, 107). Therefore, since we know those successful theories were actually false, for Doppelt it follows that success is not evidence of truth.

Doppelt’s argument is fallacious, however, for the kind of explanatory success that is explainable only by truth is not just any sum of novel predictions and theoretical virtues, such that in the limiting case it might consist of many theoretical virtues but no novel predictions; rather, it is the conjunction of novel predictions with theoretical virtues, where the two conjuncts are both necessary. To see why this is so, suppose a theorist wants to explain a body of known data D: if she is ingenious and patient enough, by reasoning backwards from D she might well succeed in building a theory T that has explanatory success in Doppelt’s sense, i.e., which entails all the data in a way that is simple, elegant, complete, plausible, etc. However, one cannot conclude from this success that T is true: in fact, this “success” (i.e., that T entails D and has those theoretical virtues) is simply due to the general principle that for any body of data there are numberless false theories entailing them (plus a true one), and to the fact that the theorist was ingenious and patient enough to build not just any theory which entailed D, but one that entailed D in a theoretically virtuous way.

One might suspect that finding such a theory is easier said than done.6 Granted, it is certainly not easy, but it is possible, and the history of science is full of such theories that were nonetheless radically false. The Ptolemaic system was extremely ingenious; it explained everything there was to explain, and was so plausible that it was accepted for some 1400 years. Perhaps it was not exactly simple and elegant by our contemporary standards, but these are relative notions, and at least it was simple and elegant enough to be considered explanatorily successful for all those centuries. The Tychonic system, the caloric and phlogiston theories, the vortex theory of heat, etc., thanks to the intelligence and hard work of their authors, were also so explanatorily successful that they were believed to be true, and only with great difficulty were they eventually superseded by their true competitors. So, the “explanatory success” of T in Doppelt’s sense is by itself no evidence for its truth. After all, good historical novels entail a number of true claims about the historical period at hand, and do so in a simple, complete, consilient, and plausible way—but they are false!

6 I owe this suggestion to this paper’s Reviewer B.

In a word, explanatory "successes" of this kind can be accounted for without assuming the truth of T. The keys to this explanation are ingenuity, patience, and above all the possibility of working abductively from a body of previously known data, as in solving a puzzle. But if there is at least one phenomenon that T not only explains but anticipates (i.e., one that was not used in building T), then the "ingenuity and patience" explanation is no longer available; in this case the only possible explanation, which therefore becomes overwhelmingly probable, is that T has captured some structures of reality that are responsible for those novel effects, i.e., that it is (at least partly) true. So, the realist need not be committed to the truth of any theory or theoretical component, epistemically virtuous as it may be, unless it also yields some novel predictions. Therefore Psillos has no need to count as successful in the required sense those past theories that explained virtuously but yielded no novel predictions, and those theories are no counterexample to the claim that novel predictive success is evidence for truth. Summing up, Doppelt's criticism of Psillos's appeal to the novelty of predictions as a criterion of commitment is off target.
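The general principle invoked above—that any body of data is entailed by endlessly many theories, all but one of them false—admits a two-line justification. In schematic form (a reconstruction for expository purposes; the original text states the principle only informally):

\[
\text{Let } D \text{ be a consistent body of data and } \varphi \text{ any sentence undecided by } D. \text{ Then}
\]
\[
T_{1} = D \cup \{\varphi\} \models D, \qquad T_{2} = D \cup \{\neg\varphi\} \models D, \qquad T_{1} \cup T_{2} \models \bot.
\]

Iterating with further undecided sentences yields arbitrarily many mutually incompatible theories, each entailing D, of which at most one is true. Entailing the data, however virtuously, therefore cannot by itself single out the true theory.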

Partial truth

The second defensive move of Deployment Realism is the divide et impera strategy: when a discarded theory made genuine novel predictions, one may assume that it had not only false components but also some true ones, namely those responsible for such predictions. The false assumptions, instead, will be found to have played no role in those discoveries. For instance, the existence of caloric and its properties were not necessary to the novel predictions made by the caloric theory, which can be credited instead to different, true assumptions of the theory (Psillos 1999, 108-118). But Doppelt objects that such false components, idle with respect to predictive success, were necessary to the theory's explanatory success: i.e., they were an essential part of the simple, consilient, plausible, etc., account given by the theory. So, realists should either be committed to their truth or give up the idea that theoretical assumptions essential to success must be true (2005, 1084-1085; 2007, 108). But as just noted, theoretical virtues by themselves are not evidence for truth; they become a significant success only in connection with novel predictions. So, even if a false component were essential to making a theory virtuous, realists would not need to accept it as true. Moreover, it is simply not the case that the false components of the caloric theory, the phlogiston theory, etc., were essential to their explanatory success. In fact, take any of these false theories T1, which made novel predictions NP and had true components TC directly responsible for NP; a false component FC (e.g., the existence of caloric, phlogiston, etc.) would be essential to T1's success only if it were a necessary component of any other theory T2 predicting NP and at least equally general, complete, simple, plausible, etc. As a matter of fact, this is simply not the case: each of those discarded theories was rejected precisely when scientists realized that there was an alternative theory T2 that (a) dropped FC, but (b) was at least as simple, plausible, etc., and (c) made all the predictions NP and more. So, why shouldn't we accept TC, which is responsible for NP and preserved in T2, as true, while rejecting FC, which has been discarded by T2, as false? Thus, the "divide and conquer" strategy works quite well even with respect to the more robust notion of explanatory success.

Referential continuity

Psillos's third defensive move against Laudan's attack (and more generally against the pessimistic meta-induction) exploits the causal account of reference, in the footsteps of Putnam (1975b): when a term (e.g., "luminiferous ether") is involved in existential claims that are essential to novel predictions, it can be understood as actually referring to a real entity (e.g., the electromagnetic field) that shares some crucial properties with the entity postulated by the discarded theory and plays the same causal role. Thus, for instance, the ether theorists actually referred to the electromagnetic field, even if they attributed partly wrong properties to it; so their existential claims about "ether" were true, after all. Doppelt's objection to this move is very similar to the previous one: the wrong properties that the ether theorists attributed to what they called "ether" were probably essential to the explanatory success of their theory, since they were required to make the theory general, complete, simple, plausible, etc. Hence, how could we explain its success while regarding the attribution of those properties as false? (2005, 1086; 2007, 109). This objection fails for a reason similar to the earlier one: on the one hand, the fact that the ether theory, including those wrong attributions, was coherent, simple, plausible, etc., does not require an explanation in terms of truth, but merely in terms of the ingenuity of its authors. On the other hand, that such a theory predicted (hence, thanks to its virtues, explained) some novel phenomena does call for an explanation in terms of truth, but those wrong properties play no role in that explanation: in fact, even if those predictions were originally drawn from false claims attributing wrong properties, the "divide and conquer" strategy shows that they could also be derived from weaker, true claims, which were entailed by the theory but did not involve any wrong properties (see Alai 2013, §7; Alai 2016a, §3). Moreover, those weaker true claims are precisely those made by the equally or more virtuous theory (ours) which has dropped those wrong attributions. The success of the idea of the luminiferous ether, in other words, is explainable because, and to the extent that, it resembles the electromagnetic field. It has been objected that understanding "ether" as referring to the electromagnetic field may be overstretching (Worrall 1995). In fact, I don't think the causal theory of reference as such plays a crucial role in resisting Laudan's objection and the pessimistic meta-induction: if the old ether theorists could come back to learn today's physics, would they claim that (a) in their time they were actually talking about the electromagnetic field, or would they rather grant that (b) they believed in a non-existent entity? In these terms this is merely a psychological question of little interest to us. But if we ask whether they should hold (a) or (b), there is simply no fact of the matter about this question, because there is no fact of the matter about just how much descriptive difference is compatible with referential continuity. Nor would this matter much to Deployment Realists anyhow: what matters for them is whether (and how many, and which) true components are found in theories. But just as claims about real entities can be false, so claims about non-existent entities can be partially true.
For instance, there is no truth in the claim that "ether has weight," whether by "ether" we mean ether or the electromagnetic field. On the other hand, if "ether" refers to the electromagnetic field, the claim that "ether oscillates" is true, while the claim that "ether is an elastic fluid" is false. If instead "ether" just means ether, and so fails to refer to anything, the claim that "ether oscillates" is false or devoid of truth-value, yet it entails the true claim that the transparent, weightless, etc., agent responsible for the transmission of light oscillates. Summing up, none of Doppelt's criticisms refutes Psillos's defence of Deployment Realism against historical objections.

II. Is predictive success a sufficient explanandum, and is truth a sufficient explanans?

Doppelt also criticizes Deployment Realism as such: as seen above, he holds that (1) realists should explain not only predictive success, but also "explanatory" success; it follows that (2) the (partial) truth of theories is not enough to explain all there is to explain about their success. While I reject the former claim, I substantially agree with the latter, although for reasons partly different from his. What this claim reveals, however, is not a mistake, but something missing, or perhaps just left implicit, in Psillos's version of Deployment Realism. This gap can be easily filled: in fact, in his proposed reconstruction of realism Doppelt himself supplies part of what is missing, and I shall add what is needed to provide a complete account. Doppelt understands explanatory success as the capacity to offer a theoretically virtuous account of the data: therefore, "what realism must explain is why a theory succeeds in producing a simple, unifying, consilient, intuitively plausible, and empirically adequate explanation of phenomena"

(2005, 1082; 2007, 102). This formulation of the question is objectionable for two reasons. (1) Taken literally, the question is trivial, and so is the answer: the theory succeeds in giving a simple, etc., explanation because it is simple, etc. What should be asked is rather: why has the theorist succeeded in producing a theory that is simple, etc. (and so constitutes a simple, etc., explanation)? It might now be pointed out that (2) the answer becomes easy, and does not involve truth or other realist assumptions: the theorist produced a theory that was simple, etc., because she wanted to produce such a theory and was ingenious enough to succeed. The really hard question, the answer to which requires the assumption of truth, is how scientists succeed in finding theories that (besides being theoretically virtuous) are predictively successful.

Doppelt also provides what he considers an equivalent formulation of his question, although it is actually a different question: "Is it just a lucky accident that true theories turn out to be simple, consilient, unifying, and plausible, as well as empirically adequate?" (2005, 1082). That is, for the realists the predictively successful theories are true; moreover, they turn out to be simple, consilient, etc. Therefore, this conjunction of theoretical virtues and truth is another fact to be explained. Doppelt's answer is that the true theory is also simple, consilient, etc., because "nature itself is simple, unified and symmetric, in the way implied by these theories" (2007, 102). In his reconstruction of realism, therefore, a new "metaphysical" (2005, 1082) assumption, the simplicity and order of nature, is added to the usual realist claim that theories are true. But, upon reflection, this new metaphysical assumption adds nothing to what was already implicit in realism: since theories represent nature as simple and orderly, the claim that they are true already implies that nature is simple and orderly. Moreover, there is a problem with the argument leading to that conclusion: Doppelt asks why true theories turn out to be simple, consilient, etc., and empirically adequate. But who says that they are true? The realists do, on the basis of their explanatory argument! So, realists are not allowed to take the truth of theories as a datum to be explained: truth can only appear as the ultimate conclusion of the realist's argument, i.e., as the explanans of whatever other facts the realist is entitled to take for granted. A question realists are entitled to ask is rather: why are the theories that make true predictions also simple, consilient, etc.? (In fact, that theories make true predictions and that they are theoretically virtuous are not assumptions, but easily observable facts.) It might be objected that this question needs no answer, for it is just a logical fact about the theory that it has the particular consequences it has, and is as simple, etc., as it is (see White 2003). Nonetheless, there is a real problem here, which is brought up by

a more explicit formulation of the realist question: how do scientists succeed in producing theories that make true predictions and are simple, consilient, etc.? As I argued above, the explanation for the simplicity, consilience, etc., can well be the theorists' ingenuity and their desire to have theories with such virtues, and the explanation for true but non-novel predictions can be the theorists' ingenuity and their desire to accommodate those known phenomena. Only true novel predictions cannot be explained in this way. Hence, they are the only basis of the realist explanatory argument: pace Doppelt, realists don't need to explain explanatory success, but only predictive success.

III. The realist explanation and what it requires besides truth

Although Doppelt's claim that truth is not a sufficient explanans is derived from the wrong thesis that predictive success is not a sufficient explanandum, it is still true, albeit for different reasons. We can appreciate this by spelling out the realist IBE in its questions and answers.

Question (A): Why and how do scientists succeed in producing theories that make true novel predictions?

Answer (A): Because they produce theories including theoretical assumptions close enough to the truth and deep and fruitful enough to predict further significant phenomena in addition to those used in their construction.

But Answer (A) as it stands is quite unsatisfactory. Why not just answer that (A′) theorists are lucky enough to produce theories with unexpected true consequences? (After all, the chances of finding empirically adequate theories are much greater than those of finding true theories.) So Answer (A) is implausible unless we in turn explain it: in order to make it more plausible than Answer (A′), we need to explain how finding (partially) true theories is possible. In this sense Doppelt is right that truth per se is not a sufficient explanans. So, in order to answer Question (A) we also need to answer the following:

Question (B): How do theorists succeed in producing theories that include theoretical assumptions close enough to the truth and deep and fruitful enough to predict further significant phenomena in addition to those used in their construction?

The answer (as suggested by Maher 1988 and White 2003) is roughly that theorists succeed in producing such theories because:

Answer (B): Theorists (B.i) aim to produce true, deep, and fruitful theories, and (B.ii) employ the scientific method, which is reliable and heuristically effective, i.e., truth-conducive and favouring novel discoveries.

However, if the reliability and heuristic effectiveness of the scientific method are not to remain an empty virtus dormitiva, they should in turn be explained and motivated: we should explain what makes the scientific method truth-conducive and what makes it favour novel discoveries. So, in order to answer Question (B), and hence Question (A), we also need to answer the following:

Question (C): Why is the scientific method so reliable and heuristically effective?

The realist answer, now, is:

Answer (C): Because (C.i) the scientific method endeavours to explain observed phenomena by theoretical hypotheses whose consequences go beyond those phenomena and tend to be very general; (C.ii) it respects empirical constraints; (C.iii) it is based on the assumption that nature is simple, symmetrical, consilient, etc., and thus it reconstructs unobservable natural systems by analogy, abduction, and inductive extrapolation from observable ones; moreover, (C.iv) nature actually is simple, symmetrical, consilient, etc.; and (C.v) any background theories employed by theorists or presupposed by their method (see Boyd 1981) are themselves true, since they are derived by sound scientific method.7

So, Answer (C) explains Answer (B), which explains Answer (A), which in turn explains the only data to be explained, novel predictions. The full-blown realist argument is that Answer (A) is the best (nay, the only non-miraculous) explanation of predictive success, but only if coupled with Answer (B); and Answer (B) is untenable unless supported by Answer (C).
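Schematically, the explanatory chain just described runs as follows (a summary reconstruction of the above, not notation from the original text):

\[
\text{Answer (C)} \;\Longrightarrow\; \text{Answer (B)} \;\Longrightarrow\; \text{Answer (A)} \;\Longrightarrow\; \text{novel predictive success},
\]

while the IBE itself runs in the opposite direction: from the observed datum of novel predictive success, one infers (A), and with it (B) and (C), as its only non-miraculous explanation.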
Therefore Answers (A), (B), and (C) are all required to make up the only plausible explanation of predictive success. Hence, they are all part of the conclusion of an IBE from predictive success (the "no miracle" argument), and realists are committed to all of them. It may be observed that (B.i), (C.i), (C.ii), and (C.iii) are rather uncontroversial, as they describe plainly observable facts. In contrast, (B.ii), (C.iv), and (C.v), as well as (A) itself, are rather bold claims, concerning epistemic and metaphysical facts which are not open to direct empirical ascertainment; therefore these claims can only be argued for abductively,

7 The presupposition of earlier theories by the theorist or by her method does not launch an infinite regress, since the earliest, “take-off” theories were introduced without the theorist relying on previous theories or on theory-dependent methods (Barnes 2008, 146-155). This does not mean that we have no reason to assume that those earliest theories were (partly) true, because the claim (C.v) that background theories are true is required only when background theories are employed, which is obviously not the case for take-off theories.

and supporting them is a remarkable achievement of Deployment Realism.8 In particular, (C.iv) is nothing but Doppelt's metaphysical assumption of the simplicity, consilience, etc., of nature. But I am arguing that this is just one of the further explanatory factors required to explain and make plausible the assumption that theories are true, which in turn is the only plausible explanation of novel predictive success. Claims (B.ii), that theorists employ the scientific method, and (C.iii), that the scientific method presupposes the simplicity, consilience, etc., of nature, together explain what Doppelt calls the explanatory success of theories, i.e., why they are simple, consilient, etc. But, contrary to Doppelt's claim, these theoretical virtues do not figure in the realist IBE as the primary explanandum, as novel predictions do. Rather, together with the other necessary assumptions, these virtues work to explain novel predictions. In this sense, nothing really needs to be added to the argument from predictive success, once it is made fully explicit. Summing up, Doppelt's criticisms don't refute Deployment Realism; at most they help to spell it out more completely.

Mario Alai
Department of Pure and Applied Sciences
University of Urbino
[email protected]

References

Alai, Mario. 2012. Levin and Ghins on the "No Miracle" Argument and Naturalism. European Journal for Philosophy of Science 2 (1): 85-110.
Alai, Mario. 2013. Novel Predictions and the No Miracle Argument. Erkenntnis 78 (3): 1-30. DOI: 10.1007/s10670-013-9495-7.
Alai, Mario. 2014a. Explanatory Realism. In Science, Metaphysics, Religion: Proceedings of the Conference of the International Academy of Philosophy of Science, Siroki Brijeg 24-24 July 2013, ed. E. Agazzi, 99-116. Milano: Franco Angeli.
Alai, Mario. 2014b. Defending Deployment Realism against Alleged Counterexamples. In Defending Realism: Ontological and Epistemological Investigations, eds. G. Bonino, G. Jesson, and J. Cumpa, 265-290. Boston-Berlin-Munich: De Gruyter.
Alai, Mario. 2014c. Why Antirealists Can't Explain Success. In Metaphysics and Ontology Without Myths, eds. F. Bacchini, S. Caputo, and M. Dell'Utri, 48-66. Newcastle upon Tyne: Cambridge Scholars Publishing.

8 For a more extensive discussion of this topic see Alai (2014a), §5, and Alai (2016b), §4.


Alai, Mario. 2016a. Resisting the Historical Objections to Realism: Is Doppelt's a Viable Solution? Synthese: 1-24. First online: 21 April 2016. DOI: 10.1007/s11229-016-1087-z.
Alai, Mario. 2016b. The No Miracle Argument and Strong Predictivism vs. Barnes. In Model-Based Reasoning in Science and Technology: Logical, Epistemological, and Cognitive Issues, eds. L. Magnani and C. Casadio, 541-556. Cham: Springer.
Barnes, Eric C. 2008. The Paradox of Predictivism. Cambridge: Cambridge University Press.
Boyd, Richard. 1981. Scientific Realism and Naturalistic Epistemology. In PSA 1980, vol. 2, eds. P. D. Asquith and T. Nickles, 613-662. East Lansing, MI: Philosophy of Science Association.
Boyd, Richard. 1984. The Current Status of Scientific Realism. In Scientific Realism, ed. J. Leplin, 41-81. Berkeley: University of California Press.
Doppelt, Gerald. 2005. Empirical Success or Explanatory Success: What Does Current Scientific Realism Need to Explain? Philosophy of Science 72: 1076-1087.
Doppelt, Gerald. 2007. Reconstructing Scientific Realism to Rebut the Pessimistic Meta-induction. Philosophy of Science 74: 96-118.
Doppelt, Gerald. 2011. From Standard Scientific Realism and Structural Realism to Best Current Theory Realism. Journal for General Philosophy of Science 42: 295-316.
Doppelt, Gerald. 2013. Explaining the Success of Science: Kuhn and Scientific Realists. Topoi 32: 43-51.
Doppelt, Gerald. 2014. Best Theory Scientific Realism. European Journal for Philosophy of Science 4: 271-291.
Gardner, Michael R. 1982. Predicting Novel Facts. British Journal for the Philosophy of Science 33 (1): 1-15.
Kitcher, Philip. 1993. The Advancement of Science. Oxford: Oxford University Press.
Laudan, Larry. 1981. A Confutation of Convergent Realism. Philosophy of Science 48: 19-49.
Leplin, Jarrett. 1997. A Novel Defence of Scientific Realism. Oxford: Oxford University Press.
Lyons, Timothy D. 2002. The Pessimistic Meta-Modus Tollens. In Recent Themes in the Philosophy of Science: Scientific Realism and Commonsense, eds. S. Clarke and T. D. Lyons, 63-90. Dordrecht: Kluwer.
Maher, Patrick. 1988. Prediction, Accommodation and the Logic of Discovery. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1: 273-285.
Psillos, Stathis. 1999. Scientific Realism: How Science Tracks Truth. London and New York: Routledge.
Putnam, Hilary. 1975a. Mathematics, Matter and Method: Philosophical Papers, vol. 1. Cambridge: Cambridge University Press.


Putnam, Hilary. 1975b. The Meaning of "Meaning." In Philosophical Papers, vol. 2: Mind, Language and Reality. Cambridge: Cambridge University Press.
Putnam, Hilary. 1978. Meaning and the Moral Sciences. London: Routledge.
White, Roger. 2003. The Epistemic Advantage of Prediction over Accommodation. Mind 112 (448): 653-683.
Worrall, John. 1985. Scientific Discovery and Theory-Confirmation. In Change and Progress in Modern Science, ed. Joseph C. Pitt, 303-331. Dordrecht: Reidel.
Worrall, John. 1995. Il realismo scientifico e l'etere luminifero. In Realismo/antirealismo, ed. A. Pagnini, 167-203. Florence: La Nuova Italia.
Worrall, John. 2005. Prediction and the "Periodic Law": A Rejoinder to Barnes. Studies in History and Philosophy of Science 36: 817-826.

Being Realistic: The Challenge of Theory Change for a Metaphysics of Scientific Realism

Author(s): Kerry McKenzie

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 136-142.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.26998


Focused Discussion Invited Paper

Being Realistic: The Challenge of Theory Change for a Metaphysics of Scientific Realism*

Kerry McKenzie†

In recent years, philosophers have taken pains to emphasize the relevance of modal metaphysics for the debate over scientific realism. While I have no wish to deny that a well-worked-out defence of scientific realism will likely require substantive modal commitments at some stage, I nevertheless worry that emphasizing the continuities between scientific and metaphysical theories risks obscuring the profound differences between them. One such difference, I would argue, is that while it seems entirely uncontroversial—at least from a realist point of view—to hope that science can discover truths about the unobservable prior to the emergence of a fundamental scientific theory, it seems that paradigmatic claims of metaphysics must await such a theory before we can assert them with any naturalistic justification. Given that we do not yet possess this sought-for fundamental theory, were modal theories really necessary for scientific realism, then the prospects for justifying realism in the present would seem to darken considerably.

I take the debate over realism to be the debate over which, if any, of the claims constitutive of contemporary scientific theories we are justified in believing. In my own domain of interest—high-energy physics—the question seems particularly acute. Its high degree of phenomenological remove renders its claims all the more inferential in character, and correspondingly all the more likely to diverge wildly from the truth. But its high degree of mathematization means that it can be hard to put into words what the propositional content of those claims is in the first place. Those holding out hope for a realist interpretation of particle physics must therefore ask, firstly, what portions of our present physics theories are justified objects of belief, and secondly, what is it exactly that we are believing in when we believe those claims?

* Received August 2, 2016. Accepted August 2, 2016.
† Kerry McKenzie (UC San Diego) is a metaphysician of science primarily focusing on questions of fundamentality, structuralism, and naturalism.


Recent landmark works in the realism debate—perhaps most explicitly that of Anjan Chakravartty (2007)—have argued that answering both of these questions requires the development of a theory of modal metaphysics. It does after all seem incumbent upon realists to provide, at a minimum, a coherent story about how knowledge of the unobservable could even be possible, let alone realized in fact; indeed, given van Fraassen's attacks on realism's key inferential strategies, many now regard this as all for which realists can reasonably hope (van Fraassen 1980). And it also seems clear enough that success to this end requires the development of a metaphysics of laws, properties, and causation. But one could also make an argument that metaphysical annotation is required merely in order for the claims of high-energy physics to serve as proper objects of belief in the first place. For it seems that in order for us to believe p we need to be in a position to say what the world would have to be like in order for p to be true; and for that to be the case in the physics context, some story about what physical objects are, what laws are, and how each relates to the other, etc., seems to be necessary. Certainly, a merely disquotational approach to belief seems distinctly unconvincing when what are disquoted are highly abstract mathematical expressions purportedly describing items with which we will never be acquainted.

For all that this engagement between science and modal metaphysics is well-motivated, however, it presents a considerable and so far largely unaddressed problem for the realism debate (an exception here is Monton 2011). While one can question the extent to which "thick" modal commitments are necessary prerequisites of realism—there are plenty of Humeans who at least aspire to realism, after all—let us suppose that some kind of necessitarian view naturally accompanies realism.1 As often noted, there are two principal kinds of approaches to understanding natural necessity to be found in the literature: those that postulate dispositional essences—that is, that analyze the nature of properties in terms of certain subjunctive conditionals—and those that posit a primitive relation of "contingent necessitation" between universals (Chakravartty 2007, 128). Although located in different "places," each theory thus posits primitive modal connections between fundamental kind properties. Moreover, each regards the kind properties themselves as intrinsic properties of objects.2 Since the two theories are incompatible, at most one can be true, and the standard lore is that which theory this is is a fact underdetermined by the physics. But it is by now part of the "metaphysical stance" to be comfortable

1 Arguments parallel to those about to follow may be mounted in the Humean context.
2 To my knowledge, this is only implicit in Armstrong's view but follows from their being subject to free recombination.

with that. However, while both of these views (explicitly or otherwise) present themselves as a metaphysics of modality applicable to fundamental laws and properties, a case can be made that the modal landscape at the most fundamental levels—at least, the most fundamental levels we are currently equipped to theorize about—has a very different structure from that which these views present.3 While I am by no means the only person to have made this point, I will focus here on my own work on laws and kind properties in the context of fundamental quantum field theories.4

What one means by a "fundamental law" in this context is, at a minimum, a law that is consistent with the basic principles of quantum theory and Lorentz invariance, and that remains so even in the limit in which interaction energies grow arbitrarily large. It turns out that this is an extraordinarily difficult constraint to satisfy, as it may be shown that only special combinations of fields and special forms of interaction can actually succeed in doing so. In other work, I have argued that as a result there is room for an interpretation of fundamental laws that is at once necessitarian and Humean. It is necessitarian in the sense that there is only one way for a given set of fundamental fields to behave, at least up to determination of the constants, but Humean in the sense that, once we have posited fundamental quantum fields, it is merely mathematico-logical constraints that do all the work in determining the unique laws with which they accord.5 But since mathematico-logical constraints are regarded as kosher for Humeans, it seems there is scope to argue that this necessity is one a Humean can accept. I have also argued that—due to the delicate balance in field content necessary for maintaining consistency—it is extremely difficult to maintain the posit that the fundamental kind properties are intrinsic features of the fundamental objects that possess them (McKenzie 2016). However, note that neither of these positions seems to find any motivation in classical physics, or indeed any theoretical context other than fundamental quantum field theories: we have no reason to think it is anything other than fundamental theories, framed in the language of quantum theory and relativity, for which these conclusions can be drawn. It seems, then, that there is good reason to think that the two main views discussed in the metaphysics

3 That it is fundamental laws that are of interest is explicit in, e.g., Armstrong (1993, 242). Since an image of a Feynman diagram appears on the cover of Chakravartty (2007), I assume he takes his metaphysics to be applicable in quantum-field-theoretic domains too.
4 See, e.g., Cei and French (2014), or any critics of the framework of Lewisian modal metaphysics appealing to features of quantum theory.
5 See McKenzie (2014). Since writing that paper I have realized that this argument requires modification, for reasons that I do not have space to enter into here. Suffice it for now to say that the basic moral of the argument, for present purposes, remains.

of science are at best further underdetermined by, but more likely simply incompatible with, the most fundamental science that we know of.

While all of this is (predictably) contestable, the basic moral should not come as a surprise to any metaphysician with naturalistic sympathies: as physics changes, certain metaphysical interpretations of the world can become increasingly unnatural or even untenable. So far, my argument simply underlines that modal metaphysics is no different in this regard. Thus, in contrast to what seems to be implicit in much metaphysical practice, we cannot use modal theories developed in the context of non-fundamental science as a reliable guide to the modal options appropriate to more fundamental domains. And since most of us would (I take it) hold that the true metaphysics of the world is that which is appropriate at the most fundamental level, and since we know—given the problem of gravity—that even quantum field theory is unlikely to truly describe such a level, there is clearly reason to think that the options will change in future once again. Of course, there is a sense in which we might see it as a plus that our modal metaphysics, far from being disengaged from physics, turns out to be sensitive to it (even if not in general wholly determined by it). But the price of that engagement is that metaphysical theories likewise become subject to a problem of theory change analogous to that often taken to pose the greatest challenge to scientific realism—namely, Laudan's "pessimistic induction" (1981). As such, anyone who regards the provision of an accompanying modal story as necessary for realism must have something to say about how this problem of theory change in the case of metaphysical theories is to be circumvented. But here the prospects are not good.

To see why, it pays to compare the problem at hand with the ordinary problem of scientific theory change. To take a central example, we all know that the transition from the classical to the present quantum mechanical theories of matter involved enormous conceptual changes. Without reason to think that present quantum theories are immune from future supplanting, there is reason to think that some change of comparable proportions will be visited upon us once again. And if we come to believe in the process that the world is profoundly different to the way our current best theories describe it, then anti-realism about current theories seems the only honest intellectual option. But such a conclusion would of course be too harsh should our present theories turn out to be approximations of those more fundamental descriptions, or—more precisely—approximations of what those theories imply as we tune their parameters between relevant limits.
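The kind of inter-theoretic limiting relation at issue can be illustrated with a standard textbook case (our illustration, not one McKenzie gives here): the relativistic energy of a massive particle recovers the Newtonian kinetic energy, rest energy aside, as v/c becomes small,

\[
E = \frac{mc^{2}}{\sqrt{1 - v^{2}/c^{2}}} = mc^{2} + \frac{1}{2}mv^{2} + \frac{3}{8}\,\frac{mv^{4}}{c^{2}} + \cdots,
\]

so that the classical expression $\tfrac{1}{2}mv^{2}$ reappears as the leading velocity-dependent term when $v \ll c$.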

Moreover, there is good historical reason to think this is indeed going to be possible, since such continuity has in crucial cases been secured in the past—in particular, in the case of the quantum-classical transition. For thanks to the development of decoherence theory, we now have a compelling story about how the laws and ontologies of these two frameworks relate, since we can by this point affirm that many paradigmatic classical systems are, with respect to both their ontology and their dynamics, approximations of what relevant paradigmatic quantum-theoretic models imply in an appropriate limit (Rosaler 2016; Myrvold 2015). Since—in this crucial historical case at least—continuity at both the nomic and ontological levels can be restored, there are grounds to regard Laudan's pessimistic conclusions from the facts of theory change as overly despondent. Although this obviously requires a much fuller defence and discussion than I can give it here, let us suppose that we can grant the point.

What matters for present purposes is that even if we can grant this much in the scientific context (at least in paradigmatic cases), it nevertheless may not, and indeed arguably simply cannot, make sense to say that a metaphysical theory—say, a dispositional essentialist view, with its commitment to properties both intrinsic and essentially dispositional—is an "approximation" to any other modal view. The reason is that being intrinsic is contrasted only with being extrinsic; but since the latter is its logical complement, there seems to be nothing "in between" intrinsic and extrinsic for the former to approximate. Nor, from the point of view of modal metaphysics, would there be any clear purpose served by introducing some notion "intermediate" between intrinsic and extrinsic: for one thing, any external dependence on fundamental property instantiation whatsoever will preclude the applicability of the principles of free recombination lying at the heart of Lewis's and Armstrong's metaphysical systems (see, e.g., Armstrong 1989, 47-48). Similarly, a property's being essentially dispositional is typically contrasted only with its being categorical—that is, with being such that it does not essentially involve any dispositional element (Bird 2007). Nor again is it clear that there is any room for a notion "in between" categorical and dispositional here: it simply doesn't make sense to speak of just a "dash" of primitive modality in the nature of a property, for example. As such, there would seem to be little sense to the claim that being essentially dispositional "approximates" some other way that a property could be, and similarly with "intrinsic" and "extrinsic." In sum, then, it seems that a metaphysical claim cannot meaningfully be viewed as an "approximation" of some other claim, but rather only as alike or different in certain respects. For similar reasons, it is difficult to fathom (for me anyway) how a metaphysical interpretation of a physical theory could be said to "imply" that of another in the limits that take us between the physical theories themselves. For in comparison with the carefully quantified subject matters of the theories produced by science, the metaphysical categories we impose upon them have a crude or "clunky" character that makes them in principle unamenable to the manipulations through which more fundamental laws and properties gradually "deform"

into their less fundamental counterparts. Hence with regard to theories involving paradigmatically metaphysical concepts, it seems one could have only displacement and not any kind of continuity, and moreover on a priori grounds—for so long as metaphysics deals in these "clunky" categories, the notion of continuity between them simply makes no sense.

Here then is the problem. It is held that some kind of modal story must accompany our articulations and defences of realism. But the history of physics suggests that we cannot identify, on the basis of current theories, even the appropriate options for how such a story might go. Moreover, unlike in the scientific case, it seems incomprehensible how any modal theory developed on the basis of current theories could be thought of as an "approximation" to anything that a more fundamental modal theory implies, and hence also how there could be anything resembling cumulative progress toward the goal of developing the right metaphysics. Since it seems likely that an as-yet undiscovered, fundamental scientific theory will be significantly different from our present theories, and that that difference will have correspondingly significant implications for the modal theory appropriate to it, it seems foolish to believe that developing a metaphysics apt for current theories is an activity that can generate any knowledge at all. Unless it serves simply as a warm-up exercise or a way to pass the time, what purpose is currently served by agonizing over metaphysical questions is very much an open question.

The history of philosophy is peppered with observations that science seems to make stunning progress while metaphysics does not. But perhaps there are grounds to think that it could not be any other way. The above considerations prompt a thought about what makes science, particularly mathematized science, uniquely powerful from an epistemic point of view: it suggests itself as the only form of theoretical activity that puts us in a position to say that we know anything before we know we've got things exactly right. But it also leaves us asking whether, instead of arguing over the metaphysical interpretation of theories that we know are not fundamental, our time as philosophers would be better spent addressing those problems in institutionalized science that might impede our progress toward the elusive fundamental theory. For it seems only when we have that fundamental theory will there be any reason to do metaphysics at all.

Kerry McKenzie
Department of Philosophy
University of California, San Diego
9500 Gilman Dr.
La Jolla, CA 92093


USA
[email protected]

References

Armstrong, David M. 1989. A Combinatorial Theory of Possibility. Cambridge: Cambridge University Press.
Armstrong, David M. 1993. A World of States of Affairs. Cambridge: Cambridge University Press.
Bird, Alexander. 2007. Nature's Metaphysics: Laws and Properties. Oxford: Oxford University Press.
Cei, Angelo, and Steven French. 2014. Getting Away from Governance: A Structuralist Approach to Laws and Symmetries. Méthode—Analytic Perspectives 3 (4): 25-48.
Chakravartty, Anjan. 2007. A Metaphysics for Scientific Realism: Knowing the Unobservable. Cambridge: Cambridge University Press.
Laudan, Larry. 1981. A Confutation of Convergent Realism. Philosophy of Science 48 (1): 19-49.
McKenzie, Kerry. 2014. In No Categorical Terms: A Sketch for an Alternative Route to Humeanism about Fundamental Laws. In New Directions in the Philosophy of Science, eds. Maria Carla Galavotti et al. Springer.
McKenzie, Kerry. 2016. Looking Forward, Not Back: Supporting Structuralism in the Present. Studies in History and Philosophy of Science (Part A). Online first. DOI: 10.1016/j.shpsa.2016.06.005.
Monton, Bradley. 2011. Prolegomena to Any Future Physics-Based Metaphysics. In Oxford Studies in Philosophy of Religion, vol. 3, ed. Jon Kvanvig. Oxford: Oxford University Press.
Myrvold, Wayne C. 2015. What Is a Wavefunction? Synthese 192 (10): 3247-3274.
Rosaler, Joshua. 2016. Interpretation Neutrality in the Classical Domain of Quantum Theory. Studies in History and Philosophy of Modern Physics 53: 54-72.
Van Fraassen, Bas. 1980. The Scientific Image. Oxford: Clarendon Press.

The Relevance of Evidence from the History of Science in the Contemporary Realism/Anti-realism Debate

Author(s): K. Brad Wray

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 143-145.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.26986


Focused Discussion Invited Paper

The Relevance of Evidence from the History of Science in the Contemporary Realism/Anti-realism Debate*

K. Brad Wray†

I aim to clarify the relevance of evidence from the history of science to the contemporary realism/anti-realism debate, since the need for such evidence is often misunderstood or misrepresented.

First, consider the role of evidence from the history of science in anti-realist arguments—in particular, its role in one version of the Pessimistic Induction. Some anti-realists (allegedly) appeal to the history of science in an effort to construct an enumerative induction to support the conclusion that today's best theories, despite their many explanatory and predictive successes, are likely to be discarded in the future. Following the straight rule of induction, this sort of anti-realist (allegedly) argues that most past successful theories have since been discarded and replaced by different theories that make radically different assumptions about the nature of reality. Thus, the history of science is not, as many realists argue, a steady progression of modified theories that (i) retain the successes of past theories, and (ii) explain the successes of earlier theories by appeal to the same sorts of mechanisms and entities posited by earlier theories. Absent some distinguishing feature that would make contemporary successful theories less prone to being discarded than their predecessors, it seems there are good inductive grounds for believing that our contemporary theories will also be discarded in the future and replaced by theories that make radically different ontological assumptions about unobservable aspects of reality (Putnam 1975; Wray 2015).
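Schematically, the enumerative induction just described has the form of a statistical syllogism; this rendering is our gloss, not Wray's:

\[
\begin{aligned}
&\text{P1: } \Pr(\text{discarded} \mid \text{successful past theory}) \text{ is high};\\
&\text{P2: our current best theories are successful, and nothing relevantly distinguishes them from their predecessors};\\
&\text{C: } \Pr(\text{our current best theories will be discarded}) \text{ is high}.
\end{aligned}
\]

Wray's point below is that the anti-realist can do without P1's full inductive base: even a few radical discontinuities suffice to undercut the realist's continuity claim.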

* Received July 27, 2016. Accepted July 27, 2016.
† K. Brad Wray joined the Centre for Science Studies at Aarhus University in Denmark in July 2017. Brad has published extensively on the anti-realism/realism debate, Thomas Kuhn's philosophy of science, and the social epistemology of science. His book Kuhn's Evolutionary Social Epistemology was published by Cambridge University Press in 2011. He is one of the co-editors of the Springer journal Metascience.


The Pessimistic Induction has been criticized on a number of grounds. Some critics claim that it commits some sort of fallacy: the Base Rate Fallacy, the Turnover Fallacy, or the False Positives Fallacy (Magnus and Callender 2004; Lange 2002; Lewis 2001). Others have taken issue with particular cases of discarded theories that have figured in the debate, cases allegedly supporting the anti-realists' enumerative induction. These critics argue that specific theories that anti-realists identified as successful but false were either (i) successful and approximately true, or (ii) unsuccessful, and hence incapable of supporting the anti-realists' induction (Hardin and Rosenberg 1982; Psillos 1996).1

Setting the concerns about the Pessimistic Induction aside, I believe that realists have exaggerated the anti-realists' need for evidence. Any radical change of theory in a field threatens to undermine the realist's claim that changes of theory generally preserve the successes of the theories they replace by appeal to the same mechanisms and entities. Any discontinuities caused by radical theory changes threaten scientific realism, or at least certain forms of realism. Thus, an enumerative induction is unnecessary.

What is the relevance of evidence from the history of science to the realists' arguments? Realists have been less than transparent about their appeals to evidence from the history of science. But many realists claim that various theoretical values or virtues, such as simplicity, breadth of scope, etc., are correlated with theoretical truth; that is, theories that embody these virtues are more likely true than not (McMullin 1993). Claims to the effect that simplicity or breadth of scope is correlated with theoretical truth are empirical claims, and thus require evidence in their support. One way to justify such claims is by appeal to an inductive argument. But realists have not provided evidence gathered systematically from the history of science to support claims such as: (i) simple theories are likely true or approximately true, or (ii) theories broad in scope are likely true or approximately true. Instead, many realists seem dogmatically to assume that the alleged connection is beyond dispute. I take issue with this assumption.

There is an alternative route to justifying these sorts of methodological claims. The realist might take a Popperian attitude toward these claims about the relationship between the theoretical values and theoretical truth: she might hypothesize such a link, and be open to rejecting it if the evidence suggests otherwise. Clearly, there have been many cases in the history of science where a simpler theory was inferior to a more complex theory, or a theory broad in scope was inferior to a theory narrower in scope. Confronted with such cases, the realist need not wholly reject her methodological claims. The realist could refine the claims, specifying, for example, under what specific conditions a simple theory is

1 Laudan (1981) provides the definitive list of theories that are alleged to be successful but false.

more likely true than a more complex theory. This revised methodological hypothesis would then be subjected to testing. But realists have not pursued this strategy, at least not in any systematic way. It is widely assumed that it is the anti-realist who stakes his case on evidence from the history of science. But I have argued here that (i) realists have failed to recognize the need to collect evidence from the history of science to support their methodological claims, and (ii) anti-realists do not rely on evidence from the history of science to the extent that many suggest.

K. Brad Wray
Centre for Science Studies
Department of Mathematics
Aarhus University
Ny Munkegade 118
Aarhus C DK-8000
Denmark
[email protected]

References

Hardin, Clyde L., and Alexander Rosenberg. 1982. In Defense of Convergent Realism. Philosophy of Science 49 (4): 604-615.
Lange, Marc. 2002. Baseball, Pessimistic Inductions, and the Turnover Fallacy. Analysis 62 (4): 281-285.
Laudan, Larry. 1981. A Confutation of Convergent Realism. Philosophy of Science 48 (1): 19-49.
Lewis, Peter J. 2001. Why the Pessimistic Induction is a Fallacy. Synthese 129 (3): 371-380.
Magnus, P. D., and Craig Callender. 2004. Realist Ennui and the Base Rate Fallacy. Philosophy of Science 71 (3): 320-338.
McMullin, Ernan. 1993. Rationality and Paradigm Change in Science. In World Changes: Thomas Kuhn and the Nature of Science, ed. P. Horwich, 55-78. Cambridge, MA: MIT Press.
Psillos, Stathis. 1996. Scientific Realism and the "Pessimistic Induction." Philosophy of Science 63 (Proceedings): S306-S314.
Putnam, Hilary. 1975. Mathematics, Matter and Method: Philosophical Papers, vol. 1. Cambridge: Cambridge University Press.
Wray, K. Brad. 2015. Pessimistic Inductions: Four Varieties. International Studies in the Philosophy of Science 29 (1): 61-73.

Four Challenges to Epistemic Scientific Realism—and the Socratic Alternative

Author(s): Timothy D. Lyons

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 146-150.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.26993


Focused Discussion Invited Paper

Four Challenges to Epistemic Scientific Realism—and the Socratic Alternative*

Timothy D. Lyons†

Pivotal to epistemic scientific realism is the no-miracles argument (NMA): the success of our scientific theories would be miraculous were they not at least approximately true. It is this argument that justifies believing the meta-hypothesis that our successful theories are at least approximately true. There are essentially four challenges, two primary and two secondary, to NMA and the realist's meta-hypothesis. Revealing the individual force of each challenge, as well as the relations between them, requires a set of clarifications that depart from the common interpretations of those challenges. (The arguments for each point made here are detailed in the cited texts.)

A primary challenge is the historical argument, central to which is a list of theories that are successful but cannot, by present lights, be approximately true. Lists that challenge contemporary realism—including sophisticated variants of the meta-hypothesis and of NMA—can be found in Lyons (2002, 2006, 2016a). The inference from such a list is not an induction to the falsity of our current theories; it is a logically valid modus tollens that shows that the realist's meta-hypothesis is false. In that case, it cannot even be accepted as a 'fallible' or 'defeasible'—let alone a 'likely'—conjecture. That conclusion is one dimension of the modus tollens. The second dimension, however, is that the list of "miracles"—a list of successes left inexplicable by realists—reveals the falsity of the sole premise of NMA, rendering wholly unacceptable the sole justification for believing the realist meta-hypothesis. (See Lyons 2002, 2006, 2015.) With the modus tollens in place, it becomes clear that any persuasiveness of NMA is merely psychological, and here is one respect in which increasing the quantity of items on the list becomes important: doing so proportionally destroys any psychologically residual hope realists cling to for NMA, and so for justifiably believing their false meta-hypothesis. (In Lyons [2016a], I also show that a weakened statistical

* Received August 1, 2016. Accepted August 1, 2016. † Timothy D. Lyons is Chair of Philosophy and Professor, Philosophy of Science, at Indiana University–Purdue University Indianapolis. Research relevant to this article was supported by the United Kingdom Arts and Humanities Research Council Grant, AH/L011646/1: Contemporary Scientific Realism and the History of Science.

version of the epistemic realist's meta-hypothesis—"it is statistically likely that our successful theories are approximately true"—suffers the same fate.)
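The logical form of the historical challenge just described can be made explicit. Schematically (a reconstruction, not notation from the original text):

\[
\begin{aligned}
&(\mathrm{R})\;\; \forall T\,\bigl(\mathrm{Successful}(T) \rightarrow \mathrm{ApproxTrue}(T)\bigr) && \text{realist meta-hypothesis}\\
&\mathrm{Successful}(T^{*}) \wedge \neg\,\mathrm{ApproxTrue}(T^{*}) && \text{for some listed past theory } T^{*}\\
&\therefore\; \neg(\mathrm{R}) && \text{modus tollens}
\end{aligned}
\]

A single counterinstance suffices for the deduction; each further item on the list adds another instance of the second premise, which is why lengthening the list erodes whatever psychological force NMA retains.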

the original instantiated relations, the purported lack of the competitors’ explanatory virtues cannot suffice to deny the approximate truth of those competitors. With the competitor thesis secured in this empirical way, and insofar as epistemic realism entails the denial of that thesis (Lyons 2009, 2014, 2015), it follows that epistemic realism is false: it is not the case that we can justifiably believe that our successful theories are approximately true. Moreover, the indefinitely many competitors qualify as empirically informed but ultimately ahistorical counterinstances in the modus tollens. Not only must the NMA be rejected outright, but we also have an explanation for the original list: those historical counterinstances were among the indefinitely many non-approximating theories that, like our current theories, share the range of successes they achieved (see Lyons 2015). (Tying this into the challenge of non-realist meta-explanations, one can take this as a syntactic expression of the modest surrealist’s, semantic, meta-explanation.) With these four interlocking challenges clarified, a serious threat to epistemic scientific realism emerges. My proposed alternative (2005, 2011, 2012, 2015, 2016a, forthcoming) is a non-epistemic, purely axiological—or, as I’ve called it, Socratic—scientific realism. Against antirealists such as van Fraassen and Laudan, central to this position is a wholly realist, but refined, meta-hypothesis that retains deep theoretical truth as the aim of science: in changing its theoretical systems, the scientific enterprise seeks, not truth per se, but an increase in a particular subclass of true claims, those whose truth—including deep theoretical truth—is experientially concretized. For coherence, this axiological meta-hypothesis is directed at the very aim it describes, and it must live up to what it demands. The following four points are among those that bear on its promise. First, the threats above, minimally, have forced epistemic realists to narrow their meta-hypotheses to only a small class of constituents achieving rare success; most of what goes on in science is lost. By contrast, liberated from the chains of semantic belief, the axiological meta-hypothesis is put forward to account for the many and multi-varied features of science, not only theorizing and theory choices but high- and low-level auxiliary modifications, idealizations, experimental design, theory-laden data selection, etc. Second, irrespective of whether we can say the postulated semantic goal has been achieved, its achievement entails a set of syntactically identifiable theoretical desiderata that are agreed to be central to theory choice: an increase in empirical accuracy and consistency, and the retention, if not an increase in, breadth of scope, , and three kinds of simplicity. And the quest for that primary goal promotes a fourth kind of simplicity, along with explanatory depth, the confirmation of novel predictions, and explanatory power. Third, because the axiological meta-hypothesis promotes or even requires those eleven desiderata, it can explain and, crucially, justify their collective pursuit; while, fourth, the posit

of antirealist goals—say, empirical adequacy, problem-solving effectiveness, or the set taken alone—cannot (see Lyons 2005). Striving to maintain a Socratic epistemic humility in the quest for truth—seeking truth without claiming to possess or approximate it—this axiological meta-hypothesis is not asserted as an object of belief. Rather, in accord with a Socratic scientific realist treatment of other empirical hypotheses, it is offered as a testable tool for further empirical inquiry. Despite over three decades of unavoidable epistemic realist retreats, Socratic scientific realism endeavors to provide an encompassing account of the scientific enterprise.

Timothy D. Lyons Chair of Philosophy Professor, Philosophy of Science Department of Philosophy Indiana University–Purdue University Indianapolis Cavanaugh Hall, 333 425 University Blvd Indianapolis, IN, USA 46202 [email protected]

References

Lyons, Timothy D. 2002. Scientific Realism and the Pessimistic Meta-Modus Tollens. In Recent Themes in the Philosophy of Science: Scientific Realism and Commonsense, eds. Steve Clarke and Timothy Lyons, 63-90. Dordrecht: Springer.
Lyons, Timothy D. 2003. Explaining the Success of a Scientific Theory. Philosophy of Science 70(5): 891-901.
Lyons, Timothy D. 2005. Toward a Purely Axiological Scientific Realism. Erkenntnis 63(2): 167-204.
Lyons, Timothy D. 2006. Scientific Realism and the Stratagema de Divide et Impera. The British Journal for the Philosophy of Science 57(3): 537-560.
Lyons, Timothy D. 2009. Non-Competitor Conditions in the Scientific Realism Debate. International Studies in the Philosophy of Science 23(1): 65-84.
Lyons, Timothy D. 2011. The Problem of Deep Competitors and the Pursuit of Unknowable Truths. Journal for General Philosophy of Science 42(2): 317-338.
Lyons, Timothy D. 2012. Axiological Realism and Methodological Prescription. EPSA Philosophy of Science: Amsterdam 2009, European Philosophy of Science Association, Proceedings: 187-197.


Lyons, Timothy D. 2014. The Historically Informed Modus Ponens against Scientific Realism: Articulation, Critique, and Restoration. International Studies in the Philosophy of Science 27(4): 369-392.
Lyons, Timothy D. 2015. Scientific Realism. In Oxford Handbook of Philosophy of Science, ed. Paul Humphreys. New York: Oxford University Press. Online version, August 2015.
Lyons, Timothy D. 2016a. Selectivity, Historical Testability, and the Non-Epistemic Tenets of Scientific Realism. Synthese.
Lyons, Timothy D. 2016b. Structural Realism versus Deployment Realism: A Comparative Evaluation. Studies in History and Philosophy of Science.
Lyons, Timothy D. Forthcoming. Systematicity Theory Meets Socratic Scientific Realism: The Systematic Quest for Truth. Synthese.

Referential and Perspectival Realism

Author(s): Paul Teller

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 151-164.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.26990


Focused Discussion Invited Paper

Referential and Perspectival Realism*,†

Paul Teller‡

Ronald Giere (2006) has argued that at its best science gives us knowledge only from different “perspectives,” but that this knowledge still counts as scientific realism. Others have noted that his “perspectival realism” is in tension with scientific realism as traditionally understood: How can different, even conflicting, perspectives give us what there is really? This essay outlines a program (some published, much forthcoming) that makes good on Giere’s idea with a fresh understanding of “realism” that eases this tension.

I. Agenda This essay addresses two problems in the scientific realism debate. First, Giere (2006) proposes what he calls “perspectival realism.” He gives analogies, suggesting that scientific measurement and theory bring us into contact with the world in the kind of way done by visual perspective and colour vision. But he never quite tells us what perspectival realism is. Second, we all recognize that the world is too complicated for us to get its representation exactly right. But I’ve seen no discussion of ramifications for the referential side of realism. In response to the second issue I will argue that our referential practices, at the level of perception just as much as for theory, function to provide simplifications—or idealizations1—of a vastly more complex reality. I will then apply this reconstrual of our referential practices to a fresh understanding of realism along the lines that I think Giere had in mind. This short essay will give only an overview of how the whole project is supposed to work. Details are in other publications and much work in progress. Contemporary realists agree that even the best of contemporary science is at best “close to the truth.” Where, then, is the realism? Laudan put it succinctly:

* Received August 8, 2017. Accepted August 8, 2017. † Curtis Forbes did a brilliant job in pressing me for clarifications and refinements in this essay. ‡ Paul Teller is Professor Emeritus at the Department of Philosophy, University of California at Davis. 1 I will use “idealization” very broadly.


[A] necessary condition—especially for a realist—for a theory being close to the truth is that its central explanatory terms must genuinely refer. (1981, 33)

This is an attitude reflected in much writing on realism, including some other essays in this collection. Here is a quick way to Laudan’s conclusion: To say that atoms are real is just to say that there are atoms. Restatement in the formal mode gives: To say that atoms are real is just to say that “atom” has a non-empty extension. Thus contemporary scientific realism, in a formulation that I call “referential realism,” comprises two parts:

1) Terms in mature scientific theories typically refer.2

2) Our theories of these referents are “close to the truth.”

Here “referents” is understood broadly to include extensions of predicates, the characteristics that determine extensions, and properties and quantities generally. I will argue that, because the world is too complex, 1) fails. Instrumentalism is no fallback because worldly complexity ensures that reference fails also for the perceptual. What, then, could count as realism? My response will be: All human knowledge is inexact. When our inexact knowledge is deployed with the tools of reference,3 it merits our interpreting it as knowledge of what there is.4
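The quick route through the formal mode can be compressed symbolically (my notation, a restatement of the argument just given, not an addition to it):

    % "Atoms are real" in the material mode and its formal-mode restatement:
    \text{Atoms are real} \;\iff\; \exists x\, \mathrm{Atom}(x) \;\iff\; \mathrm{ext}(\text{``atom''}) \neq \varnothing
    % Clause 1) of referential realism asserts the right-hand condition for the
    % central terms of mature theories; clause 2) adds that our theories of
    % those referents are close to the truth.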

II. Two guiding principles and a practical consideration. Inaccuracy and imprecision are two forms of representational imperfection. (I use “imprecise” for vagueness in a general sense and in accord with the wide use of “precise” as the antonym of “vague.”) For example, consider the statement that John is six feet tall. This can be heard as “John is six feet tall exactly,” which is inaccurate (and so strictly speaking false) when John is 5′ 11 1/2′′. But the statement can also be heard as “John is six feet tall ‘close enough,’ ” which will count as true when John’s height is sufficiently close to six feet that the difference makes no present practical difference. Some terminology: I will use “exact” for “both (completely) accurate and (completely) precise.” By an imprecise representation that is free of inaccuracy I understand one that cannot be improved in accuracy

2 This is Putnam’s wording (1975, 73). 3 This covers singular terms as having referents, general terms as having extensions, supplemented with quantification and identity. 4 This is a conclusion that Kantians will embrace with greater comfort than others.

without also making it more precise. I will refer to such representations as “completely accurate within their level of precision.” The imperfections of inaccuracy and imprecision are not independent. Given a sufficiently but not completely accurate representation, we can transform it into one with no inaccuracy by “absorbing” the inaccuracy into imprecision. (John, who is 5′ 11 1/2′′, counts as six feet “close enough.”) Conversely, given an imprecise representation that is completely accurate within its level of precision, if we make it more and more precise, doing our best to retain accuracy, eventually we lose complete accuracy. (You’ll never nail a person’s height to the nearest nanometer.) Saying that John is six feet “close enough” and that he is six feet exactly are different statements; in particular, the first can be true even though the second is false. But when the false statement that John is six feet exactly will, for present practical purposes, function as a truth, this statement and the statement that John is six feet “close enough” can be used to accomplish the same representational work. In such circumstances I will say that the two statements are “semantic alter egos” (of one another). This summarizes the idea of the duality of the imperfections of inaccuracy and imprecision. It can be expressed as a principle:

The Principle of Semantic Alter Egos: Inaccuracy and imprecision constitute two exchangeable facets of representational imperfection, and the two members of a semantic alter-ego pair do the same representational work.5
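A toy formalization of the height example may help fix ideas (the tolerance ε and the labels below are my devices, not Teller’s):

    % Let h be John's height in inches; \varepsilon is a contextually set tolerance.
    S_{\mathrm{exact}}:\; h = 72                        % precise reading; false when h = 71.5
    S_{\mathrm{close}}:\; |h - 72| \leq \varepsilon     % imprecise reading; true for h = 71.5 once \varepsilon \geq 0.5
    % When the residual inaccuracy |h - 72| lies below the threshold of present
    % practical relevance, the two readings do the same representational work:
    % they are semantic alter egos.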

In work in progress I apply this principle to the problem of understanding content and truth for imprecise (remember, vague, in a general sense) statements. With the ubiquitous failure of referential realism for which I will argue below, standard referential semantics provides a highly idealized (but still most useful) account of content and truth. My application suggests a less idealized account by understanding content and truth for an imprecise statement in terms of the capacity of a precise semantic alter ego that, though strictly speaking false, nonetheless functions broadly as a truth as we conventionally understand truth. I will apply this idea in section 6. Here is a second principle:

The Complex World Constraint: The world is too complicated (relative to our perceptual and cognitive powers) for us to get representations exactly right.

5 In Teller (2017) I develop in considerable detail the ideas that I have here briefly suggested.


With one systematic exception—combinatorial facts, broadly finite mathematics—all human knowledge is inexact. No doubt the world is exceedingly complex. But is it complex in ways that defeat our efforts to attach terms to referents and extensions, the problem that defeats referential realism? That has to be independently argued, in ways outlined below. The Complex World Constraint then functions as a summary and way of understanding how referential realism goes wrong. The practical consideration begins with the Lewis-Stalnaker notion of conversational presupposition.6 In practice, participants in a conversation must tacitly agree to treat certain statements as, without qualification, true, and, I add, without any imprecision. Conversationalists adopt such presuppositions even when, privately, they have reservations, as long as they are confident that suspected failures in accuracy or precision will make no difference for present conversational purposes. Thinking in terms of such presuppositions applies much more broadly than local conversations. For example, this is a leading idea in Kuhn’s notion of a disciplinary matrix, the body of material that a student learns to take for granted when working within a discipline. The idea also applies at the personal level. When confident that cutting corners will make no practical difference, we treat our individual beliefs as exact, providing what I call “personal platforms.” In what follows I apply the foregoing in an argument sketch showing how to interpret as knowledge representation that is inexact yet exact enough broadly to meet our needs, in particular how to understand the idealized use of tools of reference as telling us what there is in the world.

III. The inexactness of perceptual representation. I take perceptual states to be representations or constitutively to involve representation. Perception has the marks of representation: Perception carries and facilitates manipulation of information. It has “aboutness,” intentionality. Representations can misrepresent—they have correctness conditions. This does not mean that our “percepts” are mental states that we somehow “observe.” Rather, perceptual states are themselves, or constitutively involve, representations.7 Perception of properties, or their instances, is inexact. To illustrate, our perceptual experience of something being red is as of an object having an intrinsic colour property. The relevant problem is not that red comes in indefinitely many shades. Colour science tells us that, rather than perceptual access to intrinsic colour properties, colour perception is a complex relational affair, involving properties of the perceived object, relations to the perceiver, the state of the perceiver, and the state of the environment.8 So colour representations must be taken either to be imprecise, or insofar as considered to be perception of specific intrinsic property instances, far from completely accurate. Perception of other perceived properties and relations fares similarly. Likewise there is ubiquitous inexactness of perception as of specific objects—the perceived stone, table, etc. We draw this conclusion already from the familiar sticking point of indeterminate physical and temporal boundaries. Because perceptual representation and correlative language are inexact, these fail to pick out as specific referents any one collection of parts as opposed to many others. In particular, “the table in front of me” and “John” are imprecise, qualitatively, in the same way as is “the middle class.” Referential realism fails for ordinary objects of perception.9 These conclusions hold with great generality. The sciences of perceptual physiology and psychology show that perception is hardly simple transcription of retinal images. Perceptual access to the world is mediated by an exceedingly complex transduction of input information. In our perceptual apparatus nature has equipped us with mechanisms for building models of the perceived world, mechanisms that compromise between efficiency and getting things precise and accurate enough to meet our practical needs.

6 See Teller (2017, 157-59) for references and more detail. 7 See Burge (2010, 379-419) for exhaustive arguments.

IV. The inexactness of theoretical representation. Where nature’s perceptual apparatus provides us with perceptual models, science provides us with models of the sub- (and super-) perceptual. These, like perceptual models, are ubiquitously inexact. I will sketch reasons for thinking that at the theoretical level tools of reference apply only as idealizations. For purported referents and extensions of things larger than molecules the problems are essentially the same as for objects of perception. The hardest cases are “electron,” “nucleus,” and “molecule.” The problems arise from the puzzles of superposition and quantum statistics from the quantum domain. Difficulties begin with cases in which there is no fact of the matter whether, or just when, things are supposed to get into alleged extensions. Consider electron pair creation and annihilation. In the right circumstances a gamma ray will produce an electron and a positron. For an indefinite period quantum theory describes the situation as a superposition of a state with a gamma ray and no electron-positron pair superimposed with a state with

8 See Giere (2006, ch. 2). 9 Ladyman et al. (2007, 130) write, “We will argue that objects are pragmatic devices used by agents to orient themselves in regions of spacetime, and to construct approximate representations of the world.” They support this in various places in chapters 3 and 4.

an electron-positron pair and no gamma ray. During this (not completely determinate) period there is no fact of the matter of just what electrons (and positrons) there are, and so no completely specific extension for “electron.” I call the foregoing “the problem of superpositions for extensions.” Nuclei suffer their own version of this problem. When a nucleus is hit hard enough it will disintegrate. But for a (not completely determinate) period the state is of a superimposed intact and disintegrated nucleus. Molecules likewise succumb. Molecules are formed and destroyed in chemical processes that similarly suffer the problem of superposition. Some will claim that the theory provides terms, “electron,” “nucleus,” and “molecule,” that attach to characteristics that have instances when the problem of superposition does not apply. But just when the problem applies is itself not completely determinate. Thinking there is such a sharp line is also a simplification. There is a second kind of difficulty: the alleged instances aren’t “things” in any normal sense. I am here arguing against referential realism, namely that terms have non-empty extensions. But the theory of reference calls for extensions with elements over which we can quantify, as when talking about all or some of the electrons in a box. In turn the theory of reference includes the applicability of identity to the use of quantification, which conflicts with the facts of quantum statistics. Suppose that a were to refer to one electron while b referred to a second: a ≠ b. If this were right there would be a fact of the matter which terms referred to which electron (already supposing that talk of “which electron” made sense). For example, it would make sense to distinguish the situation in which a is in the right-hand side of a box and b on the left, from that in which we had b on the right and a on the left. But this is just what quantum statistics rules out.10 Some will respond: But then the terms will concern (refer to?) “things” in some alternative, broader sense. Let’s use a neutral terminology: No question, there are what we might call electron, nuclear, and molecular “phenomena.” Such a move leaves it wide open just what kind of an ontology might in fact fall under our idealized theory. Some insist that there are only fields and no particles at all. Terms for theoretical properties and quantities also fail to have specific extensions. I take this to be amply clear-cut in the life sciences. For example “cell” and “species” are open-ended notions. For the physical sciences the underlying problem is that all the relevant theories are highly idealized. I have made this out in great detail (Teller, forthcoming) for the case of physical quantities such as time, length, mass, velocity, etc. To give a taste of the argument for length: Length is an artifact of an idealized space-time theory

10 Teller (2001) gives a more detailed presentation of this argument.
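The quantum-statistical point behind n. 10 can be sketched in standard textbook notation (mine, not the essay’s): for two electrons the joint state must be antisymmetric under exchange, so the labels a and b mark no physical difference.

    % Two electrons in a box, with left (L) and right (R) single-particle states;
    % fermionic statistics require the joint state to be antisymmetric:
    |\psi\rangle = \tfrac{1}{\sqrt{2}}\,\bigl(\,|L\rangle_a |R\rangle_b - |R\rangle_a |L\rangle_b\,\bigr)
    % Swapping the labels a and b yields -|\psi\rangle, the same physical state up
    % to a global phase, so no physical fact distinguishes "a on the left, b on
    % the right" from "b on the left, a on the right."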

that is assumed as background for quantum theories. But in quantum theories there is no determinate classical distance between two objects or events. The complexity of the world ensures the failure of our theoretical terms to attach to specific extensions, extension-determining characteristics, and properties and quantities generally. Section 3 concluded that reference likewise fails at the level of perception. This ubiquitous failure of reference has important repercussions for thinking about idealization. To illustrate: In modelling the orbit of Mars one idealizes Mars as a point particle. But that need not mean that “Mars” as it occurs in the model has no referent. Often one will take “Mars” to refer to an extended object that the model then idealizes as a point particle. I urge that, instead, we take “Mars” never to have a referent. If there are physical objects, this will be because there are too many objects that could be the referent—exactly which bits of matter get included? Or perhaps our thinking in terms of physical objects is itself a simplification, for example from a field-theoretic point of view. (See the discussion of Eddington’s two tables below.) Broadly we think of idealization as simplified characterization of things successfully picked out with the tools of reference. I urge that, instead, the use of the tools of reference is itself an idealization, what I will call “idealized reference.” Reference ubiquitously fails, but in ways that I will sketch below we often succeed well enough by treating reference as successful. I now turn to explaining how inexact representation, in particular idealized reference, when sufficiently successful in guiding us in pursuit of our interests, nonetheless gives us worldly knowledge.

V. Knowledge by inexact representation. For the moment, put aside the Complex World Constraint. If our representational powers were unlimited there would still be an essential—Kantian—limitation on possible knowledge. We think of representation being, for the most part, representation of things independent of us. Naively we would like to think of a representation of a thing, for example a picture of a table, as something that we could put right up next to the thing represented to see how well they match. But that would be to see how well (our perception of) the picture matches (our perception of) the table. And our perceptions are just more representations! Ultimately, representations can only be compared with other representations. The idea of a “thing in itself” is of a thing apart from any representation of the thing. So a representation of a “thing in itself” is a representation of a thing apart from any representation, which is a contradiction in terms. How then could perfectly exact representation be understood? Since (representational) comparison with something unrepresented makes no sense, the best surrogate would be a perfect representation, one that could not be improved in precision, accuracy, or detail, a Peircean “limit of inquiry.” A Peircean limit is also problem-burdened. A perfect representation of the whole world could only be the world itself or a complete duplicate. How about a perfect representation of, for example, just one object and all its intrinsic properties? There are notorious problems in distinguishing intrinsic from relational properties, and however that distinction might be drawn, leaving out an object’s substantive relations to the rest of the world leaves its description incomplete. It is also not clear that (a representation of) an ultimate ontology would be in terms of discrete objects. Any neat ontology of discrete referents would be compromised if there are no intrinsic properties or if there are relational properties that do not supervene on the non-relational. Quantum field theoretical considerations compound these difficulties. Most acutely there is Giere’s problem.11 A complete and completely accurate representation of some one object or space-time part of the world would have to represent it as causally isolated. But nothing is completely causally isolated. Despite these problems I will, as a stepping stone, appeal to the idea of “ultimate” representations that admit of no improvement in precision, accuracy, or detail. While it is doubtful that there could be such representations, the idea does not suffer the internal conceptual incoherence suffered by the idea of a representation of “a thing in itself.” I will apply the idea as part of an analogy, a part that will then be discharged. Since it is conceptually coherent, the idea can function in the analogy, and since it will then be discharged, worries about whether there could be such representations won’t apply. The subject of this section is knowledge by imperfect representation. Knowledge requires truth. We can accommodate the requirement of truth if we interpret truth as absence of any inaccuracy.
An inexact representation can still count as true (or as accurate without qualification) if it is imprecise but suffers no inaccuracy within its level of precision, that is, if it could not be made more accurate without also making it more precise.12 Now the analogy: To set things up, put aside the difficulties enumerated so far. You look through spectacles at a table. The spectacles slightly distort with respect to both shape and colour. If things go well, this can still count as perceptual knowledge. If the inaccuracies are irrelevant to any practical

11 I learned of this problem from Ronald Giere. 12 The foregoing is a two-line summary of a much more detailed account of truth of imprecise statements. See Teller (2017). In particular, the requirement of no residual inaccuracy can be relaxed, but this controversial claim plays no role here and so will not be assumed. Yes, since my “imprecision” is a generalized notion of vagueness, what is in question is an approach to truth of vague statements that, as far as I know, is new to the literature. I have work in progress on this idea.

interest for anything that might reasonably be expected as a possible occurrence (henceforth: the representation is adequate in practice), swallow up the inaccuracies in a little imprecision in the representing perceptual state (the shape, the colour as represented “close enough”). Then the case can still respect the requirement of truth in perceptual knowledge, knowledge that there is a table in front of you, its shape, its colour, etc. Very broadly, knowledge by inexact representations, propositional as well as analogue,13 can be understood in terms of this analogy. Now to sanitize the analogy of its problematic elements: Step 1: We let go of the spectacles as the source of inexactness. By the Complex World Constraint, inexact representations are the best that we can do. Such inexactness arises in a great many ways. There is nothing special in what follows for distortion by vision through spectacles. So for the spectacles substitute whatever the source of distortion might be. Step 2: We let go of the table itself as the exemplar of the ultimate target of knowledge. We want a way of understanding how our inexact representations nonetheless provide knowledge of an independent reality. In the analogy we had perceptual knowledge of an independently existing table. But—the Kantian point—representation of a “thing in itself” makes no sense. So we substitute the idea of a representation at the Peircean limit as doing the work of an ultimate target. In the sanitized analogy we have knowledge if departure from the exact target, thus understood, does not matter for anything that might reasonably be expected as a possible occurrence. Again, if you like, reinterpret any residual inaccuracy as imprecision. Step 3: We let go of the perfect representation as what does the work of the ultimate target. It is doubtful whether there could be any such representation, and even if there were, it is not clear how to understand comparison with an inaccessible perfect representation. Instead we apply a sense in which a representation can be understood as one that could be improved in accuracy and/or precision, to be explicated along the following lines: Representation R2 is said to be a refinement of representation R1 iff R2 is in every respect at least as accurate and precise as R1 and more so in some respects. “At least as accurate and precise as” in turn can be understood in terms of functioning at least as well in every relevant representational function, that is, as functioning the way one thinks that an exactly true or otherwise exact representation ought to work.14 To be discharged in step 3 was: Departure from exact representation

13 By an analogue representation I mean one that depends on similarity of form, as with pictures and maps. 14 So this view, in addition to being deeply Kantian, is also a version of pragmatism, but one with essential differences from more familiar versions. See Teller (2012), section 5.

does not matter for anything considered important for anything that might reasonably be expected as a possible occurrence. For this we substitute: Departure from possible refinements does not matter for anything considered important for anything that might reasonably be expected as a possible occurrence. Thus, we’ve come to an understanding of what it means to say that an inexact representation is adequate in practice, an understanding that doesn’t assume any role for an exact representation. Pulling all this together: We understand inexact representation as representation that admits of some distortion and/or failure of precise focus. We understand distortion and/or failure of precise focus not in terms of comparison with some ultimate target as perfectly represented, but as admitting of potential improvement, improvement in turn understood in terms of representational functionality. If epistemic requirements for justification are also satisfied, such an inexact representation will still count as knowledge if it is adequate in practice.
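Schematically, the refinement relation of Step 3 might be put as follows (my compression of the definition just given, with f_i an assumed grading of representational functioning):

    % R_2 is a refinement of R_1 iff it functions at least as well in every
    % relevant representational respect i and strictly better in some respect j:
    R_2 \succeq R_1 \;\iff\; \forall i\,[\, f_i(R_2) \geq f_i(R_1) \,] \;\wedge\; \exists j\,[\, f_j(R_2) > f_j(R_1) \,]
    % Here f_i grades how well a representation performs the i-th representational
    % function (accuracy and precision included); no comparison with a perfect
    % representation is invoked.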

VI. Knowledge by imperfect representation and realism. In section 5 I sketched how imperfect representations can still count as knowledge. Realism in both science and perception is a central case. I have formulated contemporary realism as successful reference. The world is too complicated for reference to succeed. But treating unsuccessful reference as successful nevertheless allows us to function effectively when using imperfect representations in grappling with this complex world. The question now is how the idealization of treating our unsuccessful use of the tools of reference as if they did refer—what I have called “idealized reference”—functions in providing genuine knowledge of the world. Unwinding the analogy of section 5 in reverse illuminates how idealized reference is good enough for knowledge. Start by understanding a representation as adequate in practice when refinement in accuracy or precision provides no practical improvement for anything that might reasonably be expected to come up. Think of that as: Practically speaking, we can treat this representation as exact, not permitting further refinement. Finally, think of this idealized representation as a representation of an independently existing target, “just as it is,” including successful reference to this (purported) target, whether a table, a cell, electrons as a kind, etc. The sanitation of the analogy in section 5 provided a step-by-step explication of the function of such idealized reference. Now apply this explication, using the Principle of Semantic Alter Egos, to the idealized use of the tools of reference. Start with the use of tools of reference in a practically adequate representation that counts as knowledge. Supposing that as knowledge this representation is true, and so fully accurate within its level of precision, its shortcomings must lie entirely in its imprecision. The corresponding ideal representation, thought of as representing a target, will count as precise but not fully accurate since, in fact, reference fails. These two representations exhibit compensating imperfections, the tradeoff between imprecision and inaccuracy. So we may take them to constitute semantic alter egos of one another. The Principle of Semantic Alter Egos tells us that the two do the same representational work.15 The imprecise representation counted as knowledge, so its precise semantic alter ego, though false since reference in fact fails, nonetheless provides an alternative way of presenting the knowledge embodied by the imprecise representation. In this way the analogy’s explication provides a way of seeing how, despite failure of reference strictly speaking, nonetheless use of the tools of reference can function in providing knowledge of the world, a real grip on how things are. This is not to say that reference succeeds in imprecise statements. We talk of John’s height, but there is no such thing as the length that is John’s height. We talk of the people who make up the middle class, but there is no such thing as the collection of people who make up the middle class. Reference fails, period. But the tools of reference nonetheless function in telling us, well enough, how things are. They do this by functioning in precise statements, or statements that are interpreted as precise, that succeed in an idealized way in informing us about the world.
The success in question here is the practical success in guiding us through a very wide range of practical considerations.16 Content and truth for imprecise statements, or statements that we appreciate are imprecise, are then understood in terms of the practical success of their precise semantic alter ego counterparts. Strictly speaking, reference fails also for imprecise statements, as in the two foregoing examples. But the successful function of the tools of reference in imprecise statements is understood in terms of the practical success of the idealized reference in the precise semantic alter egos. This is an application of the general approach to vague statements briefly mentioned in the third paragraph of section 2. Remember that instrumentalism can provide no alternative, because it presupposes the unavailable reference at the level of perception that was supposed to fill the epistemic gap seen at the level of theory. Realism as successful reference fails. But once we recognize the repercussions of the Complex World Constraint, idealized reference, as explicated with the analogy, is the best realism we can have. And it is realism enough. The practical consideration has also applied throughout this discussion in our treating referential tools as having referents, even though this is, when we

15 Teller (2017) presents much detail on the back and forth between inexact truths and the semantic alter egos that, though false, function as exact truths. 16 For the connections with pragmatism, again, see Teller (2012), section 5.

are reflective, seen to be a simplification. One naturally supposes that it is, at least in part, because we are almost always working within some common ground or personal platform that we fail to notice that the referential tools that we are using apply, strictly speaking, only in an idealized way.

VII. Perspectival realism. I have motivated retaining the epithet “realism” for knowledge embodied in idealized reference. Where is the perspectivalism? The quick answer is: in the different, even incompatible, modelling idealizations needed in practice for treatment of different aspects of a broader subject matter. This has little application at the level of perception since nature usually imposes a univocal modelling scheme in our perceptual experience.17 But in science, the world’s complexity requires us to use different modelling schemes—different “perspectives”—to get at different facets of one larger subject matter. This applies broadly, but I’ll illustrate how it plays out for reference by revisiting Eddington’s two tables. Many foundational physicists insist that there are no particles, there are only fields.18 Put yourself in that mindset. How, then, would you understand talk of ordinary objects of perception? Let us use the term “table phenomenon” for the kind of circumstance that we would ordinarily describe by saying that an agent (veridically) sees a table in front of her/himself. In our ordinary description of a table phenomenon we say that there is a table in front of the agent and that the agent perceives the table. We say that the table is a discrete physical object with identity conditions in the sense that it is a factual matter whether or not the table is the same as the one seen yesterday, and the like. On the field-theoretic description, much of this is incorrect. I reject any suggestion that one of these “perspectives” is correct to the exclusion of the other. Yes, the field-theoretic description is “more accurate,” but only in certain respects. Moreover, it has no humanly available applicability for virtually all the things about tables in which we are ordinarily interested. Each perspective provides representations that are adequate in practice for the domains for which they are intended, and our understanding of the world is richer by having both. One more example will illustrate the point. Most will say that with general relativity we now know that there is no force of gravity, there is only the

17 More detailed development will refine this statement. Different visual perspectives, which gave us the ur-metaphor of scientific perspectivalism, provide some wiggle room. Also, contemporary perceptual psychology suggests that different perspectives get built right into individual percepts. See Noë (2006). 18 See Hobson (2013), and many further references therein.

curvature of space-time. No! Representations involving gravitational force and space-time curvature are both idealizations. By using both we have much richer access to the way things are. It’s not just that Newton’s gravity makes calculations easier when greater accuracy is not needed. Explanations that instil real understanding are at stake. Think of trying to explain the tides using general relativity. Idealized use of the tools of reference within different inexact modelling schemes, when adequate in practice, provides a variegated grip on what there is and what it is like.

VIII. Referential and perspectival realism. Contemporary realists take referential realism to be, in many cases, literally correct. I have argued that reference gives us only idealized access to what there is. When we take the idealization into account we see room for different perspectives. When one or another individual perspective gives representations that are adequate in practice—adequate for the purposes for which each is intended—they tell us how things, in those respects, are. If you like, say redundantly, they tell us how things in those respects are really.

Paul Teller Department of Philosophy University of California at Davis 1 Shields Avenue Davis, CA 95616 USA [email protected]

References

Austin, J. L. 1965. How to Do Things with Words. Oxford: Oxford University Press.
Burge, Tyler. 2010. Origins of Objectivity. Oxford: Oxford University Press.
Giere, Ronald. 2006. Scientific Perspectivism. Chicago: University of Chicago Press.
Hobson, Art. 2013. There Are No Particles, There Are Only Fields. American Journal of Physics 81(3): 211-223.
Ladyman, James, Don Ross, David Spurrett, and John Collier. 2007. Every Thing Must Go: Metaphysics Naturalized. Oxford: Oxford University Press.
Laudan, Larry. 1981. A Confutation of Convergent Realism. Philosophy of Science 48(1): 19-49.
Noë, Alva. 2006. Action in Perception. Cambridge, MA: MIT Press.


Putnam, Hilary. 1975. Mathematics, Matter, and Method, vol. 1. Cambridge: Cambridge University Press.
Teller, Paul. 2001. The Ins and Outs of Counterfactual Switching. Noûs 35(3): 365-393.
Teller, Paul. 2012. Modeling, Truth, and Philosophy. Metaphilosophy 43(3): 257-274.
Teller, Paul. 2017. Modeling Truth. Philosophia 45(1): 143-161.
Teller, Paul. Forthcoming. Measurement Accuracy Realism. In The Experimental Side of Modeling, eds. Isabelle Peschard and Bas van Fraassen. Minneapolis: University of Minnesota Press.

Theoretical Practices That Work: Those That Mimic Nature’s Own

Author(s): Nancy Cartwright

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 165-173.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.27045


Focused Discussion Invited Paper

Theoretical Practices That Work: Those That Mimic Nature’s Own*,†

Nancy Cartwright‡

I. The problem If the topic is realism about scientific theories, particularly highly abstract, general theories, what might the question be? It is, I shall argue, unfair to suppose that theories of this kind make claims since any way of rendering their equations and principles as propositions either makes them false or undersells their usefulness. So the question should not be: “Are the theory’s claims true?” Instead I propose: “What image of the world makes intelligible the successes and failures of our theoretical practices?” The answer I propose: Nature does it just like us. Our successful theoretical practices work well for predicting and manipulating the world because they mimic Nature’s own practices. I have always been a kind of Wittgensteinian Tractarian about the world; an instrumentalist about high-level theories of it;1 and an empiricist—a prudent empiricist—about warrant. As to theory: it is best seen as a tool we use to produce descriptions of what happens in concrete situations. As to the world: there are just the facts, which involve concrete things, features, relations, processes, powers, and happenings. Possibilities are real but they depend entirely on the facts and on how Nature brings them about, and Nature, I suggest, brings them about in the same way that we predict them. As to warrant: claims about the empirical world are warranted by the facts; and when two sets of hypotheses are supported by the same facts, the less bold hypothesis is the better warranted.

* Received August 11, 2016. Accepted August 15, 2016. † It will be obvious how much my thinking owes to Bas van Fraassen and Hasok Chang; for concern with details, to Pat Suppes; for finding the possible in the actual, to Ruth Marcus; and for the toolbox of science, to my co-authors Towfic Shomar and Mauricio Suarez (Cartwright, Shomar, and Suarez 1995). ‡ Nancy Cartwright is a philosopher of the natural and social sciences. She holds appointments at Durham University and the University of California at San Diego. 1 At least in the disciplines I have studied.


We use our theories to build models of concrete systems for prediction, manipulation, and explanation, all of which, when the model is successful, involve well-warranted propositions.2 The (usually complicated) propositions that describe the world as the model models it use irreducibly “theoretical” concepts. For example, any detailed description of what happens when a laser emits a beam of coherent light will refer to electrons, photons, quantum energy, and excited states, and at some stage should employ some version of Schrödinger’s equation. That secures an image of the world replete with many of the quantities, kinds, qualities, and processes named by our theoretical concepts. This makes trouble for instrumentalism. It does not seem that theory can be just an instrument, like a barometer or a pocket calculator. These theoretical terms are meaningful, with considerable evidence that they refer, and the equations play a role as equations in constructing our best descriptions of the facts. Van Fraassen’s constructive empiricism provides a “middle” way to finesse this problem: read the claims of theory literally, but you need not count them true; the theory is acceptable just in case its empirical consequences are correct. That, I urged long ago with arguments that I still endorse, will not do. First note that what we call the “principles” of a theory are seldom propositions. In the mathematical sciences they are frequently equations; in the non-mathematical sciences, sentences without scope indicators.3 Despite decades of efforts (see below), I have never found a way to render them as propositions so that the principles of our best theories turn out well-warranted. My current view is the same, after all this effort, as at the start. These theories will not prove acceptable by realist standards nor by the standards of constructive empiricism. That is because theories, if taken literally, do not get their empirical consequences right, for three reasons:

a. Theoretical equations are usually turned into propositions by adding universal quantifiers.4 But we have evidence against this universalism—myriads of cases where we cannot get the equations to fit the measured values. Universalists explain by saying there must be factors present that aren’t represented; were they represented, the equations would hold. This is a bold conjecture. An alternative explanation is that the equations just don’t fit those cases.

2 How the Laws of Physics Lie called these “phenomenological models” (Cartwright 1983). 3 As for instance in “[S]treaming ... costs very little, improves test scores, and is therefore extremely cost-effective.” From the J-PAL website (cited in full in “References”). 4 For example, “∇ × E = −∂B/∂t” becomes “For all systems x, ∇ × E(x) = −∂B(x)/∂t”.


b. In a great many cases where the equations are used to describe what happens, they are amended from their original form by “correction factors.” They don’t literally describe what happens, they only begin to do so.

c. Much of science works by the analytic method: we pair representations of separate parts with rules of combination5 to calculate what happens when the parts are arranged together. How are we to turn the representations for the parts into true propositions? The familiar case is forces. The law of gravity becomes “(x)(y) F(y) = Gm(x)m(y)/rxy²”; Coulomb’s principle, “(x)(y) F(y) = ϵ0q1(x)q2(y)/rxy².” For some systems y that have both mass and charge, F(y) = 0. So these propositions are not true.6
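A worked instance may make reason (c) vivid (a standard textbook case, stated in the essay’s own notation): consider a system y with both mass and charge, acted on by a single source x that also has both.

    % Read as universally quantified claims, each principle purports to give
    % *the* force on y:
    F(y) = G\, m(x)\, m(y) / r_{xy}^{2}                 % gravity
    F(y) = \epsilon_0\, q_1(x)\, q_2(y) / r_{xy}^{2}    % Coulomb, in the essay's notation
    % For a y with both mass and charge the two right-hand sides differ, yet both
    % claims equate them with the one quantity F(y); and where attraction and
    % repulsion balance, the actual net force is F(y) = 0, making both claims
    % false as stated.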

These constitute specific reasons that these high-level theoretical principles, rendered as claims in the canonical way, are not warranted: they do not stand in the right relationship to the well-warranted, more “low-level” claims about concrete situations for warrant to flow up to them. Yet they do not seem mere instruments to construct descriptions of concrete behaviours in which their form and content are completely invisible, like “F(y) = 0 in this setting” and “Q-switching in this laser produces pulse durations on the order of nanoseconds.” So here is the task for scientific realism as I reconstruct it: to identify theoretical claims that are well-warranted and that preserve something of the form and content of the theory’s principles. The key to accomplishing this is to give far more central prominence to theoretical practices.

II. Realist proposal With theoretical practice centre stage, if we want to find some well-warranted claims that resemble the equations and principles we write in our theories, I propose that we take seriously what we say when we use these theoretical principles in situ. This will secure an open-ended set of true situation-specific theoretical claims in which our theoretical principles appear much as we represent them. Here’s a simple illustration, a situation-specific claim in which the functional form of Coulomb’s principle appears explicitly: “If in this setting, charge q1 is added this way here, charge q2 located r away from it will experience an additional force ϵ0q1(x)q2(y)/rxy².”

5 These are often an open-ended set of “rules of thumb” that get adjusted in different ways in practical application, unlike the fixed and explicit rule of vector addition for forces. 6 This is in stark contrast with the simultaneous equations models of economics where each and every equation must be true at once.


Though this claim sounds almost identical to Coulomb’s principle, it is situation-specific. It would not be true, for instance, if q1 were introduced in the middle of a Faraday cage.7 I’ll expand on this in Section 5. Before that I shall explain how my more direct attempts to turn these principles into credibly true propositions have failed. This takes us on a detour through powers, which is one way of arriving at my overall conclusion. But if you don’t like powers, you can still adopt the “realist” position I propose.

III. Failed alternatives Here is one approach I’ve tried to deal with problem (a). Turn an equation, say “y = g(z1,...zn),” into a proposition not by adding universal quantifiers across all systems but rather with something like: “For any situation s, so long as all the features that fix the value of y in s are represented by z1,...zn, then y = g(z1,...zn) in s.” This allows but does not require that the zs always pick up all the determinants of y. But it undersells the force of the equations. There are many cases where the equations play a significant role in building a model even where the zs do not represent all the determinants of y, or at least we don’t recognise how to use them to do so. This brings us to (b) since many of the corrections may be needed because there are factors at work that we can’t represent in the theory. I have never—until now—come up with a reasonable “realist” approach for dealing with (b). For (c) we can call on powers, or as I say, “capacities” (Cartwright 1989). Hume rejected powers because he could find no difference between the presence of a feature and its acting, as there should be if the feature were a power.8 The analytic sciences seem to populate the world with powers in this sense. There’s the presence of a charge, the Coulomb force it exerts on another charge when the two are appropriately arranged, and the total force the second charge experiences. At first I called these intermediates, the “influence” or “contribution” the power makes to the actual outcome. For example, “When q1’s Coulomb power is activated, it produces a contribution Fc = ϵ0q1(x)q2(y)/rxy² to the total force a charge q2 located r away from it experiences.” But this is misleading. It sounds as if the contribution is there in the same way as the outcome, like the stones in a wall. I don’t find that plausible even with forces, which do not add but combine vectorially—let alone where more complicated rules of combination apply.9 John Pemberton

7 Faraday cages block electric fields. 8 Or its occurrence was in some other way associated with the presence of a power. 9 For instance, resistances (say R1 and R2) from different resistors combine differently if the resistors are arranged in series (Rtotal = R1 + R2) or in parallel (Rtotal = R1R2/(R1 + R2)), and their effect on the current in a circuit with one battery is given by Ohm’s law using Rtotal. But with two batteries and 3 resistors or simple bridge circuits, let alone really complex circuits, we end up using a toolkit of methods to calculate the total effect on current, including various reduction schemes and sets of simultaneous equations. For further examples, including the role of the Coulomb law in determining ground state carbon energy levels, which I discuss in Section 5, see Cartwright (2003, Essay 3).
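A worked instance of the combination rules in n. 9 (standard circuit arithmetic; the sample values R1 = 2 Ω and R2 = 3 Ω are mine):

    % In series, resistances add:
    R_{\mathrm{total}} = R_1 + R_2 = 2 + 3 = 5\ \Omega
    % In parallel, the product-over-sum rule applies:
    R_{\mathrm{total}} = \frac{R_1 R_2}{R_1 + R_2} = \frac{2 \times 3}{2 + 3} = 1.2\ \Omega
    % The same two powers to resist yield different totals under different
    % arrangements, which is the footnote's point.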

and I tried calling these “exercisings” (Cartwright 2015; Cartwright and Pemberton 2013) since it seems that powers are always active in contributing to outcomes. But this seems a cheat. What does it mean to say, “When the Coulomb power acts, it exercises ϵ0q1(x)q2(y)/rxy²-ly?” And what can we possibly say about a resistor labelled with “resistance = 5 Ohms”? “It resists 5-Ohmly”? Both Pemberton and I now think all this is a mistake. When powers exercise they always exercise in some actual situation. Yet we think of our representations of them as situation-free, as if they could act “nowhere” or in a world where they occur “all alone.” It is easy to get sucked into this by concentrating on forces, which we may suppose we imagine operating in an “idealisation” of a real situation: one where the components are present—say both masses—located physically with respect to each other, but no other forces obtain. So we say, as I myself have, “In this situation we see what gravity does on its own.” Then we suppose that, in other situations, gravity is “trying to do the same thing.” But what is the cash value of this? It just means that the actual force that occurs when gravity acts is whatever other force is there vector-added to what gravity produces “on its own.” Even this much sense cannot be made for other powers. In economics we talk of the powers of the supply mechanism and of the demand mechanism to affect price and we represent these powers in separate equations. But it makes no sense to imagine either operating “on its own.” And I don’t even know how to start to think about resistances in no circuit whatsoever. We are also misled into positing contributions by thinking of powers as dispositions that need a conditional analysis: “X is water-soluble iff, if x is placed in water in the right way, x dissolves.” Similarly, as above: “Q1 has the Coulomb power iff, when activated in the right way, it produces a contribution Fc = ϵ0q1(x)q2(y)/rxy² to the total force a charge q2 located r away from it experiences.” But powers, so I argue, are part of our basic ontology. We don’t need contributions because powers don’t need conditional analysis. They don’t need analysis at all; they just are—lots and lots of them. Looking just to physics, there’s gravity, the Coulomb power, friction, the power to resist the flow of electricity, the power to turn on an axis, and so on and so on and so on. There’s much to say about powers as a category and there’s much to say about specific powers. Naturally we pick out different powers by what they do in various circumstances, or what they look like,


or how they affect measuring instruments. But that’s how we identify which is which. None of that constitutes an analysis, which is fine because things don’t need analyses in order to be real. So I propose to drop talk of “contributions,” “exercisings,” “intermediates.” Of course when a power acts, it does something. But there is not some one thing it does. Rather, we have a way of representing the power, a representation we know how to use to calculate what happens when that power acts in various arrangements. So I have not solved problem (c), because I have no candidate proposition we might regard as true to associate with the principles for component parts in theories where the analytic method is employed. Still it advances my overall project, but only given a new story about how Nature decides, occasion by occasion, what happens.
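As an aside on the “combine vectorially” point above, a small sketch (in Python with NumPy; the charges, positions, and the use of the standard constant k = 1/(4πϵ0) are our illustrative choices, not the text’s) showing that two Coulomb contributions acting on one charge do not stack like stones in a wall: the magnitude of the vector sum falls short of the sum of the magnitudes.

```python
import numpy as np

K = 8.988e9  # Coulomb constant k = 1/(4*pi*eps0), N m^2 / C^2

def coulomb_force(q_source, q_test, r_source, r_test):
    # Force on q_test at r_test exerted by q_source at r_source.
    d = r_test - r_source
    return K * q_source * q_test * d / np.linalg.norm(d) ** 3

r_test = np.array([0.0, 0.0])
f1 = coulomb_force(1e-6, 1e-6, np.array([1.0, 0.0]), r_test)
f2 = coulomb_force(1e-6, 1e-6, np.array([0.0, 1.0]), r_test)

total = f1 + f2  # vectorial combination
print(np.linalg.norm(total))                    # ~0.0127 N
print(np.linalg.norm(f1) + np.linalg.norm(f2))  # ~0.0180 N: not the same
```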

IV. How Nature builds what happens

Here’s an old story. There are true universal laws from which what happens in every situation can be deduced. Since the laws are true, what they say happens is what does happen. So Nature looks at the laws and produces what they say. I can’t use this story. I have no laws, just a large battery of different powers that we represent in our theories and for which we have theoretical practices that allow us to calculate what happens when these powers act in various arrangements. What image of Nature makes sense of the successes and failures of these practices? I propose that Nature does it just the way we do it, piecemeal and with no firmer rules than we employ. Why suppose that Nature employs a tight system rather than a loose one like ours? This is by far the most straightforward account of how our practices succeed: they replicate Nature’s own practices.10

10 This leaves space for genuine indeterminacy: what’s open in Nature is much like what is open so far as we can calculate with a really good theory. I welcome this since it is how I experience the world. But it is not a necessary consequence of the view. For further discussion, see Cartwright and Merlussi (forthcoming).

V. Back to realism

Here’s how I see theory working when it works well: we develop sets of theoretical practices and representations that together allow us (using our theoretical representations as representations, not as claims) to generate situation-specific truths. These truths are true because Nature generates the situation-specific facts in the same way that we do. Or rather, this works for us because we have cottoned on to how Nature does it. As I noted in Section I, generally our theoretic representations—the theory’s principles and equations—are invisible in these situation-specific truths, though many of our theoretic concepts may appear in them. Sometimes, though, even the representation itself, much as it appears in our theories, appears in these claims. For the pleasure of coming full circle and also to find a simple illustration, I’ll consider the case of the five energy levels of the ground state of a carbon atom that I discussed in How the Laws of Physics Lie (Cartwright 2003, Essay 3).11 Textbook treatments first derive three levels using only the Coulomb potential, then divide the 3P state into three by adding terms to correct for spin effects. In How the Laws of Physics Lie I made a few attempts at formulating some true factual claims with the quantum Coulomb potential for the two 2p carbon electrons as cause and the three levels as effect, but with no success. I finally proposed a counterfactual: “[T]he Coulomb potential, if it were the only potential at work, would produce the three levels” (Cartwright 2003, 69), which, I suppose, if it is to be true at all must be said of some particular carbon atom, say Carby. But what makes this counterfactual true? The answer is surely Schrödinger’s equation and the quantum version of Coulomb’s principle. But how do they do this, given that these cannot be rendered as true propositions from which the counterfactual can be derived? The counterfactual is true and it depends on Schrödinger’s equation and the quantum version of Coulomb’s principle. But that’s not because these are true laws and Nature makes happen what true laws say. It is rather because this is the way Nature operates. She uses both Coulomb’s principle and Schrödinger’s equation, but as representations, just as we do. She uses them to determine what energy levels Carby will display. We have ample evidence that this is how she operates. So we also have good warrant for the claim that if Carby’s spin effects were missing, she would see to it that Carby displayed the three levels in question. There are also a great many true factual claims. Supposing Carby has been prepared for a finely-tuned experiment to measure energy levels (as in n. 11 above), Schrödinger’s equation in almost its pure form (i.e., with few “correction factors”), with both the Coulomb term and a spin-orbit term in it, will be true of Carby. This too is true for Carby: the Coulomb potential is literally an additive part of the potential that determines Carby’s ground state energy levels. Of course these two claims would be available given my solution to problem (a). But we should want far more than (a) provides for both the Schrödinger equation and Coulomb’s principle, since we use them

11 Here I depart from my own standards in aid of providing a simple illustration. This is not any real carbon atom but itself a tool we use to construct models of real carbon atoms. Nevertheless, let’s pretend that it is some real carbon atom, perhaps one whose fine-structure intervals are being measured by laser magnetic resonance, as for instance in the experiment by Saykally and Evenson (1980).

for a vast number of cases where correction factors must be included.

VI. Conclusion

I suggest that Nature does not derive what is to happen from some set of true claims. She does it just as we do, from representations and practices like ours. This surely should be counted as a kind of realism. Certainly it renders the principles of a successful theory true in the sense of pragmatic truth that Hasok Chang defends in this volume. But I think this image of Nature permits a more robust realism about our theoretical representations. First, many of their terms refer. Second, Nature uses them for fixing what happens in the very form in which they appear in our theories, just as we use them in the process of predicting those happenings. Third, they are associated with a great many situation-specific truths in which their form appears almost unchanged. Finally, returning to my prudent empiricism, this is an image of Nature that the successes of our theories and their practices support, unlike one in which Nature employs claims we can’t formulate and methods for which we have no use.

Nancy Cartwright Durham University Department of Philosophy 50 Old Elvet Durham DH1 3HN United Kingdom [email protected]

References

Cartwright, Nancy. 1989. Nature’s Capacities and their Measurement. New York: Oxford University Press.
Cartwright, Nancy. 2003. How the Laws of Physics Lie. New York: Oxford University Press.
Cartwright, Nancy. 2015. How Could Laws Make Things Happen? In Laws of Nature, Laws of God? Proceedings of the Science and Religion Forum Conference 2014, ed. Neil Spurway, 115-139. Cambridge: Cambridge Scholars Publishing.
Cartwright, Nancy, and John Pemberton. 2013. Aristotelian Powers: Without Them, What Would Modern Science Do? In Powers and Capacities in Philosophy: The New Aristotelianism, eds. John Greco and Ruth Gross, 93-112. New York: Routledge.
Cartwright, Nancy, and Pedro Merlussi. Forthcoming. Are the Laws of Nature Consistent with Contingency? In The Laws of Nature, eds. Walter Ott and Lydia Patton. Oxford: Oxford University Press.
Cartwright, Nancy, Towfic Shomar, and Mauricio Suarez. 1995. The Tool Box of Science: Tools for the Building of Models with a Superconductivity Example. In Theories and Models in Scientific Processes, eds. William Herfel et al., 137-149. Amsterdam: Rodopi.
Saykally, Richard, and Kenneth Evenson. 1980. Direct Measurement of Fine Structure in the Ground State of Atomic Carbon by Laser Magnetic Resonance. Astrophysical Journal 238 (June): L107-L111.
The Abdul Latif Jameel Poverty Action Lab. “Increasing Test Score Performance.” J-PAL website. Accessed 2017. https://www.povertyactionlab.org/policy-lessons/education/increasing-test-score-performance.

Machine Learning and the Future of Realism

Author(s): Giles Hooker and Cliff Hooker

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 174-182.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.27047


Focused Discussion
Invited Paper

Machine Learning and the Future of Realism*
Giles Hooker† and Cliff Hooker‡

I. Machine learning, its rise and nature

The preceding three decades have seen the emergence, rise, and proliferation of machine learning (ML). From half-recognised beginnings in perceptrons, neural nets, and decision trees, algorithms that extract correlations (that is, patterns) from a set of data points have broken free from their origin in computational cognition to embrace all forms of problem solving, from voice recognition to medical diagnosis to automated scientific research and driverless cars, and it is now widely opined that the real industrial revolution lies less in mobile phones and similar devices than in the maturation and universal application of ML. Among the consequences just might be the triumph of anti-realism over realism. A venerable and widely familiar example of an ML tool is a so-called neural net. The neural net learning algorithm in simple form involves procedures for representing inputs (data points) as loads on net-input layer nodes. These loads are then redistributed across nodes of the next layer according to a weighting formula on node connections. This redistributing operation is repeated on successive internal layers until the last (output) layer is reached. Various node connection weightings are trialled on a training data set until the net yields the appropriate outputs (until it has learned its task), and then it can be applied predictively to further, related data sets. In this fashion nets have easily learned many tasks that continue to be difficult and sometimes impossible for humans. However, only the initial (input or setup) and last (output) layers are cognitively interpretable; in general, the remainder cannot be assigned any cognitively meaningful state. Just this is the secret of their success. The net learning algorithm specifies the states of internal nodes and the functions that change them numerically, not logically, so that net processing is sub-categorical with respect to the input and output

* Received August 14, 2016. Accepted August 14, 2016. † Giles Hooker is Associate Professor of Biological Statistics and Computational Biology at Cornell University. ‡ Cliff Hooker is Professor Emeritus of Philosophy at the University of Newcastle (Australia).

psychological categories. The output content is not arrived at from the input by any process of concept-preserving logical analysis. Rather, just this shift greatly increases accessible net states and state change functions. Using nets involves rejecting the long and widely-held assumption of adhering to agency categories when constructing psychological theory.1 Neural nets are not the only ML algorithms—the overview Wikipedia entry lists 50+ forms (see n. 1). Other ML algorithms are based on quite different methods that do not have the same (mostly false) biological motivation. But they do highlight one challenging feature widely (but not universally) shared among such algorithms: the absence of interpretability for their internal states. This absence can extend to their outputs as well, if, for example, the output and the internal states are expressed as an infinite-dimensional vector. Nonetheless, when the data sets concern the behaviour of entities under study, the algorithms deliver, via their estimated correlation relationships, predictions of entity behaviour as accurate as the data set permits and, as the data set increases, accuracy as high as any model could achieve. Note again that a learning algorithm will produce “naked predictions,” that is, numbers stripped of any ontological interpretation: it is not that they present a different ontological interpretation, or that one will appear once the data are “cleaned up”; they offer none at all. Their predictive success combined with their a-ontological status stands at the heart of the challenge they pose to realism.
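A minimal sketch of the forward pass just described, in Python with NumPy (the layer sizes, the sigmoid squashing function, and the random weights are all illustrative assumptions, not the authors’ specification): input “loads” are redistributed layer by layer through connection weights, and the internal activations carry no interpretable content.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 4 input nodes, two internal layers of 8 nodes, 1 output node.
sizes = [4, 8, 8, 1]
# One weight matrix per connection layer (the "weighting formula on
# node connections"); training would adjust these against data.
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    # Redistribute the input loads through successive layers:
    # a numeric, not logical, transformation at every step.
    a = x
    for W in weights:
        a = sigmoid(a @ W)
    return a

x = np.array([0.2, -1.0, 0.5, 0.9])  # loads on the input layer
print(forward(x))  # output value; internal states stay uninterpreted
```

Learning, in this picture, is the trialling of weight settings against a training set until the outputs come out right; only the input vector and the final output invite cognitive interpretation.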

II. Machine Learning supports the centrality of a-model naked prediction

The usual context in which prediction is discussed in philosophy of science is one where predictions are derived by deduction from initial data conjoined to some theoretical model that is interpreted (realized) in terms of that theory’s ontology, whether that be atomic or field, dynamical or functional, etc. When such a model is not available, neither is prediction. But there are myriad cases where this requirement is not satisfied, where we can determine that the system has changed but there is no satisfactory interpreted model of how that happens. Examples include processes too complex to follow, as occurs in complex systems where dynamical form may transform and critical and chaotic regimes arise, and/or data too vast for humans to comprehend, as occurs with increasing frequency (in big business, in bio-molecular science, etc.). ML offers naked prediction in every case, whether or not those predictions also follow from an interpreted model, and

1 For an introduction to machine learning see, e.g., Wikipedia, 2016. Cf. Ray 2015. For the sub-categorical nature of the neural net algorithm see Smolensky, 1988. For more on neural nets see Churchland 1989, Hooker 1993.

is often our only way to advance research and understanding. Given this, naked prediction seems more central to science than modelling does. Why not then accept that science is based solely on naked prediction, dismiss the call to understand the theoretical world beyond it as a siren call, and argue that the diverse goals of science (prediction, explanation, unification, etc.) can all be parsed in terms of naked predictive performance alone? Note that this is not a proposal to remove this or that class of models, but rather the claim that it is not necessary to use theoretical models at all, indeed not necessary to use theory at all (Breiman 2001). And with that goes any claim that science guides choice of ontology. The centrality of a-model naked prediction is by default an anti-realist position. The only commitments to human-scale empirical elements (input and outcome data, measuring instruments, etc.) are those required by a recurring phenomenological empiricism. ML still requires a human to identify a prediction task, choose a competent ML algorithm, specify relevant inputs and outputs, and procure adequate training and experimental data sets. These are not at present interpretation-free activities, which raises the prospect that ML illegitimately smuggles in interpretive models.2 Some entertain finessing the problem by use of all possible data of all possible kinds to produce a universal correlation web. We will return to this issue in §V below. Meanwhile, the point remains that ML does allow us to avoid an interpretable characterization of the relationship between these inputs and outputs once they are defined.
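To fix ideas, a sketch of “naked prediction” (assuming Python with scikit-learn; the data are synthetic and the random-forest choice is ours): a black-box learner is fit to input-output records and then emits numbers, with no accompanying story about entities or mechanisms.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Synthetic records of some system's inputs and measured outputs;
# suppose no satisfactory interpreted model of it is available.
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) * np.exp(-X[:, 1] ** 2) + rng.normal(0, 0.05, size=500)

# Fit a black-box learner: it estimates the correlational pattern
# without positing anything about what lies behind it.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# "Naked predictions": numbers stripped of ontological interpretation.
print(model.predict([[0.5, 0.1], [-2.0, 1.3]]))
```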

III. History of this cleavage

Science has been constantly faced with this issue. It is, for example, the motivating division between those who, a century and more ago, pushed for a science of psychology founded on revealing the inner structure and contents of minds (e.g., Wundt) and those who urged confinement to just extracting patterns within stimuli and responses (e.g., Skinner). Back then, however, the debate was carried by high philosophical differences. Now the development of ML forces the issue in an immediate, unavoidable way. The wheel has also turned full circle: Skinnerian empiricism was replaced by computational cognitivism’s quasi-internalist modelling, only to be challenged in turn by ML’s externalist agnosticism. More broadly, this

2 An important (to humans) aspect of this is the issue of how to deal with ethical and social biases that are exhibited in the data sets we give it, e.g., prejudicial attitudes toward, and institutional barriers to, opportunities for people of colour in the US. Do we allow ML to perpetuate these biases? See the recent workshop on Machine Learning for Social Good (Faghmous et al. 2016). It is possible to design ML to avoid these biases, but only when there is an independent human decision to do so.

methodological difference is the deepest expression of the realist/anti-realist cleavage that runs through the philosophy of science.3

IV. The central question and its answer

Is science satisfactorily characterizable in terms of prediction alone (“naked prediction”) or is there something more that is essential to science? A second question follows: by what criteria ought the preceding question be decided? A popular and reasonable response to this latter question is that any additional end proposed for science ought to “pay its way,” i.e., to add something that is valuable independently of the value that naked prediction brings. Naked prediction serves the inherent epistemic value of naked pattern knowledge (and with that naked explanation), and also the pragmatic value of control: we can manipulate the world (including ourselves) to achieve various pragmatically valuable ends, from engaging in research to building bridges and curing maladies. What then of any ends of science beyond what prediction supports? The most compelling answer, we suggest, is the provision of an intelligible conception of deep or underlying being (ontology). What sort of being or beings is the cosmos? How does it work? How are living beings constituted as living? What are the ends (if any) inherent in their being? These and related questions are all central to our personal and collective drive for enlightenment. If anything beyond manipulative or naked pattern knowledge has inherent value, it is ontological enlightenment. This notion is tied to theoretically interpretable ontology, and hence to realist interpretable modelling.

V. Supports, persuasive and not, for interpretable modelling

Simplicity. Might simplicity considerations support interpretable modelling? ML is vulnerable to an Occam’s-razor-type argument in that its models are more complex than needed if an interpretable model will do the job. But the prediction-alone approach equally avoids commitment to any ontological categories beyond the empirical phenomena of experimentation, etc. noted above, and in any case these categories are shared with interpretable modelling. And with that, the prediction-alone approach also avoids evaluating theory coherency, historical sequences of theories that might support realism, and much more (cf. n. 3). Moreover, as input-output data increases, the ML output converges on the predictions of a true model, any differences of predictive accuracy go away, and any differences in output

3 See, e.g., van Fraassen versus realists in Churchland and Hooker 1985, and Hooker’s recent review in Hooker 2011, §6.2.5.

simplicity go with them. That is, arguments about simplicity (or, for instance, Popperian criteria) are tacitly predicated on the presumption that the only scientific method is one based on interpretable models, which begs the question. ML as a scientific process adjusts for the complexity of data; it is not so much unfalsifiable as self-correcting. Our conclusion is that simplicity is too complex, so it does not produce a useful test. Risk. ML similarly cannot convincingly claim that it takes on less risk than does interpretable modelling (or vice versa). While ML’s prediction-only approach avoids the ontological risks of error noted above, it still risks using an inadequate method to resolve small differences between predictions, by not including relevant input sets and/or by not having sufficient internal discrimination (e.g., a neural net with too few nodes per layer) or by having insufficient or noisy data. (This is the problem faced in §II.) However, lacking any guidance from what is there (being ontologically non-committal), the only option available is to include all possible input data to an infinitely internally diverse algorithm and to trust it will remove the risks. These requirements are certainly beyond practical realization, especially for finite beings that commence in ignorance and remain fallible, but is such an algorithm nonetheless in principle able to do the job if realized? We suppose not, but leave the question open.4 Either way, our conclusion is that risk is also too complex, so it does not produce a useful test. Efficiency. Interpretable models provide important categories of information that ML cannot in principle provide, and interpretable models provide it in immediately intelligible and usable forms. In general, interpretable models tell us about the states and processes that studied systems exhibit, while ML can only characterize input-output relationships. I can transfer the force law that holds our solar system together, found out from its correlational data, to any other solar system without alteration, and so start working out its physics immediately. However, it will be replied, whenever practical use is to be made of this information, whether for astro- or astrodome physics, it always suffices directly to use the relevant input-output relationships. If so, the intermediary details are merely one way to arrive at the outputs but are no more essential to doing so than are the particular internal details of a competent ML algorithm. Thus far our preference for interpretable details is just that: an affectation or cultural habit that does no independent work. There is this further consideration. An interpretable model comes with a framework of dynamical modalities: possibility, actuality, necessity, including super-possibilities and super-necessities for how they may change

4 Cherniak 1986 is a standing rebuke to anyone tempted to think that discriminating what is “in principle” possible (impossible, necessary, etc.) is straightforward.

their specific versions of these, for example under bifurcations. What a fundamental theory of physics primarily tells us is what the dynamical possibilities and necessities are in some domain; the rest is input-output relations (called flows or vector fields). ML algorithms, being confined to naked data and their relationships, cannot in principle provide the modal information.5 But again comes the response that, while modality may make it easier for us to make predictions, it is also unnecessary for that. Like our interest in states and processes, our interest in modality is just that: a pragmatic preference deriving from ease of human use when that was an important consideration, perhaps, but not a distinction that does any independent work. But before acquiescing, note one potential sticking point: some complex dynamical systems show bifurcation, a sharp change in dynamical form itself that amounts to a change in what is dynamically possible and necessary, and that raises the question of whether ML can adequately characterize these changes (with whatever internal discriminators it uses). There may be reason to think not, and if so we have one ineliminable functional argument for retaining interpretable models, in addition to their inherent value for us.6 Unification. The value of ontological understanding is supported, we contend, by the partial but deep successes of a closely related scientific end: theoretical unification. A unification requires provision of a single unifying ontology, thus driving ontological understanding deeper through transformation of the possibility-necessity framework. The unification of electromagnetic and mechanical dynamics to form relativity theory provides a deep revelation of our spatiotemporal being in the world. Again, the currently developing unification of biological chemistry and quantum macromolecular theory will provide a deep understanding of living beings. And so on. However, following the preceding indecisiveness, we must ask whether machine learning can again provide all the relevant input-output relationships. If machine learning is presumed to have no limitations to its capacity to form these relationships, it will develop input-output representations of systems that are pre-unification and systems that are post-unification, but of course as naked inputs and outputs, not characterized in interpretable ontological terms. Again, there is nothing further needed

5 This is essentially Sellars’s critique of anti-realist empiricism (see, e.g., Sellars 1965). 6 One argument is this: machine learning by definition uses computational algorithms, but all computer programs are finite and so must generate rounding-off errors. Thus dynamical processes that can generate large changes from differences, no matter how small, as bifurcation can, cannot be accurately characterised computationally, whether by ML or not, because computational processes must ultimately use finite rounding off. We do not consider this argument final, but this is not the place to pursue the issue and it is anyway likely premature to do so at this juncture.

for the use of the predictions. (There remains their inherent value for us.) However, most or all of these inter-theory relations are specified in terms of asymptotic relations (1/c → 0, h → 0, etc.), and these also involve indefinitely fine discriminations, so the preceding doubt (n. 6) remains.
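The worry in n. 6 can be made vivid with a standard toy case (our choice of illustration, not the authors’): the logistic map in its chaotic regime, where a perturbation of 1e-12, far below ordinary rounding error, swamps the computed trajectory within a few dozen steps.

```python
# Logistic map x -> r*x*(1-x) at r = 4 (chaotic regime).
# Two trajectories started 1e-12 apart decorrelate quickly, so any
# finite rounding-off eventually dominates the computed dynamics.

r = 4.0
x, x_perturbed = 0.3, 0.3 + 1e-12

for step in range(1, 61):
    x = r * x * (1.0 - x)
    x_perturbed = r * x_perturbed * (1.0 - x_perturbed)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x - x_perturbed):.3e}")
```

The separation grows roughly geometrically, so after about forty steps the two runs differ at order one; dynamics this sensitive pose the same difficulty for any finite computational characterization, ML included.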

VI. Realism in a partially accessible world

We live in a world that quickly becomes epistemically inaccessible to us beyond the simplest systems. We may know bare Newton’s equations, but when joined with specific interactions and constraints for actual use, the equations quickly become insoluble in algebraic functions beyond their simpler forms. This bars our way to interpretable knowledge of them. But since we know the equations, we can in principle acquire a lesser but still interpretable knowledge of them by numerically approximating their states in sequence, beginning from initial inputs. Since we are not in general able to infer parametric models from these data, or even much (though not no) knowledge of possibility and necessity, this is not very different from pure input-output knowledge. (The exception is that the basic components [this lever, those atoms, etc.] will be interpretively known throughout, so long as they are not dissipated by the dynamics.) But this is not the only part-way house of knowledge that can arise. Suppose, for example, that Newton could not deduce the gravitational principle that held the solar system together, but had access to ML to apply to astronomical data. Suppose that he could isolate the samenesses and differences in the internal ML states corresponding to a variety of planets and moons. What could he learn? The inverse square law? No. This is not because the states are naked (we want the spatial pattern, not its ontology), but because what could easily, and would likely, appear is a very complex set of numerical interrelationships taking perhaps hundreds of pages to print out, making for a wholly uninterpretable, hence inaccessible, representation of gravity. But while we cannot make use of it, the ML device can. Realists need interpretable access to dynamics. They can simply choose to live within the limits, often quite severe limits, on this access, pragmatically constructing interpretable models whenever possible. But while this may be a valid policy for the setting,7 the preceding considerations raise again the issue of what the criteria are for interpretable models. Are they simply the entities and relations that we humans find sufficiently convenient to use, say because of the way our perception and action have evolved? Or is there something more species-transcending (and so objective) behind their characterization? Suppose a Martian were to claim, as someday an ML robot might, that

7 This is where one of us (CAH) left the discussion in a first essay into the domain (Hooker 2011) despite the informal efforts of the other (GJH).

it was comfortable employing the hundreds-of-pages conception of gravity. Does it have an interpretable model? Where does that leave our notion of an interpretable model? It is not enough to say that an interpretable model is what a theory quantifies over; this only shifts the question back to what constitutes an intelligible theory. Is it one that quantifies over entities intelligible to us?8 These questions loom larger than this issue, reaching out to considerations of how diversity in cognitive capacities ought to bear upon accounts of objectivity, adequate translation, and so on. Yet they are also relevant here, since ML bids fair to being a domain of distinctive cognitive capacities that is destined to expand and deepen, and to intertwine itself with us in myriad ways.
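For the “numerically approximating their states in sequence” route mentioned at the start of this section, a minimal sketch (Python with NumPy; the system, units, and the symplectic-Euler step are illustrative assumptions): the state becomes known only step by step, never as an algebraic solution.

```python
import numpy as np

GM = 1.0    # gravitational parameter (illustrative units)
dt = 0.01   # step size

pos = np.array([1.0, 0.0])  # initial position
vel = np.array([0.0, 1.0])  # initial velocity (near-circular orbit)

# Step Newton's equations forward in sequence from the initial inputs
# (symplectic Euler: update velocity first, then position).
for _ in range(2000):
    acc = -GM * pos / np.linalg.norm(pos) ** 3  # inverse-square attraction
    vel = vel + dt * acc
    pos = pos + dt * vel

print(pos, vel)  # the state ~3 orbits later, known only numerically
```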

VII. Conclusion

The inherent value of an ontological description remains, for us, a compelling defence of realism. However, access to it is limited, and successful naked prediction challenges traditional conceptions of the scientific method and philosophy of science, whether realist or not. We expect these challenges to grow as machine learning presents an alternative approach to an ever-wider variety of tasks, resulting in a mode of scientific advance that is alien to our present philosophical conceptions. This will change the terms of the realist debate and many others.

Giles Hooker Department of Biological Statistics and Computational Biology 1186 Comstock Hall Cornell University Ithaca, NY 14850 United States [email protected]

Cliff Hooker Department of Philosophy Newcastle University Callaghan, NSW 2308 Australia [email protected]

8 Compare the variety of conceptions of model interpretability as some form of intuitability that Lipton (2016) reveals.


References

Breiman, Leo. 2001. Statistical Modelling: The Two Cultures. Statistical Science 16(3): 199-215.
Cherniak, Christopher. 1986. Minimal Rationality. Cambridge, MA: Bradford/MIT Press.
Churchland, Paul. 1989. A Neurocomputational Perspective. Cambridge, MA: Bradford/MIT Press.
Churchland, Paul, and Cliff Hooker, eds. 1985. Images of Science. Chicago: University of Chicago Press.
Faghmous, James, et al. 2016. “#Data4Good: Machine Learning in Social Good Applications.” https://arxiv.org/html/1607.02450. Accessed March 31, 2017.
Hooker, Clifford A. 1985. Surface Dazzle, Ghostly Depths. In Churchland and Hooker 1985, 153-196.
Hooker, C. A. 1996. Toward a Naturalized Cognitive Science: A Framework for Cooperation between Philosophy and the Natural Sciences of Intelligent Systems. In Psychology and Philosophy: The , eds. William O’Donohue and Richard F. Kitchener, 184-206. London: Sage.
Hooker, Cliff. 2011. Introduction to Philosophy of Complex Systems. Part B: An Initial Scientific Paradigm + Philosophy of Science for Complex Systems. In Philosophy of Complex Systems, ed. Cliff Hooker, 843-912. Amsterdam: North Holland/Elsevier.
Lipton, Zachary C. 2016. The Mythos of Model Interpretability. arXiv:1606.03490v1.
Ray, Sunil. 2015. “Essentials of Machine Learning Algorithms (with Python and R Codes).” https://www.analyticsvidhya.com/blog/2015/08/common-machine-learning-algorithms. Accessed March 31, 2017.
Sellars, Wilfrid. 1965. Scientific Realism or Irenic Instrumentalism: A Critique of Nagel and Feyerabend on Theoretical Explanation. In Boston Studies in the Philosophy of Science, vol. 2, eds. R. Cohen and M. Wartofsky, 171-204. New York: Humanities Press.
Smolensky, Paul. 1988. On the Proper Treatment of Connectionism. Behavioural and Brain Sciences 11(1): 1-23.
Wikipedia: The Free Encyclopedia. s.v. Outline of Machine Learning. https://en.wikipedia.org/wiki/List_of_machine_learning_concepts. Accessed March 31, 2017.

REVIEW: Rebecca Lemov. Database of Dreams: The Lost Quest to Catalog Humanity

Author(s): Jennifer Fraser

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 183-185.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.27204


Reviews

Rebecca Lemov. Database of Dreams: The Lost Quest to Catalog Humanity. 354 pp. New Haven: Yale University Press, 2015.

Jennifer Fraser*

2015 was a big year for “Big Data.” In February filming began on Snowden, a dramatized version of the life of the National Security Agency whistleblower who, in 2013, made a series of revelations about American domestic surveillance practices. In June, Angela Merkel argued that Germany should get on board with large-scale information collection or “risk being left behind in the global Big Data gold rush” (Marr 2015). Finally, November saw the world premiere of Spectre, the latest addition to the James Bond franchise. In this fast-paced spy flick, Bond battles a multinational data processing network called “Nine Eyes” that threatens to collect and analyze metadata about the electronic lives of the world’s population. Aside from popular culture and politics, Big Data also became a topic of interest for many historians of science. The year 2015 saw the publication of many scholarly works that historicize the data sciences, one of the most prominent being Rebecca Lemov’s Database of Dreams: The Lost Quest to Catalog Humanity. Like Merkel, Snowden, and Bond, Lemov grapples with the emergence of data-driven modes of knowledge production. However, rather than examining the cultural and political consequences of collecting, processing, and storing large amounts of information, Lemov looks at the material culture of Big Data, focusing especially on the tools and technologies that comprised the “database of dreams.” This little-known effort to create and preserve sociological knowledge was carried out by the Harvard-trained social science researcher Bert Kaplan during the mid-twentieth century, and was one of the first attempts to capture and preserve the raw data of psychologists involved in the culture and personality movement. By examining the communities involved in building this behavioural science databank, and the lists, forms, filing systems, and storage facilities necessary to make this information legible, Lemov sheds light on how the dream of

* Jennifer Fraser is a PhD candidate at the University of Toronto’s Institute for the History and Philosophy of Science and Technology. Her dissertation charts the history of Canadian cancer epidemiology, focusing specifically on how cancer control campaigns were advertised and applied to Indigenous communities during the mid-twentieth century.


“total data management” came to be applied to the behavioural sciences during the late 1940s to 1960s. In order to uncover this “lost history of data” (p. 1), Lemov divides her work into ten sections, with each portion describing a tool or technology that contributed to Bert Kaplan’s effort to create his dream repository. Starting with the paper technologies that made this kind of research possible, Database of Dreams charts the history of projective tests, dream analyses, and life history interviews (Section 1). Highlighting how these research methods provided social scientists not only with the means of capturing the inner lives of entire populations but also with large amounts of paperwork, Lemov demonstrates how sociological methods and materials converged to produce a new kind of data that was able to reveal the inner lives of diverse individuals (Section 2). After chronicling how dream data came to be produced, Database of Dreams explores mid-twentieth-century ideas about how these materials should be preserved. To do this, Lemov charts Bert Kaplan’s personal trajectory and describes how he became interested in the large-scale collection and conservation of dream data. She also discusses the history of micro-techniques and other storage technologies that made the preservation of large amounts of personal information possible (Sections 3-5). The second half of the book recounts how Kaplan’s vision for a dream databank was actualized through the combined efforts of administrators (Section 6), anthropologists (Section 7), research participants (Section 8), and library scientists (Section 9). After making it clear that Kaplan’s archive was the end result of a congeries of techniques, tools, technologies, and individuals, Lemov concludes her work by reflecting on the ephemeral nature of data and considering what Kaplan’s experiment in data collection tells us about past and present dreams of “total information” (Section 10). It is hard to argue with Jamie Cohen-Cole’s assertion, quoted on the back cover of Database of Dreams, that Lemov’s book is well-researched and impressively argued. It is also as entertaining as it is informative. From Hermann Goering’s Rorschach test results to Sherlock Holmes’s cameo in her chapter on microphotography, Database of Dreams delights readers at every turn. One area I wish Lemov had engaged with a little more is the existing literature on biocolonialism. In recent years, historians like Joanna Radin, Emma Kowal (2015), Kim TallBear (2013), and Warwick Anderson (2008) have started to consider the medical and economic value afforded to Indigenous body parts during the mid-to-late twentieth century. Although these historians have tended to focus their attention on biospecimens, such as DNA, blood, brains, and other frozen tissue sections, Kaplan’s dream data can also be seen as biocapital, as the collection, preservation, and exchange of this introspective information is a form of embodied work performed by social scientists’ Indigenous research subjects. It would have been nice to

hear more about the socio-political consequences that mid-twentieth-century data-mining practices had on the populations from which a large amount of sociological information was extracted. That being said, Database of Dreams is beautifully written, and is essential reading for anyone interested in the history of information management, or in the social or behavioural sciences more generally. It also fits in nicely with the growing trend of treating today’s “data deluge” as a historical phenomenon, as exemplified by historians like Dan Bouk (2015), Dana Simmons (2015), and the members of the new Max Planck working group on “Historicizing Big Data.”

Jennifer Fraser Institute for the History and Philosophy of Science and Technology University of Toronto Victoria College, Room 316 Toronto, ON M5S 1K7 [email protected]

References

Anderson, Warwick. 2008. The Collectors of Lost Souls: Turning Kuru Scientists into Whitemen. Baltimore: Johns Hopkins University Press.
Bouk, Dan. 2015. How Our Days Became Numbered: Risk and the Rise of the Statistical Individual. Chicago: University of Chicago Press.
Marr, Bernard. 2015. Big Data: 12 Amazing Highs and Lows of 2015. Forbes.com. https://www.forbes.com/sites/bernardmarr/2015/12/27/big-data-12-amazing-highs-and-lows-of-2015/#1a5127cbc09f. 27 December 2015.
Radin, Joanna, and Emma Kowal. 2015. Indigenous Biospecimen Collections and the Cryopolitics of Frozen Life. Journal of Sociology 51(1): 63-80.
Simmons, Dana. 2015. Vital Minimum: Need, Science & Politics in Modern France. Chicago: University of Chicago Press.
TallBear, Kim. 2013. Native American DNA: Tribal Belonging and the False Promise of Genetic Science. Minneapolis: University of Minnesota Press.

REVIEW: Douglas A. Vakoch and Matthew F. Dowd, The Drake Equation: Estimating the Prevalence of Extraterrestrial Life through the Ages

Author(s): Andrew Oakes

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 186-191.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.


Reviews

Douglas A. Vakoch and Matthew F. Dowd, eds. The Drake Equation: Estimating the Prevalence of Extraterrestrial Life through the Ages. 319 pp. Cambridge, U.K.: Cambridge University Press, 2015.

Andrew I. Oakes*

Fifty-seven years ago an American astronomer sketched out a seven-variable equation for discussion with his like-minded colleagues. Presented to a small ten-person invitational gathering in 1961 at the National Radio Astronomy Observatory in Green Bank, West Virginia, the equation was meant to stimulate debate and to ponder systematically the likelihood of extraterrestrial intelligent life. The meeting’s ultimate aim was to see whether the number of in-principle observable technical civilizations in the cosmos could actually be determined. Frank Drake was the astronomer who penned the equation “as a way to organize thinking and discussion” (p. 183). In time, the original equation became known as the Drake Equation. With the publication of The Drake Equation: Estimating the Prevalence of Extraterrestrial Life through the Ages, co-editors Douglas A. Vakoch and Matthew F. Dowd have assembled a very timely and scholarly volume that brings this almost three-score-year-old (and still ongoing) discussion up to the present. The book is recognized as the first stand-alone volume on the topic. Professors Vakoch of the SETI Institute and Dowd of the University of Notre Dame Press collaborated in gathering a fourteen-chapter book, which contains a foreword by Frank Drake, an introduction placing the Drake Equation in context by astrobiologist Steven J. Dick, and an afterword by physicist Paul C. W. Davies. In his wrap-up observations, Davies notes that, as of today, the first three terms of the Drake Equation refer to factors “that are now known with reasonable precision [compared to 1961] ... for meaningful statistics to be developed,” while the remaining four factors, “the biological ones,” have not experienced comparable “progress” and have “not been matched by a similar leap in understanding” (p. 298). Any university student or historian and philosopher of science will be intrigued by this book, not only by the many-millennia-old question of the

* Andrew Oakes is a PhD student at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. His current research focuses on the early history of professional astronomy in Canada.

plurality of other worlds and their possible extraterrestrial civilizations, but also by the modern, appropriately science-based approach this discussion has now entered, taking the question from the pre-Drake science-fictional domain to a more robust level of scientific analysis that allows researchers to conduct a serious investigation of this age-old subject. With the high calibre of scientists contributing their analyses to the volume, it is clear that it is no longer perceived as a career-limiting move to pursue such thought-provoking questions.1 The remarkable progress in astronomy since 1961, especially in astrobiology, has elevated the topic to a serious level of discussion, whereby a firm answer can indeed one day be given—one way or another—after much more work is done on the astronomical, social, and, especially, astrobiological factors. The multi-chapter text is the last in an eight-volume Cambridge University Press Astrobiology Series. In his foreword, Frank Drake notes that in 1961 “very few people worked on the variables: today the number of workers is not really known but is surely in the many thousands”; and he adds, “Now it is time to stand back and look at where we are. That is what this book is about” (p. xvii). We are informed by the co-editors that there are two specific reasons for writing this book. The first is to provide a comprehensive review and analysis of each term of the equation. This is done expertly. The book succeeds admirably in providing a solid “resource for scientists and graduate students actively conducting research in the many disciplines related to astrobiology” (p. xx). The second reason is to place contemporary astrobiological research in its historical context. This is done effectively by featuring two chapters for each term of the seven-termed equation. The first chapter on each term covers the concept under review prior to the 1961 Green Bank conference, while the second, companion chapter examines developments from 1961 to the present. The twinned-chapters approach embodies a very creative way to outline the state of affairs then and now. Each chapter, whether it deals with the pre- or post-1961 period, offers comprehensive historical background information and abundant bibliographic references that would be welcomed by any serious scholar. The contributing authors include an array of internationally known expert researchers from the fields of astronomy, anthropology, astrobiology, history and philosophy of science, neuroscience and animal behaviour and intelligence, space science, and SETI (Search for Extraterrestrial Intelligence) research. The Drake Equation serves more than just the task of updating the Drake

1 Recall the late Carl Sagan’s 1985 science fiction novel Contact, where astronomer Dr. Ellie Arroway faces professional opposition in seeking extraterrestrial signals from space and loses her research funding at the hands of the ruling scientific establishment.


Equation itself. The book traces the thinking of many historical individuals from antiquity, through the Scientific Revolution and the Enlightenment, to modern times. The authors incorporate into their analyses well-known personalities who are often mentioned in studies of history, science, and philosophy. As a textbook, The Drake Equation appears as an indispensable companion for understanding the scope of astronomy through the ages, and the role of astrophysics and astrobiology in current terrestrial and space-based research, from our solar system to the depths of our galactic home and the wider universe. The book is indeed a welcome addition to the Cambridge series and has every potential of becoming a classic with a long life in print.
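For readers who want the seven variables in front of them, the equation has the product form N = R* · fp · ne · fl · fi · fc · L. A small calculator in Python (every numerical value below is an arbitrary placeholder, not an estimate endorsed by the book or this review):

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L, the expected
# number of detectable technical civilizations in the galaxy.
# All factor values below are placeholders.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.0,  # rate of star formation (stars per year)
    f_p=0.5,     # fraction of stars with planetary systems
    n_e=2.0,     # potentially habitable planets per such system
    f_l=0.1,     # fraction of those on which life appears
    f_i=0.01,    # fraction of those developing intelligence
    f_c=0.1,     # fraction producing detectable signals
    L=10_000.0,  # years a signalling civilization stays detectable
)
print(N)  # 1.0 with these placeholder values
```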

Andrew Oakes Institute for the History and Philosophy of Science and Technology University of Toronto Victoria College, Room 316 Toronto, ON M5S 1K7 [email protected]

REVIEW: Melinda Baldwin, Making Nature: The History of a Scientific Journal

Author(s): Andrew Oakes

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 189-191.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.


Reviews

Melinda Baldwin. Making Nature: The History of a Scientific Journal. 309 pp. Chicago & London: The University of Chicago Press, 2015.

Andrew I. Oakes*

There aren’t many scholarly books that begin with a warning to the reader. Historian Melinda Baldwin does just that, cautioning her readers upfront that they will be disappointed if they think her book will reveal the secret of getting an article published in the journal Nature, the subject of her narrative. We learn later on that, in the early twenty-first century, Nature rejects about 92 percent of submissions, whereas it accepted almost anything in the early years following its first publication on 4 November 1869. The author positions the journal as “arguably the world’s most prestigious scientific journal” (p. 21). Her book-length treatment of this now 147-year-old academic weekly focuses almost exclusively on the paper edition, although in the conclusion of her book she does discuss the rise of electronic publishing. Here she touches on such festering issues as peer review, restricted knowledge access, hefty subscription rates, and paying twice for knowledge—first through the taxpayer-supported academies and other publicly subsidised institutions that fund the research, and again through the cost of a full subscription or article purchases. Although many of these issues are modern-day concerns facing electronic academic publishing, they will eventually work themselves out in the marketplace as such issues normally do over time. To be sure, with her thoroughly solid research and an engaging writing style, Baldwin has produced an outstanding account of the life, times, and evolution of a British journal of science that has published continuously and profitably for a surprisingly long period, with an enormously effective international impact. Some scholarly books may warrant only a single read-through. Others call out for repeated attention and much deeper study. Such is the case with Baldwin’s Making Nature: The History of a Scientific Journal. The book is indeed a tour de force in its genre and worthy of detailed reading as well as wide dissemination among undergraduate and graduate students, and the practitioners/teachers

* Andrew Oakes is a PhD student at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. Andrew’s current research focuses on the early history of professional astronomy in Canada.

of the history of science. The book is nothing less than an informational gem in recounting the origins and evolution of Nature, the prestigious science journal that began its publishing life in the latter half of the nineteenth century in Victorian London and, to date, has had only seven editors-in-chief—namely such luminaries as astronomer Sir Norman Lockyer (1869-1919), science writer Sir Richard Gregory (1919-1939), co-editors agronomist A.J.V. Gale (1939-1962) and botanist L.J.F. Brimble (1939-1965), physicist Sir John Maddox (1966-1973), geophysicist David Davies (1973-1980), Sir John Maddox again (1980-1995), and, currently, physicist Philip Campbell (1995-present). Baldwin succeeds admirably in her goal of not simply listing “all the important papers that Nature has published,” but rather examining how “Nature has changed over time, what role it has played within the scientific community at different points in history, and how Nature both responded to and influenced changes in science” (p. 1). To the historian and the student, the book’s value rests in how Baldwin organizes the narrative and unpacks the history of the journal itself. Making Nature features eight chapters in which the author studies the shifting audiences and contributors; the changing of the British scientific guard; the application of the word “scientist”; the politics surrounding the First and Second World Wars and the interwar period; the concept of intellectual freedom; the impact of the Cold War; the rise of the United States as a scientific powerhouse; and the issue of scientific self-policing in the 1980s. Making Nature is Baldwin’s first book, a follow-up to her Princeton University doctoral dissertation, “Nature and the Making of a Scientific Community, 1869-1939,” completed in September 2010. A point of interest the author makes is that “journals have played almost no role in the literature on early twentieth-century scientific internationalism” (p. 12). She cites Professor Nikolai Krementsov, of the Institute for the History and Philosophy of Science and Technology at the University of Toronto, as noting in a recent book on international genetics congresses that journals are notably absent from the writing of scholars who have studied internationalism by looking only at international associations, research facilities, philanthropies, and societies. With Making Nature, Baldwin opens up a fresh opportunity for a clearer understanding of how the history of science is made. In her “Note to the Reader,” she argues that “Nature’s history reveals a lot about how the rules that govern science have evolved over time. It also reveals how scientific practitioners have thought about their place in society” (p. 2). She also underlines that “Nature’s editorial staff, readers, and scientific contributors were deeply aware of science’s relationship with politics, culture, and the social order” (p. 2). Making Nature offers an extensive bibliography of primary and secondary

One of the audiences Baldwin hopes will read her book is the regular readership of Nature, whom she identifies as practicing scientists, science journalists, and others who love science. If they do, Baldwin will certainly sell a lot of copies, and this group of readers will not be disappointed: the book is a worthy addition to any library collection.

Andrew Oakes Institute for the History and Philosophy of Science and Technology University of Toronto Victoria College, Room 316 Toronto, ON M5S 1K7 [email protected]

REVIEW: André Holenstein, Hubert Steinke, and Martin Stuber, eds. Scholars in Action: The Practice of Knowledge and the Figure of the Savant in the 18th Century, Volumes 1 and 2.

Author(s): Kristen M. Schranz

Source: Spontaneous Generations: A Journal for the History and Philosophy of Science, Vol. 9, No. 1 (2018) 192-195.

Published by: The University of Toronto DOI: 10.4245/sponge.v9i1.


Reviews

André Holenstein, Hubert Steinke, and Martin Stuber, eds. Scholars in Action: The Practice of Knowledge and the Figure of the Savant in the 18th Century, Volumes 1 and 2. 502+430 pp. Leiden and Boston: Brill, 2013.

Kristen M. Schranz*

The two-volume Scholars in Action: The Practice of Knowledge and the Figure of the Savant in the 18th Century is an ambitious collection of papers presented at a conference honouring the 300th anniversary of the birth of the Swiss polymath Albrecht von Haller. Although many chapters focus on Haller’s scholarly pursuits, the compilation also investigates the broader culture of knowledge production in eighteenth-century Europe, examining the ways savants, or learned individuals, from different social, cultural, and economic contexts used innovative forms of communication, representation, and institutional support to produce, validate, and apply new forms of knowledge. The chapters draw on a variety of visual and textual sources, including architectural plans, book reviews, pamphlets, personal letters, prefaces, travelogues, and even a diorama of a Freemason lodge.

The collection is divided into six parts, three in each volume, each grouping together papers that explore a specific type of scholarly activity. The first part demonstrates how savants gained support and recognition for their intellectual work by publishing and presenting themselves respectably, as well as through patronage and membership in esteemed institutions. Particularly notable is Laurence Brockliss’s chapter on the complexities of the Republic of Letters in eighteenth-century Europe. Building on his previous work on the “mini-Republics” of the Enlightenment (2002), Brockliss explores the systems of meritocracy and reciprocity that governed this constellation of scholarly correspondents. He also shows how the ideal of freely sharing useful information was often impeded by the reality of unequal knowledge exchange and impolite behaviour. Also of note is I. Fleßenkämper’s study of academic patronage in Scottish universities, which demonstrates that skillful lecturing was valued there more highly than publishing, a stark contrast to the wider Republic of Letters, which emphasized and rewarded publishing over teaching.

* Kristen M. Schranz is a PhD candidate at the University of Toronto’s Institute for the History and Philosophy of Science and Technology. Her research focuses on the Scottish chemist, author, and manufacturer James Keir, whose life and work are situated at the fruitful intersection of philosophical and practical chemistry in late-eighteenth-century Britain.

The second part of the first volume explores the ways scholars acquired, organized, and evaluated knowledge, including the way Haller’s book reviews helped guide his readers’ opinions and built up his scholarly reputation (see Florence Catherine’s and Anne Saada’s chapters). The third part emphasizes how learned individuals responded to the diverse political, religious, and scientific polemics of their day. Kirill Abrosimov does an excellent job challenging some key tenets of the Habermasian public sphere by showing how, in the case of Voltaire’s Calas affair, emotional appeals trumped rational arguments, and concealment of information was often more beneficial than the “greatest possible degree of publicity” (p. 365).

Turning to the second volume, part four traces how knowledge was presented and circulated by scholars. Private letters, published texts, journals, and travelogues enabled a new “global education” (see Karl S. Guthke’s chapter). In this part, Simona Boscani Leoni evaluates the critical role of correspondence networks and questionnaires in gathering and exchanging information about natural history. Also of interest is Clorinda Donato’s investigation of inter-language mediators in scholarly networks.

Looking more specifically at the production of knowledge, part five highlights practices of observation and experiment in the work of a savant. While Lorraine Daston scrutinizes observational methods, Simone De Angelis sheds new light on Haller’s medical controversy with Georg Erhard Hamberger, revealing that what was in dispute was not the evidence but rather different beliefs about sources of authority. Lastly, the sixth part examines the role of learned experts in advising and serving governmental bodies. In his chapter, Holger Böning establishes the new image of the Enlightenment scholar: preoccupied with knowledge for its own sake but also set on making knowledge practical for the common good. In Barbara Braun-Bucher’s research, Haller also emerges as a social climber who depended on numerous publications, a vast network of correspondents, and connections with noble leaders, advising the latter on topics of education, governance, legal matters, morality, and resource management.

While each chapter provides an excellent entry point into specific topics of interest, some parts of the editors’ introduction to the collection need greater clarity and organization. The first half of the introduction provides some key signposts for what the volumes accomplish, specifically the role and legitimation of the scholar in knowledge production and the importance of the social, political, and economic contexts of those operations. The introduction’s account of the six parts, however, leaves the reader with the impression that the whole compilation will focus on Haller. The editors summarize how the chapter groupings relate to parts of Haller’s life, but it is not made clear that some papers explore themes beyond Haller’s work.

Additionally, the editors’ preview of these thematic parts proceeds in a different sequence than the actual text, making it seem as though the introduction was composed before the sections were properly ordered. That said, at the end of the introduction the editors provide a useful discussion of linking both the history of scholarship and the history of science with more general studies of the early modern era. They identify three specific areas that require future research: the role of scholarly activities in government administration, the relationship between the history of science and our understanding of the Enlightenment, and the place of economic activities in history of science research. Still, concerning the last area, it is a little surprising to see the editors plead with the reader to “turn away from the heroes of science to consider those actors who brought about the merging of useful technical knowledge and the spirit of entrepreneurship” (p. 41). It could be argued that the various social, cultural, and spatial “turns” of the past few decades have already moved historians away from such a narrow research approach. As perhaps epitomized by Patricia Fara’s Science: A Four Thousand Year History (2009), historians now generally eschew the “heroes of science” in favour of the more “mundane” or hidden contributions to knowledge production.

As a whole, this compilation has much to offer. Since the contributors come from a wide range of academic fields (including the history of education, literature, medicine, politics, and science), the collection offers a multiplicity of interpretations and research sources that will appeal broadly to readers and researchers interested in the complexity and diversity of eighteenth-century scholars and scholarship. Whether through their growing interest in the natural world, the expansion of their publication outlets and correspondence networks, or the enlightenment of their bodies and minds, eighteenth-century scholars individually and collectively produced, evaluated, and diffused a new culture of knowledge that is worthy of detailed study. Although these volumes may be criticized as Eurocentric (they are based on conference papers in honour of Haller, after all), they purposefully bring to light recent scholarship from French-, German-, and Italian-speaking regions, discourses that are often left out of Anglo-American research. Not only were many of these chapters translated specifically for this publication, but the numerous footnotes throughout the text also shed light on a plethora of French, German, and Italian sources still unknown to most English-speaking academics.

Kristen Schranz Institute for the History and Philosophy of Science and Technology University of Toronto Victoria College, Room 316 Toronto, ON M5S 1K7 [email protected]

References

Brockliss, Laurence W. B. 2002. Calvet’s Web: Enlightenment and the Republic of Letters in Eighteenth-Century France. Oxford: Oxford University Press.

Fara, Patricia. 2009. Science: A Four Thousand Year History. Oxford; New York: Oxford University Press.
