THE ROUTLEDGE HANDBOOK OF SCIENTIFIC REALISM

Scientific realism is a central, long-standing, and hotly debated topic in philosophy of science. Debates about scientific realism concern the very nature and extent of scientific knowledge and progress. Scientific realists defend a positive epistemic attitude towards our best theories and models as representations of those aspects of the world that are unobservable to our naked senses. Various realist theses are under sceptical fire from scientific antirealists, e.g. empiricists and instrumentalists. The different dimensions of the ensuing debate connect centrally to numerous other topics in philosophy of science and beyond.

The Routledge Handbook of Scientific Realism is an outstanding reference source – the first collection of its kind – to the key issues, positions, and arguments in this important topic. Its thirty-four chapters, written by a team of international experts, are divided into five parts:

• Historical development of the realist stance
• Classic debate: core issues and positions
• Perspectives on contemporary debates
• The realism debate in disciplinary context
• Broader reflections

In these sections, the core issues and debates are presented, analysed, and set into broader historical and disciplinary contexts. The central issues covered include motivations and arguments for realism; challenges to realism from underdetermination and history of science; different variants of realism; the connection of realism to relativism and perspectivism; and the relationship between realism, metaphysics, and epistemology.

The Routledge Handbook of Scientific Realism is essential reading for students and researchers in philosophy of science. It will also be very useful for anyone interested in the nature and extent of scientific knowledge.

Juha Saatsi is Associate Professor of Philosophy at the School of Philosophy, Religion and History of Science, University of Leeds, UK.

ROUTLEDGE HANDBOOKS IN PHILOSOPHY

Routledge Handbooks in Philosophy are state-of-the-art surveys of emerging, newly refreshed, and important fields in philosophy, providing accessible yet thorough assessments of key problems, themes, thinkers, and recent developments in research. All chapters for each volume are specially commissioned and written by leading scholars in the field. Carefully edited and organized, Routledge Handbooks in Philosophy provide indispensable reference tools for students and researchers seeking a comprehensive overview of new and exciting topics in philosophy. They are also valuable teaching resources as accompaniments to textbooks, anthologies, and research-orientated publications.

Recently published

The Routledge Handbook of Philosophy of Pain
Edited by Jennifer Corns

The Routledge Handbook of Mechanisms and Mechanical Philosophy
Edited by Stuart Glennan and Phyllis Illari

The Routledge Handbook of Metaethics
Edited by Tristram McPherson and David Plunkett

The Routledge Handbook of Evolution and Philosophy
Edited by Richard Joyce

The Routledge Handbook of Libertarianism
Edited by Jason Brennan, Bas van der Vossen, and David Schmidtz

The Routledge Handbook of Collective Intentionality
Edited by Marija Jankovic and Kirk Ludwig

The Routledge Handbook of Pacifism and Nonviolence
Edited by Andrew Fiala

For a full list of published Routledge Handbooks in Philosophy, please visit www.routledge.com/Routledge-Handbooks-in-Philosophy/book-series/RHP

THE ROUTLEDGE HANDBOOK OF SCIENTIFIC REALISM

Edited by Juha Saatsi

First published 2018 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
711 Third Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2018 selection and editorial matter, Juha Saatsi; individual chapters, the contributors

The right of Juha Saatsi to be identified as the author of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
Names: Saatsi, Juha, editor.
Title: The Routledge handbook of scientific realism / edited by Juha Saatsi.
Other titles: Handbook of scientific realism
Description: New York : Routledge, 2017. | Series: Routledge handbooks in philosophy | Includes bibliographical references and index.
Identifiers: LCCN 2017032031 | ISBN 9781138888852 (hardback : alk. paper) | ISBN 9780203712498 (e-book)
Subjects: LCSH: Science—Philosophy. | Realism.
Classification: LCC Q175.32.R42 R68 2017 | DDC 501—dc23
LC record available at https://lccn.loc.gov/2017032031

ISBN: 978-1-138-88885-2 (hbk)
ISBN: 978-0-203-71249-8 (ebk)

Typeset in Bembo by Apex CoVantage, LLC

CONTENTS

List of contributors

Introduction: scientific realism in the 21st century
Juha Saatsi

PART I Historical development of the realist stance

1 Realism and logical empiricism
Matthias Neuber

2 The realist turn in the philosophy of science
Stathis Psillos

PART II Classic debate: core issues and positions

3 Success of science as a motivation for realism
K. Brad Wray

4 Historical challenges to realism
Peter Vickers

5 Underdetermination
Dana Tulodziecki

6 Kuhn, relativism and realism
Howard Sankey

7 Instrumentalism
Darrell P. Rowbottom

8 Empiricism
Otávio Bueno

9 Structural realism and its variants
Ioannis Votsis

10 Entity realism
Matthias Egg

11 Truthlikeness and approximate truth
Gerhard Schurz

PART III Perspectives on contemporary debates

12 Global versus local arguments for realism
Leah Henderson

13 Perspectivism
Michela Massimi

14 Is pluralism compatible with scientific realism?
Hasok Chang

15 Scientific progress
Ilkka Niiniluoto

16 Realism and the limits of explanatory reasoning
Juha Saatsi

17 Unconceived alternatives and the Strategy of Historical Ostension
P. Kyle Stanford

18 Realism, antirealism, epistemic stances, and voluntarism
Anjan Chakravartty

19 Modeling and realism: strange bedfellows?
Arnon Levy

20 Success and scientific realism: considerations from the philosophy of simulation
Eric Winsberg and Ali Mirza

21 Scientific realism and social epistemology
Martin Kusch

PART IV The realism debate in disciplinary context

22 Scientific realism and high-energy physics
Richard Dawid

23 Getting real about quantum mechanics
Laura Ruetsche

24 Scientific realism and primordial cosmology
Feraz Azhar and Jeremy Butterfield

25 Three kinds of realism about historical science
Derek Turner

26 Scientific realism and the earth sciences
Teru Miyake

27 Scientific realism and chemistry
Paul Needham

28 Realism about cognitive science
Mark Sprevak

29 Scientific realism and economics
Harold Kincaid

PART V Broader reflections

30 Realism and theories of truth
Jamin Asay

31 Realism and metaphysics
Steven French

32 Mathematical realism and naturalism
Mary Leng

33 Scientific realism and epistemology
Alexander Bird

34 Natural kinds for the scientific realist
Matthew H. Slater

Index

CONTRIBUTORS

Jamin Asay is Assistant Professor of Philosophy at the University of Hong Kong. He works in metaphysics, philosophy of language, and philosophy of science, focusing on the topics of truth and truthmaking. He is the author of The Primitivist Theory of Truth (2013).

Feraz Azhar is a philosopher of cosmology at Trinity College, Cambridge, UK. He has a PhD in theoretical physics and will, from 2017, be at Harvard University.

Alexander Bird is Professor of Philosophy at the University of Bristol. He is the author of Philosophy of Science (2nd ed., 2005), Thomas Kuhn (2000), and Nature’s Metaphysics (2007). His research interests include Kuhn, naturalism, epistemology, and the metaphysics of science.

Otávio Bueno is Professor of Philosophy and Chair of the Philosophy Department at the University of Miami. He works in philosophy of science, philosophy of mathematics, philosophy of logic, metaphysics, and epistemology. His latest book is Applying Mathematics: Immersion, Inference, Interpretation (with Steven French, OUP, forthcoming).

Jeremy Butterfield is a Senior Research Fellow in philosophy of physics at Trinity College, Cambridge, UK.

Anjan Chakravartty is Professor of Philosophy and Director of the John J. Reilly Center for Science, Technology, and Values at the University of Notre Dame, and Editor in Chief of Studies in History and Philosophy of Science. He is the author of A Metaphysics for Scientific Realism: Knowing the Unobservable (2007) and Scientific Ontology: Integrating Naturalized Metaphysics and Voluntarist Epistemology (2017).

Hasok Chang is the Hans Rausing Professor of History and Philosophy of Science at the University of Cambridge. He received his degrees from Caltech and Stanford and has taught at University College London. He is the author of Is Water H2O? Evidence, Realism and Pluralism (2012) and Inventing Temperature: Measurement and Scientific Progress (2004). He is a co-founder of the Society for Philosophy of Science in Practice (SPSP) and the Committee for Integrated History and Philosophy of Science.


Richard Dawid started his academic life in theoretical physics and is now a Senior Lecturer in philosophy of science at Stockholm University. He has worked on epistemic and ontic questions in fundamental physics and their relations to the general philosophy of science.

Matthias Egg teaches philosophy at the University of Bern, Switzerland. He is the author of Scientific Realism in Particle Physics: A Causal Approach.

Steven French is Professor of Philosophy of Science at the University of Leeds. He is Co-Editor in Chief of the British Journal for the Philosophy of Science and Editor in Chief of the Palgrave Macmillan series New Directions in Philosophy of Science. His most recent book is The Structure of the World: Metaphysics and Representation (2014), and his next one is Applying Mathematics: Immersion, Inference, Interpretation, with Otávio Bueno.

Leah Henderson is Assistant Professor and Rosalind Franklin Fellow at the University of Groningen. She has a DPhil in physics from Oxford University and a PhD in philosophy from the Massachusetts Institute of Technology. She works on a variety of topics in philosophy of science and philosophy of physics.

Harold Kincaid is Professor of Economics at the University of Cape Town. His early books were Philosophical Foundations of the Social Sciences (Cambridge 1996) and Individualism and the Unity of Science (Rowman and Littlefield 1997). He is the editor of the Oxford Handbook of the Philosophy of Social Science (2013) and co-editor of Scientific Metaphysics (Oxford 2013), What Is Addiction? (MIT 2010), Distributed Cognition and the Will (MIT 2007), Toward a Sociological Imagination (University Press, 2002), The Oxford Handbook of the Philosophy of Economics (Oxford 2009), Classifying Psychopathology (MIT 2014), Establishing Medical Reality (Springer, 2008), Value Free Science (Oxford 2007), and The Routledge Companion to the Philosophy of Medicine (2017), and he has written numerous journal articles and book chapters on the philosophy of science and social science. In addition to his philosophy of science work, Kincaid is also involved in multiple projects in experimental behavioral economics focusing primarily on risk and time attitude elicitation and addiction.

Martin Kusch is Professor of Philosophy of Science and Epistemology at the University of Vienna. He is currently Principal Investigator of ERC Advanced Grant 339382 (‘The Emergence of Relativism’, 2014–2019) and working on two books relating to this topic.

Mary Leng is Senior Lecturer in Philosophy at the University of York. She works on the philosophy of mathematics and science and more broadly on post-Quinean ontology. Her most recent research has focussed on parallels between some debates in the philosophy of mathematics and in metaethics. Her book Mathematics and Reality (2010) defends a fictionalist approach to mathematics.

Arnon Levy is Senior Lecturer in philosophy at the Hebrew University of Jerusalem. His research engages with a range of topics in the philosophy of science, with a special focus on the role of models and idealization. For further information, please visit www.arnonlevy.org.

Michela Massimi is Professor of Philosophy of Science at the University of Edinburgh. She works in philosophy of science and in particular on the history and philosophy of modern physics. She is currently the PI on a five-year ERC Consolidator grant dedicated to perspectival realism.


Ali Mirza is a doctoral student in the Department of the History & Philosophy of Science & Medicine at Indiana University, Bloomington. His research focuses on philosophical and historical dimensions of evolutionary theory – especially as related to the field of paleontology and with an emphasis on organism-environment interactions over geological timescales.

Teru Miyake is Associate Professor of Philosophy at Nanyang Technological University in Singapore and will be a fellow at the Radcliffe Institute for Advanced Study at Harvard University in 2017–2018. His research has focused on the growth of knowledge in particular sciences such as celestial mechanics and geophysics. He is also interested in the history of philosophy of science, particularly that of the 19th century.

Paul Needham is Professor Emeritus in Theoretical Philosophy at the University of Stockholm. He received his degrees in chemistry and philosophy from Birmingham and Uppsala. His interests include the philosophy of science, specialising in chemistry, related issues in metaphysics, and the work of Pierre Duhem.

Matthias Neuber is Academic Collaborator at the Department of Philosophy at the University of Tübingen. He is a co-editor of the Moritz Schlick Edition. His dissertation was on Schlick, Cassirer, and the ‘problem of space’. He has numerous publications on the history of logical empiricism.

Ilkka Niiniluoto is Professor Emeritus of Theoretical Philosophy at the University of Helsinki. His main works in the philosophy of science deal with inductive logic, abduction, explanation, theory change, scientific progress, truthlikeness, and critical scientific realism.

Stathis Psillos is Professor of Philosophy of Science and Metaphysics at the University of Athens, Greece, and a member of the Rotman Institute of Philosophy at the University of Western Ontario. He is the author or editor of seven books and of more than 135 papers and reviews in learned journals and edited collections, mainly on scientific realism, causation, explanation, and the history of philosophy of science. He is a member of the Academy of Europe and of the International Academy of Philosophy of Science.

Darrell P. Rowbottom is Professor of Philosophy and head of the Philosophy Department at Lingnan University, Hong Kong. He is also an Associate Editor of the Australasian Journal of Philosophy. He works mainly in general philosophy of science and philosophy of probability, and occasionally publishes in other areas such as social epistemology, metaphysics, and philosophy of mind. His textbook, Probability, appeared with Polity Press in 2015. He has recently completed a monograph, The Instrument of Science, in which he articulates and defends a new form of instrumentalism.

Laura Ruetsche is Louis E. Loeb Collegiate Professor of Philosophy at the University of Michigan. In her research, she explores the foundations of physical theories, particularly quantum theories, with an eye toward how issues in the foundations of physics might inform and be informed by more general issues in the philosophy of science.

Juha Saatsi is Associate Professor at the University of Leeds. He works on various topics in the philosophy of science, and he has particular interests in the philosophy of explanation and the scientific realism debate.


Howard Sankey is Associate Professor of Philosophy in the School of Historical and Philosophical Studies at the University of Melbourne, Australia. His areas of research and publication include semantic incommensurability, relativism and rational theory-choice, methodology, epistemic naturalism, and scientific realism.

Gerhard Schurz is Professor at the Department of Philosophy of the Heinrich Heine University of Düsseldorf, Germany, where he directs the Düsseldorf Center for Logic and Philosophy of Science (DCLPS). He is president of the German Association for Philosophy of Science. He does research in philosophy of science and logic, as well as epistemology and cognitive science. He is an editorial board member of Synthese, Erkenntnis, Episteme, Journal for General Philosophy of Science, and Grazer Philosophische Studien.

Matthew H. Slater is Associate Professor of Philosophy at Bucknell University, having received his PhD from Columbia University. He is the author of Are Species Real? (2013) and The Nature of Biological Kinds (forthcoming), and has co-edited such volumes as Carving Nature at Its Joints, The Environment, Reference and Referring, and Metaphysics and the Philosophy of Science: New Essays. He writes on issues in the philosophy of biology, the metaphysics of science, and social epistemology.

Mark Sprevak is Senior Lecturer in Philosophy at the University of Edinburgh. He works on philosophy of mind, philosophy of science, and metaphysics, with particular focus on the cognitive sciences.

P. Kyle Stanford is Professor and Chair of the Department of Logic and Philosophy of Science at the University of California, Irvine. He is the author of Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives and many further articles concerned with scientific realism and instrumentalism, as well as the philosophy of biology, the history of modern philosophy, and the philosophy of language.

Dana Tulodziecki is Associate Professor of Philosophy at Purdue University. She works in philosophy of science, and history and philosophy of science.

Derek Turner is Professor of Philosophy at Connecticut College, in New London, CT, USA, where he is also associate director of the college’s Goodwin-Niering Center for the Environment. He has held visiting fellowships at the University of Pittsburgh Center for Philosophy of Science (2008) and the KLI in Klosterneuburg, Austria (2015). He is the author of Making Prehistory: Historical Science and the Scientific Realism Debate (2007) as well as Paleontology: A Philosophical Introduction (2011).

Peter Vickers is Associate Professor and Reader in the Department of Philosophy at the University of Durham, UK. He studied mathematics and philosophy at York (UK), then completed an MA and PhD at Leeds, specialising in integrated history and philosophy of science. He was a Postdoctoral Fellow at the Pittsburgh Center for Philosophy of Science 2010–2011 before starting at Durham in 2011. He is the author of Understanding Inconsistent Science (2013).

Ioannis Votsis holds degrees from the University of California, Berkeley (BA 1998) and the London School of Economics (PhD 2004). He is currently Senior Lecturer at the New College of the Humanities and a fellow at the London School of Economics. He has previously taught at the universities of Bristol and Düsseldorf and has held visiting research fellowships at the University of Pittsburgh and the University of Athens. Among other things, he recently co-edited the volume Recent Developments in the Philosophy of Science: EPSA13 Helsinki.

Eric Winsberg is Professor of Philosophy at the University of South Florida. He is the author of Science in the Age of Computer Simulation (2010) and the forthcoming Philosophy and Climate Science. His research interests include models and simulations, the philosophy of climate science, and the philosophy of physics, as well as general philosophy of science.

K. Brad Wray is at the Centre for Science Studies at Aarhus University in Denmark. He has published on the anti-realism/realism debate, the social epistemology of science, and Kuhn’s philosophy of science. In 2011, his book Kuhn’s Evolutionary Social Epistemology was published by Cambridge University Press. He is one of the co-editors of the Springer journal Metascience.


INTRODUCTION
Scientific realism in the 21st century

Juha Saatsi

Realism debates in philosophy, like debates between political views, are an essential fibre in humanity’s reflective fabric. The debate about scientific realism, more specifically, is an essential part of our reflection on, and critical appreciation of, scientific knowledge, its nature, and its reach. Outside the realism debate many naturally adopt an uncritical stance according to which science unquestionably provides us with knowledge of quarks, electrons, DNA, black holes, quantum entanglement, and other mind-independent, unobservable features of reality that centrally feature in our best science. This arguably naïve stance is widely shared and unsurprising given the astonishing predictive and instrumental successes of science, but it is attacked from all sides in the realism debate. Although science commands authority as a source of empirical knowledge, there are serious philosophical challenges to any unsophisticated realist position. Defending anything like the uncritical stance quickly turns out to be hard work!

Scientific realists, it is sometimes said, take scientific theories (more or less) ‘at face value’. Instrumentalists, by contrast, regard theories as ‘mere instruments’ for predicting and manipulating observable phenomena. And whereas realists ‘believe in unobservable entities’ like electrons and quarks, empiricists ‘only care how well our theories save the observable phenomena’. Such crude caricatures are useful for initially fixing ideas, even if they quite fail to convey the intricate contours of the debate. What scientific realism in actual fact amounts to is perhaps best thought of in relative, contrastive terms (much like the political Left/Right). A scientific realist defends a degree of rationally justifiable optimism regarding scientific knowledge, progress, or representational adequacy with respect to directly unobservable features of reality, beyond what an anti-realist acknowledges. What does anti-realism amount to, then? The connotation is partly historical, associated with grand themes of empiricist, instrumentalist, social constructivist, and pragmatist traditions in philosophy, characterised in terms of broad categories of mind-dependence, language-dependence, relativity, and so forth. And partly it is again a relative matter of anti-realists challenging the kind of optimism that realists attest to. Such optimism, and challenges to it, can manifest themselves in a rich variety of ways, resulting in a range of more specific ‘-isms’ on both sides of the vague divide.

The debate between realists and anti-realists is a venerable one, going back beyond the logical positivist roots of professional philosophy of science. By now hundreds of noteworthy research articles and dozens of landmark publications have accumulated, spanning several decades, indicating the debate’s maturity and lively history. While some specific disputes inevitably have become stale, uninspiring, or even forgotten, the realism debate at large – like most illustrious topics in philosophy – is as vibrant as ever, due to its capacity to renew itself by finding fresh perspectives and positions on the issues at stake and by evolving with the sciences and the rest of philosophy of science. The key questions animating the debate, concerning the very nature of scientific knowledge and its extent, are still firmly at the core of philosophy of science. This handbook aims to provide an up-to-date review of the state of the debate, going now well into the 21st century. With 34 chapters written by many of the leading (anti-)realist thinkers and active participants in the current debate, it aims to provide solid grounding for further work.

Part I opens the volume with two chapters that map the historical trajectory of the contemporary realism debate from the logical empiricist era onwards. In order to properly grasp the shape of any grand philosophical ‘-ism’, one must appreciate the historical backdrop against which it has developed and evolved. For late-20th-century scientific realism, this backdrop was provided by logical positivism and logical empiricism, but one should not view the latter as a clear-cut foil to realism, since various realist tendencies can be traced back to logical empiricist philosophers themselves. The post-logical empiricist (re-)emergence of realism in the 1960s and 1970s – the ‘realist turn’ in philosophy of science – was characterised by a shift from verificationist semantics to a more literal construal of scientific theories, accompanied by an appreciation of the explanatory endeavour of science itself, and also of realism as an optimistic epistemology of science that best explains its empirical success. Much of today’s discussion around scientific (anti-)realism still takes place in the context of the now-classic debates of the 1970s and 1980s that followed the blooming realist revival.

Part II offers nine chapters of contemporary review of the core issues, arguments, and resulting ‘-isms’ of this golden era of the scientific realism debate. Anti-realists quickly challenged one of the new cornerstones of the ‘realist turn’: the simple but intuitively extremely attractive idea that one can defend realism by capitalising on the empirical success of science. One set of challenges turned on the (‘Duhem-Quine’) underdetermination thesis, alleging that all theories are underdetermined by data in a way that makes it impossible to empirically confirm theories in the way required for realist optimism. Another challenge followed the very influential historicist critiques of realism by, for example, Thomas Kuhn and Larry Laudan, in the spirit of integrated history and philosophy of science, suggesting that a form of relativism or constructivism provides a better image of science as the historicists know it. The more abstract concern of underdetermination partly motivated van Fraassen’s constructive empiricism – a hugely influential redevelopment of an antirealist tradition that views science as a matter of ‘saving the phenomena’. Realists’ resistance to the siren call of empiricism or instrumentalism – for those who did resist – was largely due to the unshakeable intuition that in empirically increasingly successful science as we know it, theories are by and large getting better as representations of reality.
The latter notion of theoretical progress has kept exercising realists ever since the 1970s, as they have attempted to articulate more precisely and more convincingly what (increasing) ‘approximate truth’ or ‘truthlikeness’ should amount to.

Part III comprises 10 chapters that focus on significant themes that have emerged over the past 15 or 20 years to stimulate the contemporary realism debate. The emergence of these themes is partly due to shifting interests and points of emphasis in philosophy of science at large. For example, the increasing interest in models and modelling practices in science – including simulations – is clearly influencing the way in which some of the key issues in the realism debate are conceived. While the way in which modelling essentially involves idealisations, and their relevance to the realism debate more specifically, has been much discussed in the literature since the 1980s, the idea that the modelling practices of science may support a distinctively perspectival realism about scientific knowledge is a much more recent one. Progress in other areas of philosophy of science has also had an impact on the contemporary realism debate. For instance, while much of the realist gambit since the ‘realist turn’ has revolved around the vindication of abductive reasoning (inference to the best explanation), only fairly recently has the epistemology of explanatory reasoning started to benefit from advances in the philosophy of explanation itself.

Some of the contemporary themes are fruitful reconceptualisations of old issues. The way in which the problem of unconceived alternatives reshapes the historicist anti-realist challenge of Laudan (and others) from the early 1980s is a case in point. The history of science is indeed still widely taken to provide a ‘testing ground’ of sorts, in the spirit of integrated history and philosophy of science (HPS), for various philosophical theses propounded in the realism debate, and further historical case studies and illustrations have kept throwing interesting new light on the viability of particular realist ideas. Using the history of science as ‘second-order’ evidence in the realism debate – complementing the ‘first-order’ scientific evidence – raises significant meta-level questions about the level of generality on which the realism debate should take place. For several decades, from the ‘realist turn’ onwards, the scientific realism debate has belonged firmly to general philosophy of science, which strives to provide a unified understanding of science on the whole, abstracting away from whatever methodological and other differences there are between individual sciences. But is scientific realism best construed as a ‘macro-level’ thesis about all of (mature) science? Or should the realist claim, and challenges thereof, be assessed at a more local level – even if not ‘micro’, perhaps as a ‘meso-level’ affair – without going the whole hog? While historically much of the realism debate has been conducted in global terms (experimental or entity realism being an exception), in the contemporary debate some prefer to defend and debate realism as a more local thesis, one discipline or domain of science at a time. And if we choose to discuss our epistemic commitments regarding all of science, perhaps realism and anti-realism are best characterised as epistemic stances, as opposed to theses to be defended by reason alone?

A more local approach to the realism debate is encouraged by the increasing specialisation within philosophy of science. For years there has been something of a disconnect between the philosophies of specific sciences and the scientific realism debate as conducted within general philosophy of science. This has recently begun to change as philosophers have started to pay more attention to the potential implications of discipline-specific issues – pertaining to, for example, underdetermination – in geology, historical sciences, quantum physics, cosmology, and so on. The eight chapters in Part IV examine (anti-)realism in specific disciplinary contexts, ranging from the aforementioned areas of science to high-energy physics, chemistry, cognitive science, and economics. The issue of underdetermination comes up in a specific and powerful way in the context of quantum physics and its different interpretations, for instance. This is just one example of the fascinating ways in which discipline-specific details can drive the realism debate further.
There is a lot more work to be done in this spirit, and hopefully the chapters here set a fruitful agenda for thinking about realism-related issues further in a way that is fully informed by the relevant, rich scientific details.

While one can examine issues of scientific knowledge and progress in a narrow disciplinary context, the broader connections of scientific realism to other debates in philosophy also loom large, lending further importance to the subject. Traditionally realism has been taken to have epistemic, semantic, and metaphysical dimensions. Corresponding to these dimensions, the five chapters of Part V discuss broader connections between the central realist tenets and theories of truth, epistemology, philosophical naturalism and the philosophy of mathematics, the status of metaphysics, and (more specifically) natural kinds.

I am delighted to have had the opportunity to edit the first-ever collection of this kind in the service of continuing research on this important and fascinating topic. The quality and richness of the work that I gratefully received from the contributors has given me further confidence in the bright future and continuing importance of this area of philosophy. No handbook can pretend to be 100% complete in its coverage of relevant issues, and there are some omissions that no doubt will be noted by the experts. But I trust the volume will achieve its central purpose: to provide a first-rate resource for researchers in philosophy, a pedagogical resource for philosophy of science lecturers and advanced undergraduate students, and, more broadly, a guide for anyone interested in cutting-edge philosophical reflections on the nature and extent of scientific knowledge.

PART I Historical development of the realist stance

1 REALISM AND LOGICAL EMPIRICISM

Matthias Neuber

1 Introduction

This essay argues that, in order to understand realism and its roots, we need to understand the emerging realist tendencies in logical empiricism and avoid a naïve juxtaposition between it and realism. The relation between the two currents is rather intricate. On the one hand, the logical empiricists rejected realism as an outdated, unwarranted, and entirely meaningless doctrine. On the other hand, they attempted to account for a realistic understanding of the language of science. It is for this reason that, according to the logical empiricist agenda, a distinction must be drawn between metaphysical ‘bad’ realism and empirical ‘good’ (or ‘scientific’) realism. Given this distinction, the relation between realism and logical empiricism can be discussed along two lines. There is, firstly, the well-known – and rather infamous – logical empiricist critique of metaphysics, according to which the realism issue is nothing but a ‘pseudo-problem.’ However, secondly, in the philosophy of science, the logical empiricists evidently sought a reconciliation of empiricist and realist components in our philosophical interpretation of scientific concept formation and theory construction. As will be shown in the following, there existed (at least) three varieties of such a conciliatory view: a ‘probabilistic,’ a ‘pragmatic,’ and an ‘invariantist’ version of the intended ‘empirical realism’. Yet in order to adequately understand the logical empiricist approach towards the debate over realism, we first need to consider the ‘canonical’ anti-metaphysical attitude.

2 Against ‘metaphysical’ realism

What is metaphysics? According to the early logical empiricists – especially the members of the Vienna Circle – a sentence proves metaphysical if it cannot be verified by means of perception or (in more liberal terms) observation. No wonder, then, that realism as a philosophical doctrine falls victim to the verificationist criterion of meaning. Thus Carnap, in his 1928 Pseudoproblems in Philosophy, categorically states: “In the realism controversy, science can take neither an affirmative nor a negative position since the question has no meaning” (Carnap [1928a] 1968: 333). And Schlick, in his essay “Positivism and Realism” from 1932, points out that realism has no place in science because “the ‘problem of the reality of the external world’ is a meaningless pseudo-problem” (Schlick [1932] 1979: 263). It is interesting to see that realism, in both cases, is explicitly confronted with science. Therefore, it seems to follow that a scientific realist position is anathema for the logical empiricists.

However, it must be seen that the logical empiricist critique of metaphysics was primarily directed against the global positing of an ‘external world.’ As early as 1926, Schlick argued against this sort of metaphysical conception in the following way:

It is not possible to formulate conceptually, or express in words, what existence or reality properly are. Criteria can of course be given, whereby we distinguish in science and daily life between the ‘really existent’ and the merely ‘illusory’ – but the question about the reality of the external world notoriously involves more than that. Yet whatever this ‘more’ may really be, which we have in mind when attributing existence to the external world, it is at all events wholly inexpressible. We have nothing against anyone attaching meaning to such a question, but must insist with all emphasis that this cannot be stated. (Schlick [1926] 1979: 100)

Moreover, in his 1931 lecture “The Future of Philosophy,” given at the Seventh International Congress of Philosophy in Oxford, Schlick pointed out:

Any cognition we can have of ‘Being,’ of the inmost nature of things, is gained entirely by the special sciences; they are the true ontology, and there can be no other. Each true scientific proposition expresses in some way the real nature of things – if it did not, it would simply not be true. So in regard to metaphysics the justification of our view is that it explains the vanity of all metaphysical efforts which has shown itself in the hopeless variety of systems all struggling against each other. Most of the so-called metaphysical propositions are no propositions at all, but meaningless combinations of words; and the rest are not ‘metaphysical’ at all, they are simply concealed scientific statements the truth or falsehood of which can be ascertained by the ordinary methods of experience and observation. (Schlick [1931] 2008: 301–302)

Thus according to the logical empiricist point of view, there is no place for a realistic metaphysics within our theoretical account of the world. It is empirical science by which questions of ontology are answered. Consequently, ‘scientific philosophy,’ as conceived of by the logical empiricists, turns out to be a form of naturalism.1

It is a well-known fact that there existed a further variant of metaphysics that was criticized especially by Carnap. As Carnap points out in his Der logische Aufbau der Welt, the term ‘metaphysics’ is used by some philosophers “for the result of a nonrational, purely intuitive process” (Carnap [1928b] 1968: 295). Carnap refers the reader to the writings of Henri Bergson and claims that “[i]n referring metaphysics to the area of the nonrational, we are in agreement with many metaphysicians” (ibid.). However, in his famous 1932 critique of Martin Heidegger’s philosophy of ‘Being,’ Carnap rejected the nonrational variant of metaphysics as completely meaningless. (There is much more to be said about this aspect of the logical empiricist critique of metaphysics, but that would require an essay of its own.)2

Coming back to the logical empiricist critique of the realist ‘external world’ metaphysics, it is important to note that this critique was directed not only against a particular philosophical system but also against the views of certain contemporary scientists, especially physicists. Thus, for instance, Schlick argued in his “Positivism and Realism” that those physicists who postulate an external world ‘behind’ the world of observable phenomena are on the wrong metaphysical track (cf. Schlick [1932] 1979: sect. III). One must see in this connection that Schlick’s argument was an implicit critique of his teacher Max Planck, who in “Positivismus und reale Außenwelt” rejected the positivist doctrine of Ernst Mach (rather aggressively) in favor of a ‘realist’ position according to which the external world must be presupposed in order to identify the causes of the very appearance and behavior of observable phenomena (see Planck [1931] 1932: esp. p. 82). Schlick commented on this as follows:

However we may twist and turn, it is impossible to interpret a reality-statement otherwise than as fitting into a perceptual context. It is absolutely the same kind of reality that we have to attribute to the data of consciousness and to physical events. Scarcely anything in the history of philosophy has created more confusion than the attempt to pick out one of the two as true ‘being.’ Wherever the term ‘real’ is intelligibly used, it has one and the same meaning. (Schlick [1932] 1979: 276)

Accordingly, for Schlick the theoretically postulated entities of physics, for example electrons, do not have the status of transcendent ‘things-in-themselves.’ They rather form part of Kantian empirical reality. In Schlick’s own words:

In order to settle the dispute about realism, it is of the greatest importance to alert the physicist to the fact that his external world is nothing else but the nature which also surrounds us in daily life, and is not the ‘transcendent world’ of the metaphysicians. The difference between the two is [. . .] quite particularly evident in the philosophy of Kant. Nature, and everything of which the physicist can and must speak, belongs, in Kant’s view, to empirical reality, and the meaning of this [. . .] is explained by him exactly as we have also had to do. Atoms, in Kant’s system, have no transcendent reality – they are not ‘things-in-themselves.’ Thus the physicist cannot appeal to the Kantian philosophy; his arguments lead only to the empirical external world that we all acknowledge, not to a transcendent one; his electrons are not metaphysical entities. (Schlick [1932] 1979: 278)

It must be kept in mind that by ‘empirical reality’ Schlick meant the whole realm of verifiable – and at the same time regular – perceptual connections. It is for this reason that he claimed that logical positivism (or what he alternatively called “coherent empiricism”) and empirical realism are compatible with each other. Everyone accepting the principle of verification, Schlick proclaimed, “must actually be an empirical realist” (Schlick [1932] 1979: 283).3

3 Three varieties of ‘empirical’ realism

3.1 Reichenbach’s probabilistic realism

Carnap, Schlick, and the other members of the Vienna Circle were not the only representatives of the logical empiricist movement. There was also the so-called Berlin Group around Hans Reichenbach.4 However, it was Reichenbach himself who drew a sharp distinction between the two groups, reserving the term ‘logistic empiricism’ for the Berlin group while characterizing the stance defended by the Viennese group, especially by Carnap in his Der logische Aufbau der Welt, as ‘logistic positivism.’5 Thus in his article “Logistic Empiricism in Germany and the Present State of Its Proponents,” Reichenbach states:


The tautological character of the positivistic system [. . .] could not justify the predictive character of science. The system, in its seductive symmetry, lacked one essential quality: it did not correspond to the meanings of propositions, as these are expressed in the practice of science. It could not develop a theory of propositions about the future. (Reichenbach 1936: 152)

And Reichenbach pointedly continues:

This was the precise reason why the Berlin group could not accept positivism. The members of this circle insisted upon the necessity of a theory of propositions about the future. They maintained that any philosophy which neglected the fact and function of propositions about the future in science flagrantly contradicted the very first condition of empiricism: viz., to correspond to the practice of science. (ibid.)

As Reichenbach had already complained in his 1931 review of Carnap’s Aufbau (see Reichenbach 1931), the Viennese – ‘positivistic’ – conception suffered from a systematic neglect of the inductive part of scientific theory construction. Verification by observable events and facts alone would not suffice to account for the “practice of science.” Rather, it is the prediction of as yet unobserved events and facts that, according to Reichenbach, defines the central aim of science. But exactly this precludes the task of ultimate verification. Or, as Reichenbach put it in his 1936 article, “propositions about the future can not be expressed to state certain truths” (Reichenbach 1936: 153). Rather, a (non-symmetric) “probability-logic” (154) must be established by which it becomes possible to “apply the laws and concepts of probability to reality” (156).

It was in his seminal Experience and Prediction from 1938 that Reichenbach systematically worked out this sort of probabilistic approach toward reality.6 His framework for designing the intended probabilistic and at the same time realist point of view was the theory of meaning, that is, semantics. What he proposed was what he called a “probability theory of meaning” (see Reichenbach 1938: §7), which he thought was strong enough to incorporate a semantics for theoretical terms, such as ‘atom,’ ‘electromagnetic field,’ or what have you (see ibid., §25). Reichenbach’s crucial point in this connection was the assumption of a surplus meaning of theoretical terms: the meaning of theoretical terms, according to Reichenbach, is not exhausted by their being reducible to an observational evidence base. However, exactly this was implied by the Vienna Circle’s verificationist criterion of meaning. More exactly speaking, this criterion demanded that the whole realm of ‘indirect’ theoretical statements (containing theoretical terms) be converted into ‘direct’ observation statements (or so-called protocol-sentences).7 That is, according to the verificationist account, the truth of scientific statements was definitely decidable by their being exhaustively reducible to observational facts.

Reichenbach, on the other hand, rejected this “truth theory of meaning” (as he called it) in favor of his own “probability theory of meaning.” In his own words: “The truth theory of meaning [. . .] has to be abandoned and to be replaced by the probability theory of meaning” (Reichenbach 1938: 53). Reichenbach’s central argument for this claim was that indirect statements – theoretical statements and statements about the future – do not have the form of an equivalence. According to the criticized positivistic doctrine, an indirect statement A is equivalent to a concatenation of direct statements [a1, a2, . . ., an]. Reichenbach calls this method of determining the meaning of indirect statements the “principle of retrogression” (1938: 49) and comes to the conclusion that this principle “does not survive more rigorous criticism” (50). For, in Reichenbach’s view, there is no logical implication from direct statements to the corresponding indirect statement. What we have is only what Reichenbach calls a “probability implication” (51), and it is for this reason that he points out:

It is not possible to maintain the postulate of strict verifiability for indirect sentences; sentences of this kind are not strictly verifiable because they are not equivalent to a finite class of direct sentences. The principle of retrogression does not hold because the inference from the premises to the indirect sentence is not a tautological transformation but a probability inference. We are forced, therefore, to make a decision: either to renounce indirect sentences and consider them as meaningless or to renounce absolute verifiability as the criterion of meaning. (53)
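The logical point can be put schematically as follows. (This is a modern reconstruction for illustration only, not Reichenbach’s own notation; he has a special symbol for probability implication.) The positivist ‘principle of retrogression’ treats an indirect statement A as strictly equivalent to a finite class of direct statements, whereas Reichenbach allows at best a probability implication from the direct statements to A:

```latex
% Positivist "principle of retrogression": an indirect statement A is
% strictly equivalent to a finite class of direct statements
\[ A \;\leftrightarrow\; (a_1 \wedge a_2 \wedge \dots \wedge a_n) \]

% Reichenbach's alternative: at best a probability implication from the
% direct statements to A, so A carries a surplus meaning not exhausted
% by the direct evidence (the "p" is schematic, not Reichenbach's symbol)
\[ (a_1 \wedge a_2 \wedge \dots \wedge a_n) \;\overset{p}{\Rightarrow}\; A, \qquad 0 < p < 1 \]
```

On the first reading an indirect sentence is conclusively verifiable in principle; on the second it is only probabilistically confirmable – which is exactly why Reichenbach must abandon strict verifiability as the criterion of meaning.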

Reichenbach’s decision is clear: he opts for the renouncement of the verifiability criterion and – consequently – ends up with a certain form of scientific realism.

Reichenbach’s position is a scientific realist position insofar as he invests theoretical terms like ‘electron’ with an autonomous dimension of explanatory import, which, Reichenbach maintained, could be elegantly captured by a probabilistic theory of inductive inference. More precisely, Reichenbach – in the context of his famous ‘cubical world’ analogy (see Reichenbach 1938: §14 and pp. 212–225) – made clear that the existence of theoretical (‘unobservable’) entities can be inferred inductively by searching for the causes of (regularly occurring) observable effects (like, for example, the tracks in a Wilson cloud chamber). The inferred entities, which Reichenbach called “illata” (see ibid.: 212), had the status of independently existing things, and their relation to immediately observable entities – Reichenbach called them “concreta” (ibid.) – was that of a “probability connection.” Or, as Reichenbach explained by referring to the example of atoms:

Since all observable qualities of the macroscopic bodies are only averages of qualities of the atoms, there are no strict inferences from the macroscopic bodies to the atom but only probability inferences; we have, therefore, no equivalence between statements about the macroscopic body and statements about the atoms but only a probability connection. (216)

All of this suggests a strong commitment to the scientific realist agenda. Relying on the specification of the basic scientific realist theses provided, for example, by Stathis Psillos (1999: xix–xxi), Reichenbach’s position might be summarized as follows: on the ontological level, the independent existence of theoretical entities (such as atoms) is assumed; on the semantic level, we have a theory of meaning for theoretical terms, namely the probability theory of meaning; on the epistemological level, it is assumed that theoretical entities (and their causal properties) are inductively accessible. In short, Reichenbach endorsed all of the central features of modern scientific realism.8

However, Reichenbach’s conception gave rise to strong objections. Ernest Nagel, for example, argued that by invoking a frequency interpretation of probability (see Reichenbach 1938: §§32 and 38), Reichenbach remains entirely within the field of directly observable phenomena (see Nagel 1938: 271). That is to say that the (abductive) inference to Reichenbach’s “illata” was by no means captured by his account of probability. So, the step from the observable to the theoretical was still in need of justification. Or, as Psillos has put it, Reichenbach’s probability theory of meaning “requires the realist framework and cannot be a proof of it” (Psillos 2011a: 37).


3.2 Feigl’s semantic/pragmatic realism

This was exactly the point at which Herbert Feigl entered the stage.9 Feigl had already given a talk on “Sense and Nonsense in Scientific Realism” at the Congrès international de philosophie scientifique in 1935 in Paris (see Feigl 1936). His mature position concerning realism is to be found in his influential 1950 paper “Existential Hypotheses.” There, Feigl, like Reichenbach, intended to provide an affirmative (or constructive) treatment of the realist idea. Furthermore, Feigl, again like Reichenbach, based his argumentation on semantics. By taking semantics seriously, he maintained, “[t]he glib and easy dismissal of the issue as a pseudo-problem will no longer do” (Feigl 1950a: 36). However, in arguing for what he called “semantic realism” (ibid.: 50), Feigl saw himself as arguing against the Reichenbachian variant of realism, namely “probabilistic realism” (52). Like Nagel, Feigl rejected Reichenbach’s frequentist interpretation of probability. More generally, he repudiated the entire probabilistic approach. According to Feigl, realism, and especially scientific realism with its “existential hypotheses” concerning theoretical entities, could not be justified inductively. Quite the other way round:

Instead of justifying the surplus meaning of existential hypotheses and hypothetical constructs (Reichenbach’s “illata”) by means of inductive probability, I suggest that we justify the conceptual frame of the realistic language by its entailed consequence; viz. by showing that only within such a frame it makes sense to assign probabilities to existential hypotheses. (54)

Thus, in Feigl’s view, we first have to establish the realist framework, and then we are in a position to raise questions about the probability of specific existential hypotheses concerning theoretical entities. Or, as he argued elsewhere:

The customary probabilistic realism in trying to justify “transcendent” hypotheses on the basis of experimental findings has put the cart before the horse. Only after the introduction of the realistic frame can we legitimately argue inductively either from the theory to the outcome of as yet unperformed experiments; or vice versa from the results of experiments to specific postulates of the theory. (Feigl 1950b: 195)

So, according to Feigl, it is “the presupposed introduction of the realistic frame, i.e. the semantic-realistic interpretation of the theory,” which “furnishes the very possibility of a theory that is inductively fruitful” (ibid.).

But how, then, can the adoption of the realist framework itself be motivated? As Psillos has correctly observed, answering this question from Feigl’s perspective “is, ultimately, a matter of convention” (Psillos 2011b: 308). That is, for Feigl, realism is dependent on a foregoing conventional decision. Consequently, realism cannot be justified naturalistically but only in a quasi-transcendental manner. By ‘quasi-transcendental’ I mean the assumption that we cannot directly refer to theoretical (unobservable) entities but first have to reflect on the ‘conditions of the possibility’ of drawing inductive-probabilistic inferences. In Feigl’s case, this assumption is only quasi-transcendental because it is related not to (cognitively relevant) statements in the sense of Kant but only to (action- and decision-relevant) conventions in the sense of Poincaré. It is ‘regulative’ rather than ‘constitutive’ in the original Kantian sense. As such, it serves as the basis of an essentially pragmatic argument. Thus, in his 1950 reply to critiques brought forward by his logical empiricist fellows Philipp Frank, Carl Gustav Hempel, and Ernest Nagel, Feigl stated clearly:

The various arguments that I adduced against [. . .] syntactical positivism and in favor of a semantic (or perhaps, as I had better call it, ‘pragmatic’) realism simply amount to the claim that when we fully and justly explicate the way in which we use the language of science (or homologously, the language of common sense) we cannot do without a set of designata that are in principle beyond the reach of direct experience. (Feigl 1950b: 191)

Feigl’s spelled-out solution to the realism problem is indeed, as he himself correctly remarks, rather more pragmatic than semantic. For, in the final analysis, he ends up with matters of methodological fruitfulness:

The introduction of new basic and irreducible concepts (as, for example, in electromagnetics during the last century) may be reconstructed as an expansion of the empirical language. Only after our language has thus been enriched, can we significantly assign probabilities (degrees of confirmation) to specific predictive or explanatory hypotheses. The step of expansion of language cannot itself be justified on grounds of probability, except perhaps in the sophisticated pragmatic sense of the question: Will this expansion be methodologically fruitful? (Feigl 1950a: 57)

After all, it is the insight regarding the “need for definitional or conventional stipulation” (ibid.: 54) by which, according to Feigl, the realist enterprise is motivated in the first place. The ‘condition of the possibility’ of the realist program lies outside the reach of the realist program itself. Or, as Psillos has aptly put it, for Feigl there is “no ultimate argument for the adoption of the realist framework” (Psillos 2011b: 303).10 Ontic questions are pragmatic “framework-questions” (ibid.), and framework-questions must be decided by convention before any specific existential hypothesis concerning theoretical entities can be evaluated.11

13 Matthias Neuber theoretical account of scientific method and scientific knowledge” (Hempel 1950: 173). In fact, also at a larger scale, Feigl’s plea for a scientific realist position did not come to fruition. The ‘pragmatic argument’ he offered could hardly convince the philosophy of science community.12 Hempel’s critique of Feigl’s interpretation of the status of theoretical concepts formed the point of departure of another version of ‘semantic’ realism. In his guiding paper “The Theoreti- cian’s Dilemma: A Study in the Logic of Theory Construction,” first published in 1958, Hempel offered what might be called the ‘indispensability argument’ for realism (see Neuber 2015: 27). However, as I can only indicate here, Hempel remained, despite his own contention, within the realm of language. Like Feigl, he finally had to qualify his ‘realism’ by the empiricist restriction that theoretical statements – and with them theoretical concepts – must be essentially tied to the foundation of the respective observational evidence base and thus to ‘direct’ observation sentences.13

3.3 Kaila’s invariantism

So it appears as if the various logical empiricist attempts to stake out a realist approach to science were indeed doomed to failure. However, there remains the view defended by the Finnish logical empiricist Eino Kaila.14 According to Kaila – who had been in close contact both with Reichenbach and with the members of the Vienna Circle since the mid- and late 1920s (see Manninen 2012) – the realism issue is definitely not a problem of language. In his view, the problems of philosophy concern, ultimately, scientifically described reality rather than (the quasi-transcendental) questions of ‘language engineering.’ Thus, as early as 1930, in his Logistic Neopositivism (a critique particularly of Carnap’s Aufbau), Kaila declared that “the ‘realist language’ of science is actually far more than a mere manner of speaking: it is the expression of the living soul of science” (Kaila [1930] 1979: 4). To be sure, this could be interpreted as the articulation of (a certain variant of) ‘bad metaphysics.’ However, what Kaila intends to clarify is that mere reflection on the language of science is not enough to account for the empirical content of scientific theories. In other words, according to Kaila it is impossible – or, better, irresponsible – to ignore the ‘material mode of speaking’ (inhaltliche Redeweise) and to restrict philosophical analysis to the ‘formal mode of speaking’ (formale Redeweise). This – essentially Carnapian – strategy would, in Kaila’s view, not be empiricist at all. It would rather amount to an empirically empty (almost ‘scholastic’) formalism. On the other hand, Kaila does not go so far as to embrace the idiom of speculative metaphysics. As he stresses in his recently translated Inhimillinen tieto (Human Knowledge, first published in Finnish in 1939), “the assumption that ‘behind’ experience there is another, intellectually more perfect world” (Kaila [1939] 2014: 29) is not empirically confirmed and hopelessly “imprecise” (ibid.).15 Thus, it is empirical scientific realism rather than metaphysical realism that drives Kaila’s point of view. Or, as Ilkka Niiniluoto put it:

Kaila had high respect for the exact philosophical method of the Vienna Circle. Therefore, he strived for a careful formulation of the realism issue, one that would satisfy the critical demands of the new logical empiricism. But it was clear that Kaila – the philosopher of nature who wanted to solve the riddle of reality – could not follow the “linguistic turn” of Analytical Philosophy: For him the deepest problems of philosophy concern reality rather than language. (Niiniluoto 1992: 103)

As Kaila stressed in the preface to his book on human knowledge, for him “the logical empiricist conception of knowledge is the culmination of two and a half millennia of development in human ideas” ([1939] 2014: xxvi). Exaggerated or not, it is important to see that this assessment referred to a specific understanding of the logical empiricist project. This specific understanding was focused on two principles: the principle of testability (see esp. Kaila [1936] 1979: 62–63) and the principle of invariance (see esp. Kaila [1941] 1979: 149–161). While Kaila characterized the first principle as the “principle of logical empiricism” ([1936] 1979: 62), the second principle served, as it were, as his criterion of reality. Accordingly, Kaila’s point of view should be conceived of as a third variety of ‘realist claims in logical empiricism.’ As such, it can best be characterized as being based on an ‘invariantist’ – directly science-related (and thus not metaphysically motivated) – argument. Hence it is, I claim, appropriate (and justified) to see Kaila as the most explicit proponent of a realistically inspired variant of logical empiricism.

So let us first briefly consider Kaila’s principle of testability. As he points out in his monograph On the Concept of Reality in Physical Science: Second Contribution to Logical Empiricism (1941), it is “measurement statements” by which the theoretical hypotheses of physics are empirically tested. Kaila writes,

[T]he principle of physical testability, which defines empirical statements as ‘physical,’ states that the real content of any physical statement [. . .] consists in the set of measurement statements which are derivable from the statement (in connection with given data). A statement which does not have any such real content is by definition not a physical statement. This principle is implied by the requirement that the singular empirical statements of physics (the basic statements) be exclusively measurement statements. (Kaila [1941] 1979: 184)

Kaila had in his earlier writings demanded that theoretical statements be translatable into the language of observation (see Niiniluoto 2012: 79–80), but now he was content with their being testable through measurement: “[T]he assumption of translatability is not necessary [. . .]; testability would suffice” (Kaila [1941] 1979: 143). Moreover, Kaila clearly saw the need for idealization in scientific theory construction and therefore contended that “no theory is decidable, verifiable or falsifiable, in the strict sense; there is decidability only in a certain ‘relaxed’ or ‘weakened’ sense; this, however, is testability” (ibid.: 162).

Kaila’s second principle, the principle of invariance, may be portrayed as the very core of his entire philosophical conception (see von Wright 1992: 80–81). In a nutshell, this principle implies that whenever we talk about (both scientific and everyday) reality, we refer to ‘invariances.’ “There is knowledge only,” Kaila maintains, “when some similarity, sameness, uniformity, analogy, in brief, some ‘invariance’ is found and given a name. In knowledge, we are always concerned with ‘invariances’ alone” (Kaila [1941] 1979: 131). As Kaila further points out, the discovery of invariances always goes along with the establishment of a certain structural identity (or isomorphism). In his own words:

[I]f one succeeds, e.g., in giving for some domain an account which is in some sense ‘unified,’ then we have the discovery of an ‘invariance’; some characteristic or other of higher or lower conceptual level will then have been shown to be invariant with respect to a permutation of the places of the domain. Likewise, e.g., in any formal analogy, structural identity, isomorphism between two different domains, there is also some logically or mathematically definable ‘structure,’ e.g., an equation, that is invariant with respect to the interchange of these domains. ([1941] 1979: 151)
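Kaila’s notion of structural identity admits of a compact formal gloss. The following is a minimal sketch in standard model-theoretic notation (my illustration, not Kaila’s own formalism):

\[
\langle A, R\rangle \cong \langle B, S\rangle
\iff
\text{there is a bijection } f\colon A \to B \text{ such that } R(x,y) \leftrightarrow S\bigl(f(x), f(y)\bigr) \text{ for all } x, y \in A.
\]

On this reading, an ‘invariance’ is whatever is preserved under all such structure-preserving mappings: the shared relational pattern rather than the intrinsic nature of the relata.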


All of this amounts to a ‘structural realist’ account of science and scientific theory construction. According to Kaila, it is invariant structures that are captured and described by our best-corroborated theories of physical reality. One can even go as far as to say that, for Kaila, physical reality is nothing but invariant structures. “The ‘real,’” Kaila declares, “is what is in some respect (relatively) invariant” ([1941] 1979: 185). It is relatively invariant because, in Kaila’s view, we have – according to the respective degree of invariance – different layers of reality. Thus Kaila provides us with some sort of ontological hierarchy which extends from perceptual reality to (thing-like) everyday reality and eventually to what he calls ‘physico-scientific reality’ (see Kaila [1941] 1979: 185; see further Kaila [1936] 1979: esp. chs. IV and V).16

So, Kaila’s invariantism delivers an independent, non-linguistic argument for a scientific realist articulation of the logical empiricist program. According to him, invariance is not inherent in our language but an immanent feature of physical reality (of which our language systems are a part). Conceived in that way, Kaila’s invariantist alternative implies that “physical and scientific objects are objective, independent of us and our perceptions” (Niiniluoto 1992: 113).17 Yet this does not imply that Kaila fell back on the idiom of speculative metaphysics. To be sure, the ‘best’ invariances we have in science (such as Noether’s theorem in classical mathematical physics, comparable invariance theorems in quantum mechanics, etc.) are formulated in mathematical languages (governed by certain logical and non-logical rules). Therefore Kaila’s realist, non-linguistic interpretation of the invariance concept might appear to be too far-reaching in terms of ‘ontological commitment’ (at least from an empiricist point of view). However, in order to account for the empirical content of the invariance concept (and its various applications), the realist interpretation is the only plausible way to go, since invariances cannot be directly observed.

How, then, are the principle of invariance and the principle of testability tied to each other? Kaila’s answer to this question is given within his theory of measurement. According to this (thoroughly anti-conventionalist) theory, it is metrical relations that are the subject of the application of the principle of testability. Metrical relations, in turn, are the building blocks of Kaila’s invariantist ontology. They are what, in the first place, render measurement possible and, thus, are to be seen as “elementary facts which must be present independently of measurement” (Kaila [1941] 1979: 200; emphasis added). Thus, according to Kaila, the (physically) real is what can be measured, and what can be measured are invariant systems of relations, namely structures. Consequently, his invariantist approach comes very close to (moderate ‘ontic’) structural realism.18 The following passage from his book on human knowledge might finally serve to corroborate this claim:

Kant argued that knowledge pertains to appearances only and not to ‘things-in-themselves.’ And yet he clearly thought that there is an isomorphic relation between appearances and things-in-themselves. That is to say, appearances are representations of things-in-themselves; they share a structure, although, according to Kant, that structure is realized in material that is completely different in the two cases. We can see therefore that it is wrong to say that we know nothing of things-in-themselves: after all, we do know their structure. And if the extreme view turned out to be correct that our knowledge is in the last analysis just a matter of mere representation, we would have to say that we know just as much about things-in-themselves as we do about appearances. (Kaila [1939] 2014: 14)


4 The logical empiricist heritage

As we have seen from our selected review of realist tendencies within the logical empiricist tradition, the interrelations between the realist and logical empiricist currents are more complex and intricate than is commonly supposed. To be sure, the views of Carnap and the later (Viennese) Schlick come fairly close to the received view of logical empiricism as a non- or, rather, anti-realist position. However, Schlick’s and Carnap’s primary target was metaphysical (rather than empirical) realism. Moreover, the subsequent development led to articulations of the logical empiricist program that all attempted an integration of realist elements, particularly concerning science. Reichenbach’s probabilistic realism, Feigl’s semantic-pragmatic realism, and Kaila’s invariantism all contributed to an understanding of the logical empiricist program that (pace Uebel)19 laid the basis for a systematic demarcation from earlier logical positivism. It was against this background that Reichenbach’s student Hilary Putnam once declared: “It is time to deal seriously with logical empiricism as a movement and as a critical phase in the history of our own tradition, and to put to rest what may with justice be called ‘The Myth of Logical Positivism’” (Putnam 1994: 129). As is well known, Putnam himself became one of the most influential representatives of realism. His descent from the logical empiricist tradition should not be ignored. More generally speaking, logical empiricism must be appreciated as a movement that pioneered the realist tendencies of the second half of the 20th century, particularly within philosophy of science. Its impact on our current understanding of the debate over scientific realism is worth exploring in more detail.

Notes
1 More specifically, the logical empiricists sought a physicalistic reconstruction of the language of science. See, in this connection, especially Neurath (1931) and Carnap (1932).
2 For a good overview of Carnap’s critique of Heidegger, see Gabriel (2009).
3 It should be noted that recent attempts to establish an ‘analytic’ metaphysics (Sider, Chalmers, Lowe, etc.) would clearly fall victim to the logical empiricist critique because of their non-verifiability and lack of scientific standing. For a comprehensive refutation of analytic metaphysics (and a corresponding plea for a return to certain logical empiricist ideas), see Ladyman and Ross (2007).
4 For the relevant details concerning the Berlin Group, see Milkov and Peckhaus (2013).
5 For a systematic critique of this (programmatically intended) demarcation, see Uebel (2013).
6 For the following, see also Neuber (2015: 29–31).
7 See, in this connection, especially Schlick (1934). For a solid historical reconstruction, see Uebel (2007).
8 For a more extended reconstruction of Reichenbach’s realism, see Salmon (1999a) and Psillos (2011a).
9 For the following, see also the detailed discussion in Neuber (2011); see further Psillos (2011b).
10 Psillos is alluding here to Bas van Fraassen’s critique of the so-called no-miracles argument, as it can be found in the writings of Richard Boyd and Hilary Putnam. Van Fraassen characterizes this argument as “the Ultimate Argument” (van Fraassen 1980: 39) for scientific realism and rejects it in favor of his own ‘constructive empiricism.’
11 It should be noted that the pragmatic element in Feigl’s theory is reinforced by his conception of ‘pure pragmatics,’ which he took over from Wilfrid Sellars. For the details of this conception, see Neuber (2017).
12 For the details of this diagnosis, see Neuber (2011); for a recent attempt at a vindication of Feigl’s pragmatic argument, see Psillos (2011b).
13 For a more extended discussion of Hempel’s point of view, see Salmon (1999b) and Neuber (2015: 33–35).
14 For the following, see also Neuber (2015: 35–38). For an overview of logical empiricism in the Nordic countries, see Manninen and Stadler (2008). For an overview of Kaila’s philosophical views, in particular, see Niiniluoto, Sintonen, and von Wright (1992) and Niiniluoto and Pihlström (2012).
15 As we have seen before, this was exactly Schlick’s critique of Planck’s metaphysical realism in Schlick (1932).


16 For a similar, more recent account, see Nozick (2001). According to Nozick, “one property is more objective than another when the first is invariant under a wider range of (admissible) transformations than the second is. [. . .] Invariance thus gives rise to a partial ordering of things in terms of how objective they are” (Nozick 2001: 87). The roots of this view may be traced back to Cassirer (1910).
17 For a similar account, see – again – Nozick, according to whom “[w]e can say that intersubjective agreement may be our route to discovering objectiveness, and so in that sense be epistemically prior to objectiveness, but that nevertheless objectiveness (that is, invariance under all admissible transformations) explains the intersubjective agreement and so is ontologically prior to it” (2001: 91).
18 For an overview of structural realism and ‘ontic’ structural realism, in particular, see Ladyman (1998) and Ainsworth (2010).
19 See footnote 5.

References
Ainsworth, P. (2010) “What Is Ontic Structural Realism?” Studies in History and Philosophy of Science 41, 50–57.
Carnap, R. ([1928a] 1968) Pseudoproblems in Philosophy (R. A. George, trans.), London: Routledge.
——— ([1928b] 1968) The Logical Structure of the World (R. A. George, trans.), London: Routledge.
——— (1932) “Die physikalische Sprache als Universalsprache der Wissenschaft,” Erkenntnis 2, 432–465.
——— ([1950] 1956) “Empiricism, Semantics, and Ontology,” in R. Carnap, Meaning and Necessity (2nd ed. with supplementary essays), Chicago: University of Chicago Press, pp. 205–221.
Cassirer, E. (1910) Substanzbegriff und Funktionsbegriff, Berlin: Verlag von Bruno Cassirer.
Feigl, H. (1936) “Sense and Non-Sense in Scientific Realism,” in Langage et pseudo-problèmes (Actes du Congrès international de philosophie scientifique), vol. 3, Paris: Hermann, pp. 50–56.
——— (1950a) “Existential Hypotheses,” Philosophy of Science 17, 35–62.
——— (1950b) “Logical Reconstruction, Realism and Pure Semiotic,” Philosophy of Science 17, 185–195.
Fraassen, B. van (1980) The Scientific Image, Oxford: Oxford University Press.
Gabriel, G. (2009) “Carnap’s Elimination of Metaphysics through Logical Analysis of Language,” Linguistic and Philosophical Investigations 8, 53–70.
Hempel, C. G. (1950) “A Note on Semantic Realism,” Philosophy of Science 17, 169–173.
——— (1958) “The Theoretician’s Dilemma: A Study in the Logic of Theory Construction,” in H. Feigl et al. (eds.), Minnesota Studies in the Philosophy of Science, vol. 2, Minneapolis: University of Minnesota Press, pp. 37–98.
Kaila, E. ([1930] 1979) “Logistic Neopositivism: A Critical Study,” in R. S. Cohen (ed.), Eino Kaila: Reality and Experience: Four Philosophical Essays, Dordrecht: Reidel, pp. 1–58.
——— ([1936] 1979) “On the System of the Concepts of Reality: A Contribution to Logical Empiricism,” in R. S. Cohen (ed.), Eino Kaila: Reality and Experience: Four Philosophical Essays, Dordrecht: Reidel, pp. 59–125.
——— ([1939] 2014) Human Knowledge: A Classic Statement of Logical Empiricism (A. Korhonen, trans.), Chicago and LaSalle: Open Court.
——— ([1941] 1979) “On the Concept of Reality in Physical Science: Second Contribution to Logical Empiricism,” in R. S. Cohen (ed.), Eino Kaila: Reality and Experience: Four Philosophical Essays, Dordrecht: Reidel, pp. 126–258.
Ladyman, J. (1998) “What Is Structural Realism?” Studies in History and Philosophy of Science 29, 409–424.
Ladyman, J. and Ross, D. (2007) Every Thing Must Go: Metaphysics Naturalized, Oxford: Oxford University Press.
Manninen, J. (2012) “Eino Kaila in Carnap’s Circle,” in I. Niiniluoto et al. (eds.), Reappraisals of Eino Kaila’s Philosophy, Helsinki: Societas Philosophica Fennica, pp. 9–52.
Manninen, J. and Stadler, F. (2008) The Vienna Circle in the Nordic Countries, Heidelberg, New York, Dordrecht, London: Springer.
Milkov, N. and Peckhaus, V. (2013) The Berlin Group and the Philosophy of Logical Empiricism, Heidelberg, New York, Dordrecht, London: Springer.
Nagel, E. (1938) “Review of Reichenbach’s Experience and Prediction,” The Journal of Philosophy 35, 270–272.
Neuber, M. (2011) “Feigl’s ‘Scientific Realism’,” Philosophy of Science 78, 165–183.
——— (2015) “Realistic Claims in Logical Empiricism,” in U. Mäki et al. (eds.), Recent Developments in the Philosophy of Science: EPSA 13 Helsinki (European Studies in Philosophy of Science 1), Heidelberg, New York, Dordrecht, London: Springer, pp. 27–41.


——— (2017) “Feigl, Sellars, and the Idea of a ‘Pure Pragmatics’,” in N. Weidtmann et al. (eds.), Logical Empiricism and Pragmatism, Dordrecht, Heidelberg, New York, London: Springer, pp. 125–138.
Neurath, O. (1931) “Protokollsätze,” Erkenntnis 2, 204–214.
Niiniluoto, I. (1992) “Eino Kaila and Scientific Realism,” in I. Niiniluoto et al. (eds.), Eino Kaila and Logical Empiricism, Helsinki: Societas Philosophica Fennica, pp. 102–116.
——— (2012) “Eino Kaila’s Critique of Metaphysics,” in I. Niiniluoto and S. Pihlström (eds.), Reappraisals of Eino Kaila’s Philosophy, Helsinki: Societas Philosophica Fennica, pp. 71–89.
Niiniluoto, I. and Pihlström, S. (eds.) (2012) Reappraisals of Eino Kaila’s Philosophy, Helsinki: Societas Philosophica Fennica.
Niiniluoto, I., Sintonen, M. and von Wright, G. H. (eds.) (1992) Eino Kaila and Logical Empiricism, Helsinki: Societas Philosophica Fennica.
Nozick, R. (2001) Invariances, Cambridge and London: Harvard University Press.
Planck, M. ([1931] 1932) “Is the External World Real?” in M. Planck, Where Is Science Going?, New York: Norton & Company, pp. 64–83.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
——— (2011a) “On Reichenbach’s Argument for Scientific Realism,” Synthese 181, 23–40.
——— (2011b) “Choosing the Realist Framework,” Synthese 180, 301–316.
Putnam, H. (1994) Words and Life, Cambridge and London: Harvard University Press.
Reichenbach, H. (1931) “Review of Carnap’s Der logische Aufbau der Welt,” Kant-Studien 38, 199–201.
——— (1936) “Logistic Empiricism and the Present State of Its Problems,” The Journal of Philosophy 33, 141–160.
——— (1938) Experience and Prediction: An Analysis of the Foundations and Structure of Knowledge, Chicago: University of Chicago Press.
Salmon, W. (1999a) “Ornithology in a Cubical World: Reichenbach on Scientific Realism,” in D. Greenberger et al. (eds.), Epistemological and Experimental Perspectives on Quantum Physics, Dordrecht: Kluwer, pp. 305–315.
——— (1999b) “The Spirit of Logical Empiricism: Carl G. Hempel’s Role in Twentieth-Century Philosophy of Science,” Philosophy of Science 66, 333–350.
Schlick, M. ([1926] 1979) “Experience, Cognition, Metaphysics,” in H. L. Mulder et al. (eds.), Moritz Schlick: Philosophical Papers (Vol. 2), Dordrecht: Reidel, pp. 99–111.
——— ([1931] 2008) “The Future of Philosophy,” in H. Rutte et al. (eds.), Moritz Schlick Gesamtausgabe (Vol. 6), Wien and New York: Springer, pp. 291–303.
——— ([1932] 1979) “Positivism and Realism,” in H. L. Mulder et al. (eds.), Moritz Schlick: Philosophical Papers (Vol. 2), Dordrecht: Reidel, pp. 259–284.
——— (1934) “Über das Fundament der Erkenntnis,” Erkenntnis 4, 79–99.
Uebel, T. (2007) Empiricism at the Crossroads: The Vienna Circle’s Protocol-Sentence Debate, Chicago and LaSalle: Open Court.
——— (2013) “‘Logical Positivism’–‘Logical Empiricism’: What’s in a Name?” Perspectives on Science 21, 141–160.
Wright, G. H. von (1992) “Eino Kaila’s Monism,” in I. Niiniluoto et al. (eds.), Eino Kaila and Logical Empiricism, Helsinki: Societas Philosophica Fennica, pp. 71–91.

2 THE REALIST TURN IN THE PHILOSOPHY OF SCIENCE

Stathis Psillos

1 Introduction

The ‘realist turn’ in the philosophy of science occurred in the 1970s and marked a shift from empiricist views concerning scientific theories and their relation to the world to realist ones. It was associated with what came to be known as the explanationist defense of realism, namely, the strategy of showing that the basic realist tenets offer the best explanation of the empirical and predictive successes of scientific theories. It was motivated by a move from verification and issues in semantics (how do theoretical terms get their meaning?) to abduction (aka inference to the best explanation) and issues in epistemology (do we have reasons to take scientific theories, literally understood, as truthlike?). Realism initiated an era of epistemic optimism: science is in the truth business. Soon enough, however, this optimistic stance was challenged by rival views which aimed to show that, even after the collapse of instrumentalism, realism is not the only game in town concerning science (this was the key objective of van Fraassen’s [1980] constructive empiricism); or that realism is at odds with the history of science and, in particular, with a track record of false and abandoned, but otherwise empirically successful, theories (this was Mary Hesse’s and Larry Laudan’s historical induction). In reply to the historical challenge, realists became more selective in what they are realists about.

This chapter offers a narrative of the basic twists and turns of the realism debate after the realist turn. I will start with what preceded and initiated the turn, namely, instrumentalist construals of scientific theories. I will then move on to discuss the basic lines of development of the realist stance to science, focusing on one of its main challenges: the historical challenge.

2 Semantic realism

The current phase of the scientific realism debate – what I call the epistemic phase – started in the middle 1960s and was based on an important consensus, namely, semantic realism. This is the view that the vocabulary of scientific theories should be treated in a uniform way on the basis of standard referential semantics. In the early 1950s, the dominant empiricist view was that theories are only partially interpreted, meaning that they are divided into two parts, one theoretical and one observational, such that only their observational part (expressed by means of a theory-free observational vocabulary) is fully interpreted, while the theoretical part is only partially meaningful on the basis of its deductive relations to the observational part. Being only partially interpreted, theoretical terms were not taken to be putatively referring to anything in the world, and concomitant theoretical assertions were not taken to have truth-conditions. At best, t-terms and t-assertions (that is, the core of theories qua theories) were taken to be ways to systematize deductively a set of observational assertions, which could be independently testable by means of direct observations.1

This empiricist approach to the semantics of theories started to crumble when Herbert Feigl (1950) argued that in thinking about the content of theories, we should separate the issue of what makes a theory true (if it is true) from the issue of the evidence for its truth. A scientific theory can then have truth-conditions which make essential reference to unobservable entities and their properties and relations even if the evidence for these truth-conditions is, by and large, observational. What empiricists had come to call the ‘excess content’ of theoretical discourse is captured by the fact that t-discourse is about unobservable entities.

If semantic realism is taken for granted, it seems that the question of realism takes care of itself. Theories cannot be proved to be true; nonetheless, they can be confirmed by empirical evidence (as both realists and empiricists agree). Given semantic realism, if scientific theories are well confirmed, there are reasons to believe in the reality of the theoretical entities they posit. To hold a theory as well confirmed is to accept that the entities posited by the theory are part of the furniture of the world. This kind of view was captured very eloquently by Wilfrid Sellars when he said (1963: 97), “To have a good reason for holding a theory is ipso facto to have good reasons for holding that the entities postulated by the theory exist.”

3 For instrumentalism

But in the middle 1950s an argument became available to the effect that t-terms are dispensable. If this were true, semantic realism would become irrelevant. The very idea that theories purport to describe the world as it is in its unobservable parts – a central realist intuition about science – would become a non-starter. The argument was based on a theorem proved by the logician William Craig, the philosophical application of which led to the statement that came to be known as Craig’s theorem: for any scientific theory T, T is replaceable by another (axiomatisable) theory Craig(T), consisting of all and only the theorems of T which are formulated in terms of the observational vocabulary VO (Craig 1956). The gist of Craig’s theorem is that a theory is a conservative extension of the deductive systematization of its observational consequences. This theorem was taken to capture the canonical form of instrumentalism.

Though it is hard to find philosophers who explicitly characterized themselves as instrumentalists,2 Craig’s theorem offered a boost to instrumentalism – the view that theories should be seen as (useful) instruments for the organization, classification and prediction of observable phenomena; hence that the ‘cash value’ of scientific theories is fully captured by what theories say about the observable world. Craig’s theorem was taken to show that the whole body of theoretical commitments in science – those expressed by the theoretical vocabulary – was dispensable, since theoretical terms could be eliminated en bloc, without loss in the deductive connections among the observable consequences of the theory.

At roughly the same time, Carnap (1958) re-invented the so-called Ramsey-sentence. The idea goes back to Frank Ramsey (1929): the content of a theory is captured by a single existential statement, in which the theoretical predicates are replaced by bound (second-order existential) quantifiers. The Ramsey-sentence RT that replaces theory T has exactly the same observational consequences as T; it can play the same role as T in reasoning; it is truth-evaluable if there are entities that satisfy it; but since it dispenses altogether with theoretical vocabulary and refers to whatever entities satisfy it only by means of quantifiers, it was taken to remove the issue of the reference of theoretical terms/predicates. Hence, it was taken to present a neutral ground between realism and instrumentalism. Carnap enthusiastically jumped on this idea, since he thought he could deflate the debate between realism and instrumentalism as being merely about a choice of language. At the same time, he thought he could secure the proper empirical content of theories without commitment to physical unobservable entities. What is more, as Carnap was the first to note, the very theory T can be written down as a conjunction of two parts: the Ramsey-sentence RT of T and the conditional RT → T, which came to be known as a Carnap-sentence and was taken to be a meaning postulate with no empirical content.

By the end of the 1950s, then, the semantic realist project seemed to be short-lived. The realist conception of theories had to sail between the Scylla of Craig’s theorem and the Charybdis of Ramsey-sentences. The argument that theoretical discourse possesses excess content over observational discourse and is putatively referential came under severe pressure, since by either Craig’s theorem or Ramsey-sentences the theoretical vocabulary was rendered dispensable without loss of (empirical) content. Carl Hempel (1958) expressed this pessimistic sentiment in the form of ‘the theoretician’s dilemma’. If the theoretical terms and principles of a theory do not serve their purpose of a deductive systematization of the empirical consequences of the theory, they are dispensable. But given Craig’s theorem (and Ramsey-sentences), even if they do serve their purpose, they can be dispensed with. Hence, the theoretical terms and principles of any theory are dispensable.
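The two eliminative devices can be stated schematically. The following is a sketch in modern notation (my rendering, not Craig’s or Carnap’s own), assuming a finitely axiomatized theory T whose theoretical terms are t1, . . ., tn and whose observational vocabulary is VO:

\[
\mathrm{Craig}(T)\ \text{axiomatizes}\ \{\varphi \in L(V_O) : T \vdash \varphi\},
\qquad
R_T = \exists X_1 \cdots \exists X_n\, T(X_1, \ldots, X_n),
\qquad
T \equiv R_T \wedge (R_T \rightarrow T).
\]

Craig’s theorem guarantees that the set of purely observational theorems of T is recursively axiomatizable; the Ramsey-sentence RT results from replacing each theoretical term ti by a bound second-order variable Xi; and the Carnap-sentence RT → T recovers T from RT as a meaning postulate with no empirical content of its own.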

4 Negative and positive arguments for realism

An otherwise plausible defense of semantic realism (via meaning holism and the denial of the distinction between theory-based and observation-based vocabulary) was turning against another important realist intuition: namely, that subsequent theories, as a rule, do better than their predecessors in representing the world. This is the context in which the first thorough defense of realism takes place in the work of Hilary Putnam.

4.1 Against instrumentalism

In his writings in the 1960s, Putnam aimed to motivate and defend realism first by arguing systematically against instrumentalist approaches to scientific theories. Two of his arguments stand out. The first relates to Craig’s theorem–based instrumentalism.

Putnam (1965) mounted a formidable attack on the philosophical significance of Craig’s theorem, arguing that (a) theoretical terms are meaningful, taking their meaning from the theories in which they feature, and (b) scientists employ terms like ‘electron’, ‘virus’, ‘spacetime curvature’ and so on – and advance relevant theories – because they wish to talk about electrons, viruses, the curvature of spacetime and so on; that is, scientists want to find out about the unobservable world. Theoretical terms provide scientists with the necessary linguistic tools for talking about things they want to talk about.

Putnam’s second argument relates to the role of theories in the confirmation of observational statements. The idea is that theories are often necessary for the establishment of inductive connections between seemingly unrelated observational statements. Here is Putnam’s (1963) own example. Consider the prediction H: ‘When two subcritical masses of U235 are slammed together to form a supercritical mass, there will be a nuclear explosion’. H could be re-written in an observational language – that is, without the t-term ‘Uranium235’ – as O1: ‘When two particular rocks are slammed together, an explosion will happen’. Consider now the available evidence, namely O2: ‘Up to now, when two rocks were put together nothing happened’. Given this, it follows that prob(O1/O2) is very low (if it can be determined at all). But consider the posterior probability of O1 given the past evidence and the atomic theory T which entails that the uranium rocks would explode if critical mass were attained quickly enough. It is obvious that prob(O1/O2&T) is now determined and is much greater than prob(O1/O2) (see the schematic statement at the end of this subsection).

To the challenge of semantic holism and the implication of radical reference-variance, Putnam replied by developing Saul Kripke’s causal theory of reference. In a number of papers in the 1970s (1973, 1974, 1975a), he extended this theory to cover the reference of natural-kind terms, physical-magnitude terms and theoretical terms. A key consequence of this causal theory is that semantic incommensurability is disposed of and the possibility of referential continuity in theory-change is safeguarded. If, for instance, the referent of the term ‘electricity’ is fixed causally, all different theories of electricity refer to and dispute over the same ‘existentially given’ magnitude, namely electricity.

The causal theory makes available a way to compare theories and to allow claims to the effect that the successor theory is more truthlike than its predecessors. Besides, it tallied with Putnam’s considered view that the positive defense of realism is, by and large, an empirical (naturalistic) endeavour. The way the world is constituted and causally interacts with the language-users is an indispensable constraint on the theory and practice of fixing the reference (and meaning) of the language used to talk about the world: the conceptual and linguistic categories scientists use to talk about the world are tuned to accommodate the causal structure of the world.

Given these arguments, the negative case for scientific realism – namely, that instrumentalism patently fails to account for the role, scope and aim of scientific theories – was hard to resist.
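Putnam’s probabilistic point, stated schematically (my notation, mirroring the prob(·/·) usage above):

\[
P(O_1 \mid O_2) \approx 0,
\qquad
P(O_1 \mid O_2 \wedge T) \gg P(O_1 \mid O_2).
\]

The inductive connection between the two observational statements is thus established only via the theory T, whose formulation ineliminably involves the t-term ‘Uranium235’.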

4.2 For realism

Putnam went further by offering a positive argument for scientific realism. In his (1975: 73) he penned the most famous argument for scientific realism – which has become known as the ‘no miracles argument’ (NMA). Here is the argument in full:

The positive argument for realism is that it is the only philosophy that does not make the success of science a miracle. That terms in mature scientific theories typically refer (this formulation is due to Richard Boyd), that the theories accepted in a mature science are typically approximately true, that the same term can refer to the same thing even when it occurs in different theories – these statements are viewed not as necessary truths but as part of the only scientific explanation of the success of science, and hence as part of any adequate description of science and its relations to its objects.

There has been heated debate about this argument (see Psillos 1999: ch. 4; see B. Wray, “Success of science as a motivation for realism,” ch. 3 of this volume). For now, I want to explain the reference in it to Richard Boyd. In his widely circulated and discussed but (still) unpublished manuscript Realism and Scientific Epistemology, Boyd tied the defense of scientific realism to the best (or “the only plausible”) explanation of the fact that scientific methodology has succeeded in producing predictively reliable theories.

Boyd viewed scientific realism as an historical thesis about the “operation of scientific methodology and the relation between scientific theories and the world” (1971: 12). As such, realism is not a thesis about current science only; it is also a thesis about the historical record of science: it claims that there has been convergence to a truer image of the world, even though past theories have been known to have been mistaken in some respects. This historical dimension is necessary if the truth (or partial truth or significant truth) of scientific theories is to be admitted as the best explanation of the predictive reliability of methodology. For unless continuity-in-theory-change and convergence are established, past failures of scientific theories will act as defeaters of the view that current science is on the right track. If, however, realism aims to explain an historical truth – namely, that scientific theories have been remarkably successful in the prediction and control of natural phenomena – the defense of scientific realism can only be a posteriori and broadly empirical. Boyd should in fact be credited with the idea that what came to be known as the explanationist defense of realism should be conducted within a broadly naturalistic framework.

5 The three theses of realism

In light of the Putnam-Boyd understanding, scientific realism c. 1980 incorporated three theses:

a REFERENCE: Theoretical terms refer to unobservable entities;
b TRUTH: Theories are (approximately) true; and
c CONTINUITY: There is referential continuity in theory change.

REFERENCE encapsulates semantic realism, and more specifically a certain non-verificationist reading of scientific theories – what came to be known as a ‘literal or face-value understanding’ of theories. But REFERENCE also implies a certain metaphysical image of the world: as being populated by unobservable entities. REFERENCE implies that (an essential part of) the subject matter of science is the unobservable world. By the same token, however, the metaphysical dimension of scientific realism is captured by not (much) more than the assertion that theoretical entities are real (viz., that theoretical terms genuinely refer).

TRUTH takes realism beyond REFERENCE in asserting that t-entities (at least those referred to by t-terms featuring in true theories) are real: they populate the world. For both Boyd (1971, 1981) and (the 1970s) Putnam, TRUTH implies a certain understanding of truth, namely, truth as correspondence: to say that a theory is true is to say that it corresponds to reality. The chief motivation for such a conception of truth was explanationist. Putnam and Boyd insisted that truth (and reference) plays a key explanatory role: it explains the success of action (more particularly, in the case of science, the success of scientific theories and methodology). That truth has an explanatory function in science is the key idea behind the ‘no miracles’ argument.

To be sure, it is approximate truth that at best can be attributed to scientific theories.3 But the logical point behind the ‘no miracles argument’ is that the success of scientific methodology is best explained by the fact that the theories that indispensably inform this methodology are relevantly true – that is, true in the respects that inform the employment of these methodologies. Some philosophers (e.g., Ghins 2001) have argued that it is not the truth of theory X that explains its empirical success but the fact that the entities and properties posited by X are real. True enough! Yet all that is required to move from reality to truth is semantic ascent.

TRUTH has notable metaphysical implications, namely, that scientific theories are answerable to the world and are made true by the world. The most congenial-to-realism way to develop this insight is by what I have come to call THE POSSIBILITY OF DIVERGENCE. The notion of correspondence is meant to capture the asymmetric dependence of the theories on the world. This asymmetry implies that though empirical success (even empirical adequacy) is a sign of truth, when truth is attributed to the theory, this is a substantive attribution which is meant to imply that the theory is made true by the world; which, in its turn, is taken to imply that it is logically possible that an accepted, successful and well-confirmed theory might be false simply because the world might not conform to it. This POSSIBILITY OF DIVERGENCE is meant to capture a modal fact about the world and, in particular, a sense in which the world is independent of theories, beliefs, warrants, epistemic practices and so on. It requires a conception of truth which distances truth from certain epistemic notions (even idealized ones) such as being ideally warrantedly assertible. Hence, TRUTH implies that realism is committed to a non-epistemic conception of truth.4

Taken together, REFERENCE and TRUTH imply a certain way to view the metaphysics of scientific realism. It is not enough for realism to argue that certain theoretical entities posited by scientific theories are real. They and their properties should be (part of) the truth-makers of theoretical assertions, and they should be mind-independent (in the way suggested by THE POSSIBILITY OF DIVERGENCE).

CONTINUITY takes scientific realism beyond REFERENCE and TRUTH by capturing the all-important notion of convergence in theory-change. This kind of thesis is necessary for convergence, since it ensures that successor theories might well talk about the very same entities that their abandoned predecessors did, even though the now abandoned theories might have mischaracterized these entities. Putnam thought that the failure of CONTINUITY would lead to a disastrous “meta-induction”:

just as no term used in the science of more than fifty (or whatever) years ago referred, so it will turn out that no term used now (except maybe observational terms, if there are any such) refers. (1978: 25)

Then, REFERENCE and TRUTH go by the board too.

Putnam took it, correctly and insightfully I think, that this kind of pessimistic argument calls for a distinctively philosophical answer, namely a theory of reference which allows for referential continuity in theory-change. So the key point is not that the premise of the inductive argument is false. Rather, it is that this kind of argument relies on the implicit assumption that there is radical reference variance in theory change; that is, that a t-term that features in different theories necessarily refers to distinct unobservable entities. So Putnam’s diagnosis was that the historical challenge to realism he envisaged was a golden opportunity to articulate realism in a better way: realism should avoid descriptivist and holistic theories of reference. For it is only on such a theory of reference that, as we have already noted in section 3, it becomes inevitable that every time the theory changes, the meanings of all terms change too; and given that reference is supposed to be fixed by descriptions, meaning change is taken to lead to reference variance. It transpires, then, that adopting a theory of reference, such as the causal theory, which allows for referential stability in theory-change, is indispensable for CONTINUITY and scientific realism.5

6 Looking for a role for history

6.1 The principle of no privilege

Things did not turn out to be very easy for realism. If realism is an historical thesis, the history of science should be called in to support it or undermine it. In her (1976), Hesse advanced what she called “a principle of no privilege”, according to which

our own scientific theories are held to be as much subject to radical conceptual change as past theories are seen to be.

Hesse (1976: 266) put forward an argument that all theories are false.


Every scientific system implies a conceptual classification of the world into an ontology of fundamental entities and properties – it is an attempt to answer the question ‘What is the world really made of?’ But it is exactly these ontologies that are most subject to radical change throughout the history of science. Therefore in the spirit of the principle of no privilege, it seems that we must say either that all these ontologies are true, i.e. we must give a realistic interpretation of all of them, or we must say they are all false. But they cannot all be true in the same world, because they contain conflicting answers to the question ‘What is the world made of?’ Therefore they must all be false.

This argument, it should be clear, implies a substantial role for the history of science. For unless there is a recognizable pattern of change in the ‘ontology of fundamental entities and properties’, it can always be argued that our current scientific theories are not subject to radical change. The rationale for the Principle of No Privilege is predominantly historical, and hence its defense should be historical. As Hesse admitted, the Principle arises “from accepting the induction from the history of science” (1976: 271).

But this is precisely the problem with this Principle: it should be borne out by the history of theory-change in science that all these ‘ontologies’ have been incompatible with each other; hence they cannot all be true. Showing incompatibility presupposes a theory of reference for t-terms which does not allow that the same or different terms featuring in different theories can nonetheless refer to the same entity in the world. And this is precisely the position already challenged by Putnam: it is simply question-begging to adopt a theory of reference which makes it inevitable that there is radical reference variance in theory-change.6

To be sure, Hesse, like almost anyone else in this debate, shares the intuition that falsity cannot genuinely explain the successes of science. Hence she goes on to argue that there is some continuity in theory-change which is not restricted to the “accumulation of true observation sentences” but includes

some theoretical sentences which are carried over fairly directly from a past theoretical framework to our own, that is, which do not depend for their truth on the existence and classification of particular hypothetical entities, but are nearer to pragmatic predictive test. (1976: 274)

Interestingly, these statements include the claim that “water is composed of discrete molecules of hydrogen and oxygen in definite proportions.” This, she says, “is true, though we are not able to specify in ultimate terms what exactly molecules and atoms of water, hydrogen, and oxygen are (Newtonian, Daltonian, quantum, and relativistic field theories tell different stories about them)” (1976: 274).

The issue, then, is this. Is there a sense in which “the revolutionary induction from the history of science about theory change” (Hesse 1976: 268) can be blocked by admitting that the continuity in theory change is substantial? Differently put, Hesse’s argument says nothing about false theories being such that some of them are truer (in their theoretical assertions) than others. And this is precisely the option realists came to exploit.

6.2 Getting nearer to the truth

William Newton-Smith (1981) was perhaps the first to think that the history of science (better: the track record of science) could be used in defence of realism. He took realism to be committed to two theses:


(1) Theories are true or false in virtue of how the world is.
(2) The point of the scientific enterprise is to discover explanatory truths about the world.

He then noted that (2) is under threat “if we reflect on the fact that all physical theories in the past have had their heyday and have eventually been rejected as false” (1981: 14). And he added (ibid.):

Indeed, there is inductive support for a pessimistic induction: any theory will be discovered to be false within, say, 200 years of being propounded. We may think of some of our current theories as being true. But modesty requires us to assume that they are not so. For what is so special about the present? We have good inductive grounds for concluding that current theories – even our most favorite ones – will come to be seen to be false. Indeed the evidence might even be held to support the conclusion that no theory that will ever be discovered by the human race is strictly speaking true. So how can it be rational to pursue that which we have evidence for thinking can never be reached?

It should be obvious that part of the argument that Newton-Smith aimed to neutralize is Hesse’s Principle of No Privilege, cast as a question: “What is so special about the present?” His reply to this argument was that realists should posit “an interim goal for the scientific enterprise. This is the goal of getting nearer the truth” (1981: 14). If this is the goal, Newton-Smith argued, there is no reason to bother with the preceding induction: “its sting is removed”. Accepting the pessimistic induction “is compatible with maintaining that current theories, while strictly speaking false, are getting nearer the truth.”

But the role of the history of science in the defense of realism was suitably restricted to motivating what Newton-Smith called ‘the animal farm move’, namely, that though all theories are false, some are truer than others. He took it that what needed to be defended was the thesis that if a theory T2 has greater verisimilitude than a theory T1, T2 is likely to have greater observational success than T1. And he advanced what he called a transcendental strategy in its defense, which, for all practical purposes I think, is a ‘best explanation’ strategy. The key argument was that there is an “undeniable fact” to be reckoned with, namely, that “in a mature science like physics, contemporary theories provided us with better predictions about the world than their predecessors and have placed us in a better position to manipulate that world” (1981: 196). The reckoning came with the claim that if the ‘greater verisimilitude’ thesis is correct (that is, if theories “are increasing in truth-content without increasing in falsity-content”), then the increase in predictive power would be explained and rendered expectable. This increase in predictive power “would be totally mystifying (. . .) if it were not for the fact that theories are capturing more and more truth about the world” (1981: 196).

This kind of argument, plausible though it may be, dismisses the force of the pessimistic induction all too quickly. Not because Newton-Smith is wrong about the need to focus on near or approximate truth rather than on (full and exact) truth but because the pessimistic induction, if forceful at all, undercuts the explanatory link between success and approximate truth. Hence the realists needed to do some more work to restore this link.

6.3 A confutation of convergent realism

That more work was needed became obvious after the publication of Laudan’s (1981). His history-based argument against realism was precisely meant to show how the link between success and truth is undermined by taking into account the history of science. Laudan formulated his argument via reference – a point alluded to in Putnam’s formulation of realism. But he did aim to block the claim that there is an explanatory connection between (approximate) truth and success – a point raised by Newton-Smith’s argument.

Laudan started by granting “for the sake of argument” that if a theory is approximately true, then it will be successful. He then aimed to show that even if we granted this, “explanatory success” cannot be taken “as a rational warrant for a judgment of approximate truth”. So his aim was to show that the realist thesis is not rationally warranted.

What is the structure of Laudan’s argument? There is some controversy concerning this issue, but the thought has been that if we are to take seriously Laudan’s “plethora” of theories that were “both successful and (so far as we can judge) non-referential with respect to many of their central explanatory concepts” (1981: 33), then the argument is inductive. In particular:

(I) There is a plethora of theories (ratio 6 to 1)7 which were successful and yet not approximately true.
Therefore, it is highly probable that current theories will not be approximately true (despite their success).

Yet this kind of argument has obvious flaws. Two are the most important, I think. The first is that the basis for the induction is hard to assess. This does not just concern the 6:1 ratio – where does it come from? It also concerns the issue of how we individuate and count theories, as well as how we judge success and referential failure. Unless we are clear on all these issues in advance of the inductive argument, we cannot even start putting together the inductive evidence for its conclusion.

The second flaw of (I) is that the conclusion is too strong. It does not just undercut the connection between success and approximate truth; it yields as a conclusion that it is more likely than not that current successful theories are not approximately true. Hence it makes it the case that there is rational warrant for the judgement that current theories are not approximately true. The flaw with this kind of sweeping generalisation is precisely that it totally disregards the strong evidence there is for current theories – it renders it totally irrelevant to the issue of their likelihood of being true. Surely this is unwarranted. Not only because it disregards potentially important differences in the quality and quantity of evidence there is for current theories (differences that would justify treating current theories as more supported by the available evidence than past theories were by the then available evidence) but also because it makes a mockery of looking for evidence for scientific theories! If I know that X is more likely than Y and that this relation cannot change by doing Z, there is no point in doing Z.

If the “plethora” of theories cannot warrant an inductive conclusion, what is its role in Laudan’s argument? Note the stated aim of the argument, namely, to show that “explanatory success” cannot be taken “as a rational warrant for a judgement of approximate truth”. For X to be a rational warrant for Y, X must offer good reasons to accept Y. Past experience of X being correlated with Y is a good reason to accept a future correlation (for non-skeptics about induction, anyway). And conversely, if X and Y have not been correlated in the past, we are not warranted in expecting that they will be correlated currently or in the future. Note that this kind of reasoning does not render it false that X may go with Y currently or in the future. It just undermines the warrant for this kind of judgement or expectation. An alternative way to see the issue is this. Y is supposed to explain X (approximate truth is supposed to explain success). But if X and Y have not been correlated in the past (if X has not been associated with Y, or if [more strongly] X has been associated with not-Y), then the warrant for accepting Y as the (best) explanation of X is undercut.


6.4 The divide et impera strategy

If we think of Laudan's argument as a warrant-remover argument and if we also think that the fate of (past) theories should have a bearing on what we are warranted in accepting now, we should think differently. In Psillos (1996, 1999: ch. 5) I argued that we should think of Laudan's argument as a kind of reductio. And by this (somewhat confusingly, I must now admit), I meant to imply that it is not a proper reductio. As I noted, Laudan's argument aimed to "discredit the claim that there is an explanatory connection between empirical success and truth-likeness" which would warrant the realist view that current successful theories are approximately true. If we view the argument this way, as a potential warrant-remover argument, then the past record of science does play a role in it, since it is meant to offer this warrant-remover. But Laudan was careful to use the qualifier "so far as we can judge" repeatedly. Past theories are non-referential "so far as we can judge", that is, by our own lights. This implied that past theories were false, "so far as we can judge". This means that if we accept current theories to be true, then "so far as we can judge" past theories cannot be true. All this is consistent with leaving it open that current theories are true or false. It just requires that it cannot be the case that both past theories and current ones are true. So my (Psillos 1996) reconstruction of Laudan's argument was as follows:

(P)
(A) Currently successful theories are approximately true.
(B) If currently successful theories are truth-like, then past theories cannot have been.
(C) These characteristically false theories were, nonetheless, empirically successful (the 'historical gambit').

Hence, empirical success is not connected with truth-likeness and truth-likeness cannot explain success: the realist’s potential warrant for (A) is defeated.

(B) is critical for the argument. It is meant to capture discontinuity in theory-change, which I put thus (stated in the material mode): "Past theories are deemed not to have been truth-like because the entities they posited are no longer believed to exist and/or because the laws and mechanisms they postulated are not part of our current theoretical description of the world". In this setting, Laudan's 'historical gambit' (C) makes perfect sense. For unless there are past successful theories which are warrantedly deemed not to be truth-like "so far as we can judge", the previous premise cannot be sustained, and the warrant-removing reductio fails. If premise (C) can be substantiated, success cannot be used to warrant the claim that current theories are true. And there is no way that this premise can be substantiated apart from looking at past successful theories and their fate. History of science is thereby essentially engaged. I still think this is the best way to make sense of the challenge Laudan had in mind in a way that (a) the fate of past theories is seriously taken into account and (b) the argument is seen as warrant-removing.

To respond to this argument, then, realists needed to be selective in their commitments. This response has come to be known as the divide et impera strategy to refute Laudan's argument (see Psillos 1996). The focus of this strategy was on rebutting the claim that the truth of current theories implies that past theories cannot be deemed truth-like. Philip Kitcher (1993) and I (1996, 1999) have argued that there are ways to distinguish between the 'good' and the 'bad' parts of past abandoned theories and to show that the 'good' parts – those that enjoyed evidential support, were not idle components and the like – were retained in subsequent theories. This kind of response suggests that there has been enough theoretical continuity in theory-change to warrant the realist claim that science is 'on the right track'.


To be more precise, the realist strategy proceeds in two steps. The first is to make the claim of continuity (or convergence) plausible, namely, to show that there is continuity in theory-change and that this is not merely empirical continuity: substantive theoretical claims that featured in past theories and played a key role in their successes (especially novel predictions) have been incorporated in subsequent theories and continue to play an important role in making them empirically successful. But this first step does not establish that the convergence is to the truth. For this claim to be made plausible a second argument is needed, namely, that the emergence of this evolving-but-convergent network of theoretical assertions is best explained by the assumption that it is, by and large, approximately true.8

7 Structural realism

The selective realist trend started with the position that John Worrall (1989) dubbed 'structural realism'. This was an attempt to capitalize on the fact that despite the radical changes at the theoretical level, successor theories have tended to retain the mathematical structure of their predecessors. Worrall's thought was that theories can successfully represent the structure of the world, although they tend to be wrong in their claims about the entities they posit. As Worrall put it: the structural realist "insists that it is a mistake to think that we can ever 'understand' the nature of the basic furniture of the universe" (1989: 122). Then, in opposition to scientific realism, structural realism restricts the cognitive content of scientific theories to their mathematical structure together with their empirical consequences. But, in opposition to instrumentalism, structural realism suggests that the mathematical structure of a theory represents the structure of the world (real relations between things). (See I. Votsis, "Structural realism and its variants," ch. 9 of this volume.)

Unsurprisingly, the chief argument for structural realism is a (weak) version of the 'no miracles' argument. The key idea is that though successful novel predictions suggest that the theory has latched onto the world, it is only the structure of the world (as this is expressed by the mathematical structure of the theory) that the theory latches onto. Against the pessimistic induction, structural realism contends that there is continuity in theory-change, but this continuity is (again) at the level of mathematical structure. Hence, the 'carried-over' mathematical structure of the theory correctly represents the structure of the world, and this best explains the predictive success of a theory.

Now, if this kind of argument is to lend any credence to structural realism, it must be the case that the mathematical structure of a theory is somehow exclusively responsible for the predictive success of the theory. But, as I have argued in detail in Psillos (1995), it is not true that the mathematical equations alone – devoid of their physical content – can give rise to any predictions. If structural realism is to employ a version (no matter how weak) of the no-miracles argument in order to claim that retained mathematical equations reveal real relations in the world, it should also admit that some physical content – not necessarily empirical and low level – is also retained. But such an admission would undercut the claim that the predictive success vindicates only the mathematical structure of a theory; by the same token, it would undercut the epistemic dichotomy between the structure and the content of a physical theory.

Structural realism was independently developed in the 1970s by Grover Maxwell (1970, 1970a) in an attempt to show that the Ramsey-sentence approach to theories need not lead to instrumentalism. He called 'structural realism' the view that (i) scientific theories issue in existential commitments to unobservable entities and (ii) all non-observational knowledge of unobservables is structural knowledge, that is, knowledge not of their first-order (or intrinsic) properties but rather of their higher-order (or structural) properties. The key idea here was that a Ramsey-sentence satisfies both conditions (i) and (ii). So we might say that, if true, the Ramsey-sentence RT gives us knowledge of the structure of the world: there is a certain structure which satisfies the Ramsey-sentence, and the structure of the world (or of the relevant worldly domain) is isomorphic to this structure. It should be noted that Maxwell's point against Carnap was that the Ramsey-sentence approach to theories was amenable to a realist construal more than to an instrumentalist one.

Though initially Worrall's version of structural realism was different from Maxwell's, being focused on – and motivated by – Henri Poincaré's argument for structural continuity in theory-change,9 in later work Worrall came to adopt the Ramsey-sentence version of structural realism (see appendix IV of Zahar 2001). So what I (2006) have called Maxwellian-Worrallian structural realism asserts that the world has excess structure over the appearances, but this excess structure can be captured (hypothetico-deductively) by the Ramsey-sentence of an empirically adequate theory.

Recall from section 3 that Carnap's insight in the 1950s was that a scientific theory T is logically equivalent to the following conjunction: RT & (RT → T), where the Ramsey-sentence captures the factual content of the theory, and the conditional RT → T captures its analytic content (it is a meaning postulate). Precisely because the so-called Carnap-sentence is analytic, Carnap thought that characterizing the excess content of a theory over its Ramsey-sentence as talking about certain unobservable entities (e.g., electrons) or as talking indifferently about whatever satisfied the theory (even if these were taken to be numbers and sets thereof) was a matter of linguistic choice. But for a realist this cannot be a matter of choice of language.

In any case, it turns out that if the Ramsey-sentence RT is true, the theory T must be true: it cannot fail to be true. Is there a sense in which RT can be false? A Ramsey-sentence may be empirically inadequate. Then it is false. But if it is empirically adequate (if, that is, the structure of observable phenomena is embedded in one of its models), then it is bound to be true. For, as Max Newman (1928) first noted in relation to Russell's structuralism, given some cardinality constraints, it is guaranteed that there is an interpretation of the variables of RT in the theory's intended domain.10 Though Carnap felt at home with this result, since empiricism could thus accommodate the claim that theories are true without going much beyond empirical adequacy, reducing truth to empirical adequacy is a problem for those who want to be realists, even if just about structure. For it is no longer clear what has been left for someone to be realist about.

This is a pretty damaging objection to structural realism. The only way out is for structural realism to abandon pure structuralism and to treat structure as being defined by real or natural relations. Having first specified these natural relations, one may abstract away their content and study their structure. But if one begins with the structure, then one is in no position to tell which relations one studies and whether they are natural or not.11
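The Ramsey-sentence machinery at issue in this section can be stated compactly. In standard notation (a textbook rendering, not a quotation from Carnap, Maxwell or Worrall): where a finitely axiomatized theory T contains theoretical terms t1, . . ., tn and observational vocabulary O, the Ramsey-sentence replaces each theoretical term with an existentially bound (second-order) variable:

$$R_T \;=_{df}\; \exists X_1 \ldots \exists X_n\, T(X_1, \ldots, X_n; O)$$

Carnap's decomposition is then:

$$T \;\equiv\; R_T \wedge (R_T \rightarrow T)$$

with the factual content carried by $R_T$ and the Carnap-sentence $R_T \rightarrow T$ treated as analytic. In this notation, Newman's observation is that the purely existential $R_T$ imposes little more than a cardinality constraint: any domain with enough elements can be equipped with relations satisfying it, provided the observational consequences hold.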

8 Concluding thoughts

Four decades after the 'no miracles argument' and the 'pessimistic induction', where does the realism debate stand? It seems fair to say that a key realist claim, namely, that science does offer knowledge of the unobservable part of nature, has been vindicated. Currently, all sides of the debate – with the exception of constructive empiricism – admit that science does offer epistemic access to some unobservable parts of reality. Hence, the unobservable is not, ipso facto, epistemically inaccessible. Old empiricism-motivated claims that scientific knowledge is restricted to whatever is given in immediate experience and observation hold no weight anymore.12


By the same token, however, the rivalry to scientific realism has now shifted to the general point that there is a sharp epistemic division to be drawn within the unobservable – that is, between those aspects of the unobservable that are epistemically accessible and those that are not. Structural realists, for instance, draw the division between the knowable (unobservable) structure of nature and whatever is left to 'fill in' the structure – objects, entities, natures and the like. Contextual instrumentalists (Stanford 2006) draw the division between those entities to which there is an independent route of epistemic access (mediated by theories that cannot be subjected to serious doubt) and those entities to which all supposed epistemic access is mediated by high-level theories. The former are epistemically accessible, while the latter are said to be impenetrable. Semi-realists (cf. Chakravartty 2007) draw the division between detection properties and auxiliary properties of particulars; and so on. The common denominator of all these dichotomous positions is this: there is a principled limit to the scientific knowledge of the world. (The limit is different in different positions, but it is always principled and definite.) The realist victory is that this division is within the realm of the unobservable. But the realist defeat is that some aspect of the unobservable is, for principled reasons, inaccessible.

In my own work I have tried to argue that there is no good reason (either a priori or a posteriori) to think that there is a principled epistemic division between what can be known of nature and what cannot. There might be parts of nature that science might never be able to map out, but these do not fall nicely within a conceptual category which captures one side of a sharp epistemic dichotomy (the unknown X: the noumena; the non-structure; the intrinsic properties; the auxiliary properties; whatever-there-is-only-thin-epistemic-access-to; whatever-there-is-only-theory-mediated-access-to; and the like).

Though the epistemic debate still goes on, the focus of attention has been shifting from epistemology towards metaphysics and ontology. The key question seems to be the following: if we take science seriously and if we take scientific theories as true, or approximately true, are we thereby committed to a certain way to understand the deep structure of the world? Are we committed to substantive accounts of causation, laws, necessity, properties and other key metaphysical categories? Or are deflationary accounts good enough? As I have put it in Psillos (2013), the key contrast is between a neo-Aristotelian scientific realism and a neo-Humean one. I have personally sided with the neo-Humeans, but currently lots of interesting work is done on this front. The current enthusiasm for ontic structuralism is a case in point.

Notes

1 Earlier empiricist approaches understood the truth-conditions of theoretical assertions reductively, but this is a different story. For a detailed discussion, see Psillos (1999: ch. 1).
2 A notable exception is Philipp Frank (1932), whose instrumentalism, in modern terminology, is a form of non-cognitivism: theories are symbolic tools that do not (aim to) represent anything which is not antecedently given in experience.
3 The 'story' of approximate truth and truth-likeness (note: these are distinct concepts) is long and complex. For an account see Psillos (1999: ch. 11). See also Niiniluoto (1987), Kuipers (2000) and G. Schurz, "Truthlikeness and approximate truth," ch. 11 of this volume.
4 For a different take on the relation between theory and truth, see Devitt (1984). See also J. Asay, "Realism and theories of truth," ch. 30 of this volume.
5 The pure causal theory of reference fails for various reasons, as I noted in Psillos (1999: ch. 12). There, I articulated a causal-descriptive theory of reference as part of the realist toolbox. For more on this, see Psillos (2012).
6 Hesse, like almost everyone else at that time, made a connection between reference and ontology in that the ontological commitments of the theory are reflected in the (putative) reference of its theoretical terms. Hence, whether or not there is continuity in ontology among successive theories was taken to be the same as the existence or not of referential continuity. In her argument, Hesse relied precisely on the possibility, "emphasized by revolutionaries," that, as she put it, "all our theoretical terms will, in the natural course of scientific development, share the demise of phlogiston" (1976: 271).
7 Laudan (1981: 35) noted the famous 6-to-1 ratio: "I daresay that for every highly successful theory in the past of science which we now believe to be a genuinely referring theory, one could find half a dozen once successful theories which we now regard as substantially non-referring."
8 The divide et impera strategy has generated considerable discussion. For some recent takes on it see Cordero (2011) and Vickers (2013). See also P. Vickers, "Historical challenges to realism," ch. 4 of this volume.
9 For a detailed analysis of Poincaré's structural realism in relation to his conventionalism, see Psillos (2014).
10 For more on this see Psillos (1999, 2001, 2006). See also Demopoulos (2003).
11 For more on this see Psillos (2009), Ainsworth (2009), Cruse (2005) and Cruse and Papineau (2002). Partly because of the failures of the standard (so-called 'epistemic') version of structural realism and partly because of independent reasons, an 'ontic' version of structuralism has acquired currency. I won't go into the debates around ontic structural realism. For an overview and recent developments see Ladyman and Ross (2007), French (2014) and Psillos (2009, 2016).
12 It must be stressed, however, that logical empiricists defended some anti-metaphysical version of scientific realism – see Psillos (2011), and also M. Neuber, "Realism and logical empiricism," ch. 1 of this volume.

References

Ainsworth, P. (2009) "Newman's Objection," The British Journal for the Philosophy of Science 60, 135–171.
Boyd, R. (1971) Realism and Scientific Epistemology, unpublished typescript.
——— (1981) "Scientific Realism and Naturalistic Epistemology," in P. D. Asquith and T. Nickles (eds.), PSA 1980 (Vol. 2), East Lansing: Philosophy of Science Association, pp. 613–662.
Carnap, R. (1958) "Beobachtungssprache und Theoretische Sprache," Dialectica 12, 236–248. Translated as "Observation Language and Theoretical Language," in J. Hintikka (ed.), Rudolf Carnap, Logical Empiricist (1975), Dordrecht: Reidel.
Chakravartty, A. (2007) A Metaphysics for Scientific Realism: Knowing the Unobservable, Cambridge: Cambridge University Press.
Cordero, A. (2011) "Scientific Realism and the Divide et Impera Strategy: The Ether Saga Revisited," Philosophy of Science 78, 1120–1130.
Craig, W. (1956) "Replacement of Auxiliary Expressions," The Philosophical Review 65, 38–55.
Cruse, P. (2005) "Ramsey-Sentences, Structural Realism and Trivial Realisation," Studies in History and Philosophy of Science 36, 557–576.
Cruse, P. and Papineau, D. (2002) "Scientific Realism without Reference," in M. Marsonet (ed.), The Problem of Realism, Aldershot: Ashgate, pp. 174–189.
Demopoulos, W. (2003) "On the Rational Reconstruction of Our Theoretical Knowledge," The British Journal for the Philosophy of Science 54, 371–403.
Devitt, M. (1984) Realism and Truth (Second Revised edition 1991), Oxford: Blackwell.
Feigl, H. (1950) "Existential Hypotheses: Realistic versus Phenomenalistic Interpretations," Philosophy of Science 17, 35–62.
Fraassen, B. C. van (1980) The Scientific Image, Oxford: Clarendon Press.
Frank, P. (1932) The Law of Causality and its Limits (M. Neurath and R. S. Cohen, trans.), Dordrecht: Kluwer.
French, S. (2014) The Structure of the World: Metaphysics and Representation, Oxford: Oxford University Press.
Ghins, M. (2001) "Putnam's No-Miracle Argument: A Critique," in S. P. Clarke and T. D. Lyons (eds.), Recent Themes in the Philosophy of Science, Norwell, MA: Kluwer Academic Publishers, pp. 121–137.
Hempel, C. (1958) "The Theoretician's Dilemma: A Study in the Logic of Theory Construction," in Minnesota Studies in the Philosophy of Science, Minneapolis: University of Minnesota Press, pp. 37–98.
Hesse, M. B. (1976) "Truth and Growth of Knowledge," in F. Suppe and P. D. Asquith (eds.), PSA 1976 (Vol. 2), East Lansing: Philosophy of Science Association, pp. 261–280.
Kitcher, P. (1993) The Advancement of Science, Oxford: Oxford University Press.
Kuipers, T.A.F. (2000) From Instrumentalism to Constructive Realism, Dordrecht: Reidel.
Ladyman, J. and Ross, D. (2007) Every Thing Must Go: Metaphysics Naturalised, Oxford: Oxford University Press.
Laudan, L. (1981) "A Confutation of Convergent Realism," Philosophy of Science 48, 19–49.
Maxwell, G. (1970) "Theories, Perception and Structural Realism," in R. Colodny (ed.), The Nature and Function of Scientific Theories, Pittsburgh: University of Pittsburgh Press, pp. 3–34.
——— (1970a) "Structural Realism and the Meaning of Theoretical Terms," in Analyses of Theories and Methods of Physics and Psychology (Minnesota Studies in the Philosophy of Science, Vol. 4), Minneapolis: University of Minnesota Press, pp. 181–192.
Newman, M.H.A. (1928) "Mr. Russell's 'Causal Theory of Perception'," Mind 37, 137–148.
Newton-Smith, W. H. (1981) The Rationality of Science, London: RKP.
Niiniluoto, I. (1987) Truthlikeness, Dordrecht: D. Reidel Publishing Company.
Psillos, S. (1995) "Is Structural Realism the Best of Both Worlds?" Dialectica 49, 15–46.
——— (1996) "Scientific Realism and the 'Pessimistic Induction'," Philosophy of Science 63, S306–S314.
——— (1999) Scientific Realism: How Science Tracks Truth, London and New York: Routledge.
——— (2001) "Is Structural Realism Possible?" Philosophy of Science 68, S13–S24.
——— (2006) "Ramsey's Ramsey-Sentences," in M. C. Galavotti (ed.), Cambridge and Vienna: Frank P. Ramsey and the Vienna Circle (Vienna Circle Institute Yearbook 12), New York: Springer, pp. 67–90.
——— (2009) Knowing the Structure of Nature, London: Palgrave-Macmillan.
——— (2011) "Choosing the Realist Framework," Synthese 190, 301–316.
——— (2012) "Causal-Descriptivism and the Reference of Theoretical Terms," in A. Raftopoulos and P. Machamer (eds.), Perception, Realism and the Problem of Reference, Cambridge: Cambridge University Press, pp. 212–238.
——— (2013) "Semirealism or Neo-Aristotelianism?" Erkenntnis 78, 29–38.
——— (2014) "Conventions and Relations in Poincaré's Philosophy of Science," Methode-Analytic Perspectives 3, 98–140.
——— (2016) "Broken Structuralism," Metascience 25, 163–171.
Putnam, H. (1963) "'Degree of Confirmation' and Inductive Logic," in P. Schilpp (ed.), The Philosophy of Rudolf Carnap, La Salle, IL: Open Court. Reprinted in Mathematics, Matter and Method, pp. 270–292.
——— (1965) "Craig's Theorem," Journal of Philosophy 62. Reprinted in Mathematics, Matter and Method, pp. 228–236.
——— (1973) "Explanation and Reference," in G. Pearce and P. Maynard (eds.), Conceptual Change, Dordrecht: Reidel, pp. 196–214.
——— (1974) "Philosophy of Language and Philosophy of Science," in R. S. Cohen and Marx W. Wartofsky (eds.), PSA 1974, Dordrecht: Springer, pp. 603–610.
——— (1975) Mathematics, Matter and Method (Philosophical Papers, Vol. 1), Cambridge: Cambridge University Press.
——— (1975a) "The Meaning of 'Meaning'," in K. Gunderson (ed.), Language, Mind and Knowledge (Minnesota Studies in the Philosophy of Science, Vol. 7), Minneapolis: University of Minnesota Press, pp. 131–193.
——— (1978) Meaning and the Moral Sciences, New York: Routledge and Kegan Paul.
Ramsey, F. ([1929] 1931) "Theories," in R. B. Braithwaite (ed.), The Foundations of Mathematics and Other Essays, London: Routledge and Kegan Paul, pp. 212–236.
Sellars, W. (1963) Science, Perception and Reality (re-issued 1991), Atascadero, CA: Ridgeview Publishing Company.
Stanford, P. K. (2006) Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, Oxford: Oxford University Press.
Vickers, P. (2013) "A Confrontation of Convergent Realism," Philosophy of Science 80, 189–211.
Worrall, J. (1989) "Structural Realism: The Best of Both Worlds?" Dialectica 43, 99–124.
Zahar, E. (2001) Poincaré's Philosophy: From Conventionalism to Phenomenology, La Salle, IL: Open Court.

PART II
Classic debate: core issues and positions

3 SUCCESS OF SCIENCE AS A MOTIVATION FOR REALISM

K. Brad Wray

1 Introduction

Science is undeniably very successful. All those involved in the debate, realists and anti-realists, are struck by this fact. Science is successful in a number of ways. Our scientific theories enable us to make very accurate predictions of observable phenomena. For example, from Isaac Newton's theory, Edmond Halley predicted the return of a comet decades in advance. The precision with which scientists have been able to predict some phenomena is quite startling. Urbain Le Verrier predicted the location of the hitherto unseen planet Neptune within 55' of arc on the basis of Newtonian mechanics and observed anomalies in the orbit of Uranus (see Grosser 1979 [1962]: 118).

Realists marvel at the success of science and see it as providing compelling grounds for believing that the claims our theories make about unobservable entities are likely approximately true and that the unobservable entities posited by our best theories likely exist and likely have the properties that our theories attribute to them. Anti-realists, though, are skeptical about theoretical knowledge. They question whether the success of our current best theories offers adequate support for scientific realism. All arguments in support of realism involve, in one way or another, some aspect of the success that we attribute to science. The key issues I will discuss here include:

1 What sense of success can the realist appeal to?
2 How are the various kinds of successes meant to support realism?
3 How convincing are the realists' arguments?
4 What are the anti-realist responses to these arguments?

2 The No Miracles Argument

The No Miracles Argument (NMA), one of the key realist arguments, is motivated by the desire to explain the success of science. The No Miracles Argument notes that if the success of our current scientific theories is not due to the fact that they are true or at least approximately true with respect to what they say about unobservables, then the success of these theories is a miracle (see Putnam 1978). No one is claiming that the success of science is a miracle. So the realist urges us to accept that the success of science is best explained by the hypothesis that our current best theories are probably true or approximately true.


This explanation for the success of science is presented as an argument for scientific realism. It seems that because the realist offers the best explanation for the success of science, we have reason to believe that our theories are in fact approximately true. This pattern of reasoning is non-deductive. It is often referred to as “abductive reasoning” or alternatively as an “inference to the best explanation,” or IBE for short (see J. Saatsi, “Realism and the limits of explanatory reasoning,” ch. 16 of this volume). The best explanation for the success of science, the fact that our theories enable us to make very accurate predictions, is that they are approximately true. Alternative explanations, for example the miracle explanation, are far less plausible. Though it is not logically impossible that our theories misrepresent the world and their success is just a happy accident, realists claim that it is quite unlikely that this is the case.

2.1 Refining the No Miracles Argument

More recently, Alan Musgrave has suggested that a revised version of the No Miracles Argument would strengthen the case for realism (Musgrave 1988: 239–240). Musgrave believes that what needs to be explained are not just any successes of a theory but rather a specific class of successes, the successful predictions of novel phenomena. "A predicted fact is a novel fact for a theory if it was not used to construct the theory" (Musgrave 1988: 232).

Musgrave refines the No Miracles Argument further. He suggests that it is not enough to merely show that the best explanation for the success of a particular scientific theory is that it is approximately true. Rather, the explanation that appeals to the truth must pass some sort of threshold of plausibility. The exact details of when an explanation does pass the threshold are not specified, and Musgrave recognizes that this issue needs to be addressed (see Musgrave 1988: 239). But if a theory does in fact predict some otherwise unpredictable phenomena that it was not designed to account for, then it seems there is some reason to believe the theory is likely approximately true.

Though Musgrave suggests that this is a strong argument in support of scientific realism, he does raise a concern for the Ultimate Argument, as he calls it. Contemporary realists and anti-realists are divided on the goals of science. Realists believe that scientists seek to explain the phenomena (see Musgrave 1988: 246). Some contemporary anti-realists, on the other hand, suggest the goal of science is merely to "save the phenomena," that is, account for the observables. As a result, anti-realists are not likely to be persuaded by an explanationist defense of realism, the sort of defense offered by the No Miracles Argument (see Musgrave 1988: 249; see also Lipton 2004: 186). Anti-realists, after all, do not think explanation is a goal of science.

Larry Laudan raises a more serious concern for the No Miracles Argument. He argues that inferring the truth of a theory from its success is an instance of the fallacy of affirming the consequent (see Laudan 1981: 45). And Greg Frost-Arnold (2010) argues that the scientific realism that appears in the No Miracles Argument makes no new predictions and thus fails as a scientific theory about the success of science (see also Doppelt 2005: 1080).1 Eric Barnes (2002), though, argues that a version of the No Miracles Argument can be defended provided it can be shown that "the rate at which empirically successful theories emerge is significantly higher than one can attribute to chance" (Barnes 2002: 117).

2.2 Novel predictions

As mentioned, one thing that makes the scientific realists' No Miracles Argument seem compelling is the fact that our best theories often enable scientists to predict novel phenomena, phenomena that our theories were not initially designed to explain or predict.


Jarrett Leplin (1997) believes that novel predictions provide the best defense of scientific realism. It is certainly impressive when a scientist is able to derive some hitherto-unnoticed observable consequence from a theory and then discovers that the world is as she predicts. Galileo, for example, predicted that Venus would exhibit the full range of phases like the moon if the Copernican theory were correct. The Ptolemaic model for Venus's orbit, on the other hand, was incompatible with this prediction.2 Importantly, the phases of Venus can only be seen with the aid of a telescope, and over the course of many months. In order to secure his priority in making the discovery, Galileo sent the prediction encoded in an anagram in a letter to Johannes Kepler. When Galileo's observations confirmed that Venus does exhibit the range of phases, Copernicus' theory did attract the attention of more astronomers.3 One can see why the realist is tempted to say that such successes are inexplicable if they are not a consequence of the fact that the theory from which the predictions were derived is in fact approximately true.

But throughout the history of science there are numerous cases in which vindicated predictions of novel phenomena were derived from false theories. Peter Vickers (2013) and Timothy Lyons (2002) have identified a number of cases in which various novel phenomena were successfully predicted on the basis of theories that were subsequently discovered to be false (see also Saatsi and Vickers 2011). Lyons provides a long list of novel successes from 13 different theories that we now regard as false, including predictions derived from the caloric theory, Newtonian mechanics, Fresnel's wave theory of light, and Dalton's atomic theory (see Lyons 2002: 70–72). Some of these theories supported a number of successful predictions of novel phenomena. Even the phlogiston theory enabled scientists to generate vindicated predictions of novel phenomena. For example, "Priestley predicted and confirmed that heating a calx with 'inflammable air' (hydrogen) would turn it into a metal" (see Vickers 2013: 191). These examples of vindicated predictions of novel phenomena derived from false theories suggest that the realists' appeal to novel predictions does not settle the issue in favor of the realists' explanation for the success of science.

3 Rethinking the evidence of success

Realists who appeal to the No Miracles Argument assume that success is a reliable indicator of the truth of a theory. A number of philosophers, though, have suggested that the realists' No Miracles Argument commits the base-rate fallacy. In the No Miracles Argument the relevant base-rate is the rate of approximately true theories in the population of successful theories. If there are many successful but false theories and few approximately true successful theories, then it is fallacious to infer that a successful theory is likely true (see Howson 2000: 54; Magnus and Callender 2004: §§2–4; Lipton 2004: 197–198; Worrall 2012: §4.3). Colin Howson (2013) expresses the point in the following way. The "fallacy . . . consists in ignoring the dependence of the posterior probability on the prior [probability]" (2013: 206). The prior probability of a successful theory being true is quite low if in fact there are many successful but false theories. For success to be a reliable indicator, it would have to be the case that most successful theories are true (see Wray 2013).

Laudan has developed a sustained attack against the realists' claim that success and truth are connected in a way that would make success a reliable indicator of the truth of our theories. Laudan (1981) argues that the realist is mistaken (i) in regarding success as a reliable indicator of the truth of a theory, and (ii) in regarding success as a reliable indicator that the theoretical terms in a theory genuinely refer, that is, that the theoretical entities posited by our successful theories really exist. Laudan argues that there are a number of successful theories in the history of science that, by the lights of today's best theories, we regard as false. Hence, not all successful theories are even approximately true. Similarly, Laudan argues that many successful theories of the past have turned out not to have genuinely referring theoretical terms. For example, the phlogiston theory had a number of successes, but contemporary scientists believe there is no such substance as phlogiston. Hence, not all successful theories have genuinely referring theoretical terms. Laudan's general point is that theoretical truth in science is not systematically tied to empirical success, as realists assume. But realists generally assume that success is a reliable indicator of truth. Laudan has extended this attack by suggesting that other theoretical virtues, like scope and simplicity, are not tied to the truth in any reliable or systematic way. The fact that one theory purports to explain more things than a competitor theory does not give us adequate reason to believe the theory with the larger scope is true or even more likely true than the competitor (see Laudan 2004).

Eric Winsberg (2006) also argues that success and truth are not linked in the strong way that realists assume in the No Miracles Argument. Winsberg is specifically concerned with simulation models that scientists design and use that incorporate "contrary-to-fact principles" that make the models more accurate than they would otherwise be (Winsberg 2006: 1). Such models pose a serious challenge to any form of realism that takes success to be a reliable indicator of the truth of scientific theories and models. It is only with the aid of the contrary-to-fact principles that the models are able to generate the successes they do.

Peter Lewis (2001) has understood the connection between success and truth differently from Laudan. Lewis does not think that the realist needs to show that most successful theories are true in order to claim that success is a reliable indicator of the truth. Rather, Lewis believes that success is a reliable indicator of truth provided a higher proportion of successful theories are true than are unsuccessful theories. Thus, even if most successful theories are false, success can be a reliable indicator.4 K. Brad Wray (2013) has challenged Lewis on this claim. Wray argues that in order to support the realists' claim about the approximate truth of our current best theories, the realist needs to argue that most successful theories are approximately true. Otherwise, the fact that a theory is successful provides little reason for believing it is approximately true.
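The base-rate worry can be made concrete with Bayes's theorem; the numbers below are purely illustrative, since no party to the debate claims to know the actual base rates. Writing T for 'approximately true' and S for 'successful':

$$P(T \mid S) = \frac{P(S \mid T)\,P(T)}{P(S \mid T)\,P(T) + P(S \mid \neg T)\,P(\neg T)}$$

Suppose, for illustration, that only 5% of theories are approximately true, that approximately true theories are always successful, and that 20% of false theories are successful anyway. Then $P(T \mid S) = 0.05/(0.05 + 0.2 \times 0.95) \approx 0.21$. Even a perfect link from truth to success leaves the posterior probability low when the prior is low – precisely the dependence on the prior that Howson highlights, and the reason Wray insists that the realist needs most successful theories to be approximately true.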

4 Abductive inference

The No Miracles Argument has given rise to a number of related debates, including a debate about the power and scope of abductive reasoning.

Richard Boyd (1984) argues that abductive reasoning figures in the strongest defense for realism. Boyd believes that a key challenge facing realists and anti-realists is to explain the instrumental reliability of the methods of science. Realists and anti-realists believe that the methods of science are instrumentally reliable; they enable scientists to develop theories that generate and continue to generate true predictions. Boyd argues that the best explanation for the reliability of our current best methods is that they are conducive to leading scientists to develop theories that are approximately true. These methods, Boyd argues, were developed with the aid of background theories that are themselves approximately true. The anti-realist, he suggests, provides no explanation for the instrumental reliability of our current best methods.

Boyd claims that the reliability of scientific methods explains why the history of science is marked by ever-increasingly more accurate theories (see Boyd 1984: 59, 64). Further, Boyd believes that the anti-realists' "rejection of abduction . . . would place quite remarkable strictures on intellectual inquiry" (Boyd 1984: 67). Without abduction as a resource, our scientific knowledge would be extremely limited. It would amount to a form of skepticism far beyond the sort of skepticism of theoretical knowledge that contemporary anti-realists aim to defend.


Eric Barnes, though, argues that Boyd's explanation for the success of science just pushes the need for an explanation of the success of science further back. According to Barnes, Boyd needs to explain why it is that scientists were able to develop approximately true background theories (see Barnes 2002: 100–101).

Peter Lipton (2004) provides an extensive analysis and defense of abductive reasoning in science and everyday life, including its relevance to the realism/anti-realism debate. He is less sanguine about the power of abductive reasoning. People, and scientists in particular, use abductive reasoning on a regular basis. For example, a person may "infer that the fuse is blown . . . because none of the lights or electrical appliances in the kitchen are working" (Lipton 2004: 202). This is a typical example of abductive reasoning. Positing a hitherto-unseen blown fuse explains the phenomena and does so better than alternative explanations. Scientists also use abductive reasoning. An archaeologist may find the skeletal remains of two humans at an excavation, where one is buried 30 centimeters below the other. She then concludes that the best explanation for the relative proximity of the two sets of human remains is that the one found deeper in the ground died at an earlier time. Clearly, the archaeologist could be mistaken, but, other things being equal, this is a reasonable inference.

But Lipton thinks that the No Miracles Argument involves an illegitimate application of abductive reasoning. Lipton believes that unlike the Inferences to the Best Explanation that scientists make, the inference in the No Miracles Argument is not a causal Inference to the Best Explanation. Consequently, it does not provide warrant for the conclusion (see Lipton 2004: 196). Further, Lipton claims that the underdetermination of theory choice by data poses a serious threat to the No Miracles Argument. This makes an Inference to the Best Explanation unwarranted in this context. Theory choice is underdetermined when there are two or more theories that can account for the available data equally well. Often the competing theories cited are merely logically possible alternative theories. But sometimes in the history of science scientists have encountered transient underdetermination, a choice between two real theories that are equally well supported given the available data. The existence of competing theories that are as successful as the currently accepted theory undermines the inference from the success of a theory to its truth (Lipton 2004: 196).

Bas van Fraassen (1989) has developed a more wide-ranging attack on abduction than Lipton's, one that has significant implications for the realists' appeal to abductive inference. Van Fraassen argues that there is no principle of abductive reasoning that is binding on all rational agents. There is no rule of reasoning that says it is irrational not to believe "the best explanation." Van Fraassen notes that unless we have good reason to believe that we are choosing between a set of alternative explanations that includes the truth in it, then reasoning abductively can be detrimental. When the set of alternative explanations or theories we are choosing between does not include the true explanation or theory, then we will be led to believe the best of a bad lot (see van Fraassen 1989: §4).

Van Fraassen's position is frequently misunderstood, so it is worth emphasizing that he is not saying that it is irrational to reason abductively. His claim is weaker than this. He claims that the canons of rationality do not demand that one believe an explanation that is merely supported by an inference to the best explanation. In fact, the canons of rationality do not demand or prescribe belief in any scientific theory. It is enough for scientists to accept the best supported theory, where acceptance involves a commitment to work with a theory, which is something less than belief in the truth of a theory.

Van Fraassen also argues that the Ultimate Argument for Scientific Realism considers a rather narrow set of explanations, specifically (i) our theories are likely true, or (ii) the success of our theories is due to a miracle. These options hardly exhaust the possibilities. Van Fraassen argues that there is bound to be an alternative explanation for the success of science, and quite likely a more plausible one (van Fraassen 1980: 38–39; see also Wray 2007). Van Fraassen's preferred alternative is discussed in detail in what follows.

5 Selective realism: isolating the cause of success

Insofar as realists are driven by the need to explain the success of science, they only need to be realists about those parts of theories that are responsible for the successes of our theories (see P. Vickers's "Historical challenges to realism," ch. 4 of this volume). Some realists have attempted to defend realism by isolating those parts of theories that are responsible for their success. These realists recognize that no theory is likely completely true. But they also insist that it is quite probable that those theories that have been successful are either approximately true or have truth-like qualities. This form of realism is called "selective realism" in recognition of the fact that the realist can be selective about which of the claims implied by her theory she is committed to. Alternatively, this position is called "deployment realism" or "preservative realism."

Both Philip Kitcher (1993) and Stathis Psillos (1999) have defended a form of selective realism. Kitcher distinguishes between the working posits of a theory, those elements that are causally responsible for a theory's successes, and the presuppositional posits of a theory. The latter are the sorts of things that are abandoned over time, as scientists refine their theories. The former, on the other hand, are retained through theory change. With this distinction, Kitcher believes he can reconcile realism with revolutionary changes of theory, which seem to be an undeniable part of science. Similarly, Psillos recognizes that scientists are apt to (i) develop better theories in the future and (ii) discover that their predecessors held false beliefs. But he argues that through a change of theory some elements of the theory that is being replaced are retained and that these are the parts of the theory that were causally responsible for the successes of the theory. Consequently, Psillos argues that theory change need not threaten the growth of theoretical knowledge.

Selective realism has some prima facie plausibility. It is a modest form of realism and acknowledges that in the future scientists are apt to develop even better theories, even in those fields in which scientists are able to make extremely precise predictions. But some critics have suggested that it is only retroactively that the selective realist can identify which elements in a theory are the working posits. Selective realists are in no position to indicate which posits in a currently accepted theory are the genuine working posits that will be retained in the future and which are merely presuppositional and thus apt to be abandoned in the future. Its explanatory power is post hoc (see Stanford 2003: 569; Chakravartty 2007: 46). In this respect, selective realism is of limited value.

Gerald Doppelt raises a different concern for selective realists. He believes that selective realists, Psillos in particular, fail to realize that "realists are committed to accounting for the explanatory success of theories, not their mere empirical adequacy or instrumental reliability" (Doppelt 2005: 1076). That is, "what the . . . realist needs to but cannot explain is the explanatory success of theories; why theories succeed in producing simple, consilient, intuitively plausible, unifying, and complete accounts of observed phenomena" (2005: 1084).

Despite these criticisms, selective realism is still regarded as a viable position. David Harker (2010), for example, argues that a stronger selective realism can be defended that focuses on the comparative achievements of successor theories. If a new theory is more accurate than its predecessor, we have reason to believe that the new theory is relatively closer to the truth and that the greater accuracy is due to the elements of the new theory that are different from those of its predecessor. Harker, though, rightly resists the temptation to speculate how close our current theories are to the truth.


6 The varieties of success in science

In recent decades there has been a proliferation of types of scientific realism. These various positions are developed in response to some of the concerns discussed already. Common to all of them is a recognition that realists may need to be more modest than they have been in the past. The ways in which our theories latch onto reality are more circumscribed than previously assumed.

Some realists have conceded that changes of theory have been ubiquitous in the history of science and are likely to continue into the future. But some insist that through radical theory change progress is still possible. This line of defense has taken a variety of forms. Some claim that scientists often retain the theoretical equations used in earlier theories. The fact that a theoretical equation is retained through a change of theory suggests that the equation is getting at the underlying structure of reality. This position is called structural realism and is defended by John Worrall (1989) and James Ladyman (1998), among others.5 Structural realists claim that the theoretical equations describe the formal structure of the world, and this is where scientific knowledge is growing. James Clerk Maxwell, for example, was able to retain Augustin-Jean Fresnel's theoretical equations despite the fact that Maxwell made radically different assumptions about the nature of light. Whereas Fresnel regarded light as a wave moving through the ether, Maxwell regarded light as a periodic disturbance "in the 'disembodied' electromagnetic field" (see Worrall 1989: 116). As far as Worrall is concerned, the mistake of many is to think that we are deepening our understanding of the theoretical entities posited by our theories. Despite the fact that Fresnel's and Maxwell's beliefs about the nature of light were mistaken, the persistence of the theoretical equations through radical changes of theory suggests that scientists really have latched onto an understanding of the underlying structure of phenomena related to light.

Alternatively, both Ian Hacking and Anjan Chakravartty emphasize the fact that scientists are able to manipulate the world in very precise and subtle ways. This is a different type of success than prediction, the focus of the arguments discussed earlier. Hacking (1983) defends a view called "entity realism," which he developed as he worked on scientific experimentation while he studied the scientists at the Stanford Linear Accelerator Center (SLAC; see Hacking 1983: ch. 16). Hacking notes that when scientists can use a particular theoretical entity in their investigations of other purported entities, they have strong evidence that the entity they use as an instrument exists, even if they may be mistaken about some of the properties of that type of entity. For example, Hacking notes that scientists studying photons use electron guns as tools in their investigations. Hacking claims that the predictable way in which scientists manipulate electrons as instruments provides strong evidence that electrons exist. Importantly, Hacking notes that the various scientists involved in such experiments may have very different views about the properties of electrons (Hacking 1983: 264). Axel Gelfert (2003) argues that Hacking's suggested criterion of manipulative success is not a sufficient condition for determining the existence of entities. So-called quasi-particles satisfy the criterion despite the fact that they are not entities.6

Hacking's entity realism suggests that there is no plausible global defense of realism. Rather, we are required to consider the case for each type of theoretical entity independently. Similarly, Magnus and Callender argue that the only defensible arguments in support of realism are local arguments pertaining to specific entities and scientific specialties. Thus, in order to make any progress in the realism/anti-realism debate, the battle needs to be fought on a case-by-case basis (Magnus and Callender 2004; see also Achinstein 2002).

More recently, Chakravartty (2008) has argued that scientists are accumulating knowledge of the properties they study rather than the theoretical entities they posit. In fact, Chakravartty believes that the structure that is captured in the equations that persist through theory change captures the relations between properties. Chakravartty's view can be regarded as a form of Structural Realism. Importantly, his view provides an answer to a criticism that is frequently raised against Structural Realism: what exactly is structure?

Kitcher (2001) develops what he calls a "Galilean strategy" in defense of realism, an argument aimed at showing that anti-realists are overly skeptical about the claims that scientists make about unobservable entities. As the name suggests, Kitcher draws inspiration from Galileo. In Starry Messenger, Galileo reported his first telescopic discoveries: the uneven surface of the moon, the fact that there are many more stars in the sky than can be seen with the naked eye, and the hitherto unseen moons of Jupiter. When challenged about the reliability of the telescope as an instrument for astronomical discoveries, Galileo argued that because we can confirm the reliability of the telescope on earth, and there is no reason to believe that the telescope is unreliable when looking at celestial objects, there is no reason to believe that the various things he reported seeing through the telescope are not in fact as they appear through the telescope. Galileo thus shifted the burden of proof to his skeptical opponents. Similarly, Kitcher argues that unless there is some independent argument against extending the reliability of a scientific instrument from what we can observe directly to what we can only detect indirectly, there is no reason not to be realists about the sorts of entities that figure in our successful theories. The anti-realist, Kitcher claims, owes us an argument for why we should not trust our instruments and theories that have proved successful with respect to directly observable entities.

Unlike Kitcher, P. D. Magnus thinks that the onus for proof is on the realist to supply some independent grounds for thinking that the reliability of instruments can be extended into realms beyond our direct perception (see Magnus 2003). Magnus grants that in the case of Galileo's telescopic observations, there was reason to believe that optical laws did not change from one locale to another. But Magnus doubts that realists are in a similar position with respect to many of the other sorts of unobservable entities postulated by scientific theories.7

On the one hand, the proliferation of varieties of realism shows the ingenuity of realists in meeting the challenges posed by anti-realists. On the other hand, the lack of consensus amongst realists seems to suggest that they are, in some respects, on the defensive. Realism may be more threatened than is commonly thought.

7 Anti-realist alternatives

Realists often claim that anti-realists are unable to explain the success of science. If we reflect on the nature of the No Miracles Argument we realize that the realist is claiming that there really is no alternative to the realists' explanation for the success of science. To attribute the success to a miracle really amounts to no explanation at all. No one seriously believes that the success of our current theories is a result of a miracle. This has put anti-realists on the defensive. If anti-realism is to be at all plausible, then anti-realists will need to give some indication of how it is that our best theories are able to deliver successes on a regular basis, and often with great precision.

Van Fraassen (1980) suggests that the anti-realist can explain the success of science by appealing to a selection mechanism, a mechanism comparable to natural selection in the biological world. Van Fraassen argues that the success of our current best theories is a consequence of the fact that theories that fail to generate accurate predictions are abandoned. Importantly, van Fraassen's selectionist explanation does not assume that successful theories are likely approximately true with respect to what they say about unobservables. Van Fraassen (1980) compares the selection of successful theories to the fact that all the mice we encounter run from cats. Those mice that do not run from cats are less apt to survive long enough to breed and leave offspring. Other things being equal, those mice that do run from cats are more likely to survive and produce offspring. As van Fraassen notes, we need not assume the mice that run from cats perceive cats to be their natural enemies. All the mice need to do is run from cats. Successful theories are similar. All a theory needs to do in order to survive is to be successful, to generate true predictions of observable phenomena. Strictly speaking, even less is required. All a theory needs to do is be more successful than existing competing theories (see Wray 2010).

Realists are critical of this anti-realist explanation for the success of science, noting that it is unlikely that a false theory would continue to remain successful over time. Despite the common-sense plausibility of this criticism, as a matter of fact we know from the history of science that false theories have enjoyed successes, and often for long periods of time.

Wray (2007, 2010) has defended van Fraassen's selectionist explanation for the success of science against a series of criticisms. Wray notes that scientific success is a relative term. Standards of success, including standards of accuracy, have changed over time. And a successful theory is just one that is better than the available alternatives. Wray also argues that provided a theory is applied to problems and phenomena similar to the problems and phenomena it was initially designed to address, it should not surprise us that a successful theory continues to be successful.

This is not the only anti-realist response to the realists' challenge for an explanation for the success of science. Kyle Stanford (2000) argues that the most we can reasonably infer from the predictive successes of a theory is that either (i) it is approximately true or (ii) it is predictively similar to the truth. Psillos objects to Stanford's argument, claiming that his appeal to predictive similarity is parasitic on the truth (see Psillos 2001: 352). Psillos argues that insofar as predictive success is distinct from the truth, the predictive success of one theory in a specific scientific specialty cannot explain the predictive successes of successor theories in that specialty.

8 Concluding remarks
Those who appeal to the success of science as support for scientific realism face some serious challenges:

1 They need to clarify the notion of success relevant to their arguments. Though science is successful in many ways, maybe only some successes provide evidence that our theories are latching onto reality. And the traditional appeal to explanatory success seems threatened by evidence from the history of science.
2 They need to be more precise about the correlation between the success of a theory and the nature of their realist commitments. And it seems clear that the base-rate fallacy poses a threat to probabilistic versions of the No Miracles Argument.
3 They may also want to explore ways in which local arguments for realism can be developed. A global argument like the traditional No Miracles Argument seems doomed.

Further reading
Stathis Psillos's Scientific Realism: How Science Tracks Truth is the most useful introduction to scientific realism in general. Chapter 4 provides an analysis of the No Miracles Argument, and Chapter 5 provides a clear and accessible introduction to his selective realism. Alan Musgrave's "The Ultimate Argument for Scientific Realism" (in R. Nola, ed., Relativism and Realism in Science) also provides an accessible and useful introduction to the No Miracles Argument. Peter Lipton's Inference to the Best Explanation (2nd edition) is a useful guide to Inference to the Best Explanation and includes an analysis of the Miracles Argument, as he calls it, in Chapter 11. John Worrall's "Structural Realism: The Best of Both Worlds?" (in Dialectica) remains one of the best introductions to Structural Realism, which, though an elusive position, is still a very influential view in the contemporary debates. Larry Laudan's "A Confutation of Convergent Realism" (in Philosophy of Science) provides a sustained attack on the alleged connection between empirical success and theoretical truth.

Acknowledgement
I benefited enormously from critical feedback from Juha Saatsi and Lori Nash. Part of the research for this article was completed while I was a Visiting Scholar in the Department of Linguistics and Philosophy at the Massachusetts Institute of Technology. I gratefully acknowledge the support of MIT. I also thank the State University of New York, Oswego, for my sabbatical leave in the 2015–2016 academic year, which provided me with the time to work on this project.

Notes
1 Psillos argues that it is a mistake to treat scientific realism as a scientific hypothesis. Psillos explains that "scientific realism is not a theory; it's a framework which makes possible certain ways of viewing the world" (Psillos 2011: 33).
2 In order to account for the fact that Venus never appears more than 45° from the sun, in devising a model for Venus Ptolemy stipulated that the center of Venus's epicycle must always lie on a line running from the center of the earth to the sun. Given this restriction, Venus would not exhibit the full range of phases, as the moon does.
3 The phases of Venus were compatible with Tycho Brahe's theory. Consequently, Galileo's vindicated prediction did not persuade all astronomers to accept the Copernican theory.
4 Lewis's notion of a reliable test or indicator is adopted from reliable medical tests. A test for cancer, for example, is regarded as reliable if it has a low rate of false positives and a low rate of false negatives. False positives are test results that indicate cancer when in fact the subject does not have cancer. False negatives are test results that indicate no cancer when in fact the subject has cancer (see Lewis 2001: 374–375).
5 Worrall suggests that the position can be traced back to Henri Poincaré (see Poincaré 2001 [1903]).
6 Matthias Egg distinguishes between causal warrant and theoretical warrant and argues that there is causal warrant for realism about particle physics (see Egg 2012). Physicists, Egg claims, are sometimes warranted in making an inference to the most likely cause of the effects they study in the lab, as they did with the discovery of neutrinos.
7 Kitcher and Magnus misrepresent the historical details in this case. Well into the early 1600s, it was commonly believed in scientific circles that the terrestrial realm and the celestial realm were made of radically different substances. The terrestrial realm was made of four basic elements: earth, air, water, and fire. The celestial realm, on the other hand, was made of quintessence or ether, an indestructible element. On the basis of this distinction, it seems reasonable to doubt the veracity of the telescope with respect to observations of celestial objects. So the burden of proof may have been on Galileo.

References
Achinstein, P. (2002) "Is There a Valid Experimental Argument for Scientific Realism?" Journal of Philosophy 99, 470–495.
Barnes, E. C. (2002) "The Miraculous Choice Argument for Realism," Philosophical Studies 111, 97–120.
Boyd, R. N. (1984) "The Current Status of Scientific Realism," in J. Leplin (ed.), Scientific Realism, Berkeley and Los Angeles: University of California Press, pp. 41–82.
Chakravartty, A. (2007) A Metaphysics for Scientific Realism: Knowing the Unobservable, Cambridge: Cambridge University Press.
——— (2008) "What You Don't Know Can't Hurt You: Realism and the Unconceived," Philosophical Studies 137, 149–158.
Doppelt, G. (2005) "Empirical Success or Explanatory Success: What Does Current Scientific Realism Need to Explain?" Philosophy of Science 72, 1076–1087.


Egg, M. (2012) "Causal Warrant for Realism about Particle Physics," Journal for General Philosophy of Science 43(2), 259–280.
Fraassen, B. C. van (1980) The Scientific Image, Oxford: Clarendon Press.
——— (1989) Laws and Symmetry, Oxford: Clarendon Press.
Frost-Arnold, G. (2010) "The No-Miracles Argument for Realism: Inference to an Unacceptable Explanation," Philosophy of Science 77(1), 35–58.
Gelfert, A. (2003) "Manipulative Success and the Unreal," International Studies in the Philosophy of Science 17(3), 245–263.
Grosser, M. (1979 [1962]) The Discovery of Neptune, New York: Dover Publications, Inc.
Hacking, I. (1983) Representing and Intervening: Introductory Topics in the Philosophy of Natural Science, Cambridge: Cambridge University Press.
Harker, D. (2010) "Two Arguments for Scientific Realism Unified," Studies in History and Philosophy of Science 41, 192–202.
Howson, C. (2000) Hume's Problem: Induction and the Justification of Belief, Oxford: Oxford University Press.
——— (2013) "Exhuming the No-Miracles Argument," Analysis 73(2), 205–211.
Kitcher, P. (1993) The Advancement of Science: Science without Legend, Objectivity without Illusions, Oxford: Oxford University Press.
——— (2001) "Real Realism: The Galilean Strategy," Philosophical Studies 110(2), 151–197.
Ladyman, J. (1998) "What Is Structural Realism?" Studies in History and Philosophy of Science 29, 409–424.
Laudan, L. (1981) "A Confutation of Convergent Realism," Philosophy of Science 48, 19–49.
——— (2004) "The Epistemic, the Cognitive, and the Social," in P. Machamer and G. Wolters (eds.), Science, Values and Objectivity, Pittsburgh: University of Pittsburgh Press, pp. 14–23.
Leplin, J. (1997) A Novel Defense of Scientific Realism, Oxford: Oxford University Press.
Lewis, P. (2001) "Why the Pessimistic Induction Is a Fallacy," Synthese 129(3), 371–380.
Lipton, P. (2004) Inference to the Best Explanation (2nd ed.), London: Routledge.
Lyons, T. D. (2002) "Scientific Realism and the Pessimistic Meta-Modus Tollens," in S. Clarke and T. D. Lyons (eds.), Recent Themes in the Philosophy of Science, Dordrecht: Kluwer Academic Publishers, pp. 63–90.
Magnus, P. D. (2003) "Success, Truth and the Galilean Strategy," British Journal for the Philosophy of Science 54(3), 465–474.
Magnus, P. D. and Callender, C. (2004) "Realist Ennui and the Base Rate Fallacy," Philosophy of Science 71(3), 320–338.
Musgrave, A. (1988) "The Ultimate Argument for Scientific Realism," in R. Nola (ed.), Relativism and Realism in Science, Dordrecht: Kluwer Academic Publishers, pp. 229–252.
Poincaré, H. ([1903] 2001) Science and Hypothesis, in H. Poincaré, The Value of Science: Essential Writings of Henri Poincaré, New York: The Modern Library.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
——— (2001) "Predictive Similarity and the Success of Science: A Reply to Stanford," Philosophy of Science 68, 346–355.
——— (2011) "The Scope and Limits of the No Miracles Argument," in D. Dieks et al. (eds.), Explanation, Prediction, and Confirmation (The Philosophy of Science in a European Perspective 2), Dordrecht: Springer, pp. 23–35.
Putnam, H. (1978) Meaning and the Moral Sciences, London: Routledge and Kegan Paul.
Saatsi, J. and Vickers, P. (2011) "Miraculous Success? Inconsistency and Untruth in Kirchhoff's Diffraction Theory," British Journal for the Philosophy of Science 62, 29–46.
Stanford, P. K. (2000) "An Anti-Realist Explanation of the Success of Science," Philosophy of Science 67, 266–284.
——— (2003) "Pyrrhic Victories for Scientific Realism," Journal of Philosophy 100(11), 553–572.
Vickers, P. (2013) "A Confrontation of Convergent Realism," Philosophy of Science 80, 189–211.
Winsberg, E. (2006) "Models of Success versus the Success of Models: Reliability without Truth," Synthese 152, 1–19.
Worrall, J. (1989) "Structural Realism: The Best of Both Worlds?" Dialectica 43, 99–124.
——— (2012) "Miracles and Structural Realism," in E. Landry and D. Rickles (eds.), Structural Realism: Structure, Object, and Causality, Dordrecht: Springer, pp. 77–95.
Wray, K. B. (2007) "A Selectionist Explanation for the Success and Failures of Science," Erkenntnis 67(1), 81–89.
——— (2010) "Selection and Prediction," Erkenntnis 72(3), 365–377.
——— (2013) "Success and Truth in the Realism/Anti-Realism Debate," Synthese 190, 1719–1729.

4 HISTORICAL CHALLENGES TO REALISM

Peter Vickers

1 Preliminaries
When a scientific theory achieves striking explanatory and/or predictive successes, should we conclude that it is probably a true theory? Or approximately true, at least? Scientific realists have often urged that we should. Predictive successes have been held in particularly high regard, with various realists insisting that it would be tantamount to a miracle for a radically false theory to accurately predict some new result or phenomenon. Realists often point to current theories generally regarded as true and draw lessons concerning the relationship between scientific success and truth. For example, realists might claim that we now know that the universe started with a 'big bang', in part because Big Bang theory correctly predicted the 3° Kelvin cosmic microwave background radiation.1 And realists might claim that we now know that the theory of evolution is true, given the extraordinary depth and breadth of the explanations provided by the theory. A success-to-truth inference is apparently warranted in such cases.
However, sceptics insist that such a success-to-truth inference is seriously undermined by the history of science. The basic problem is the sheer number of examples from the history of science of scientific success which was not born of (even approximately) true theories. This thought has been turned into an argument by various individuals going back more than one hundred years, at least as far as Poincaré (1905). Many such arguments differ in significant ways, but most of them take the form of a pessimistic induction, since an important part of the argument concerns making an inference from a significant number of past failures to pessimism concerning our current theories. This form of argumentation was made especially explicit in Laudan (1981). In this well-known paper a list of successful, radically false (not even approximately true) theories is presented. Laudan urges that for any one of these cases scientific realists of the day would have concluded that the theory in question was true or at least approximately true. And yet they would have been mistaken. What grounds, Laudan asks, do we have for making just that sort of inference concerning our current theories given these historical lessons?
Of course, one might insist that we are in a privileged position now: science has moved on dramatically, even since the 19th century, never mind the 18th century or earlier still. But is contemporary science different from past science in such a way that the sceptic's inductive inference is undermined? The realist might want to answer 'yes', on the grounds that current theories are much more successful than past theories. This has led to some thorny issues concerning how to measure success (see below). The realist might also want to answer 'yes' on the grounds that current theories are much more 'mature' than past theories. This has led to some thorny issues concerning what constitutes 'mature' science or theory (cf. Rohrlich and Hardin 1983; Psillos 1999: 107). For many sceptics, an appeal to 'maturity' is an ad hoc move on the part of the realist. After all, at any time in the history of science just such a move could have been made to justify a realist attitude towards the favoured theories of the day. But later on those theories were abandoned.
But even if the realist has an answer to give here, the sceptic can present the pessimistic induction in another way, with a different inductive base. In recent years one particularly influential way to do this has been to focus on theorists rather than on theories (Stanford 2006: 47). This (new) induction can be put as follows. We start with the realist claim that scientists are often justified in believing their current best theories given the successes of those theories, combined with the fact that nobody can conceive of any other theory that could do the same job as well or better. But the sceptic insists that time and time again in the history of science we were in just that position, and yet future scientific developments did produce an alternative theory that came to be accepted as a replacement for the antecedent theory. This sort of argument has come to be known as the problem of unconceived alternatives. Its inductive base is said to be stronger than that of the traditional pessimistic induction on the grounds that although scientific theories might be significantly different today compared with those advocated many years ago (e.g. in terms of the amount of empirical evidence accumulated), there is no reason to suppose that current theorists are better able to come up with possible alternatives than past theorists were.
Overall, then, the historical challenges ask some serious questions of scientific realism. Already since Laudan (1981) the realist has been forced to adjust her position in certain ways in response to such challenges. With Stanford's challenge this process of adjustment is further continued.2

2 Questions concerning realist intuitions and 'historical evidence'
It is important to note that realists (typically) do not claim that scientific success is always correlated with truth. If that were the claim, then the sceptic wouldn't need to make use of any inductive argument; a single counterexample would be enough. Instead realists (typically) claim that success is strongly correlated with truth, that a scientific theory exhibiting a certain degree of success is likely to be true. Considering her inductive base the sceptic disagrees, preferring to conclude that the truth of the theory is unlikely. An influential recent argument suggests that by framing the argument in these terms both sides of the debate are committing a 'base rate fallacy' (e.g. Howson 2000: 52–54). The basic problem is this: suppose there is a 95% chance of a true theory being successful and a 5% chance of a false theory being successful (cf. Dicken 2013: 565). Then suppose we identify a successful theory. Is it likely to be true? Perhaps not, since it depends on the base rate of true theories in the population compared to the number of false theories. If there are very few true theories and a vast number of false theories, then the successful theory at hand is much more likely to be a false theory, despite the fact that only 5% of false theories are successful. The problem now is that there appears to be no way to judge the ratio of true-to-false theories without begging the question at issue. Thus Magnus and Callender (2004) argue that the realism debate largely boils down to comparing indefensible intuitions. (A worked illustration of the base-rate arithmetic is given at the end of this section.)
This sort of meta-level worry about the realism debate runs even deeper when one considers another issue. This concerns the very idea of using historical 'data' as evidence for/against a philosophical position. Some have argued that philosophical theses cannot be (even partly) tested by history (e.g. Pitt 2001; Schickore 2011). It is said that all history requires a perspective, all 'historical data' is theory laden, and there is no such thing as 'pure history'. For example, consider the contrast between Psillos (1994) and Chang (2003): both apparently provide a careful historical analysis of the caloric theory of heat, but Psillos finds here support for realism, whereas Chang finds instead a threat to the realist position. Psillos (p. 162) even writes, "I do not deny that my use of historical evidence is not neutral – what is? – but rather seen in a realist perspective". Is Chang's use of history also not neutral, and biased by an anti-realist perspective? Again we reach a concern that the realism debate largely boils down to comparing intuitions. This time the intuitions affect the way one reads the history.
These are important challenges. There are, however, various promising avenues of response for realists. On the base-rate issue the intuition persists for realists and anti-realists alike that a large number of very challenging historical cases would (and should!) have a significant impact on the realism debate. Those in the debate might accept that it is difficult to articulate why that should be the case, given the base-rate concern, and yet insist that it is (or even must be) the case. One line of argument concerns the weight put on the concept scientific theory when the base rate challenge is articulated. Some philosophers (e.g. Mark Wilson) decry the long-standing 'Theory T' paradigm of debates within the philosophy of science, and I have argued that the concept scientific theory should be eliminated from much philosophy of science, with debates transformed accordingly (Vickers 2013a, 2014). If the realism debate is transformed so that 'theories' do not take centre stage as units of analysis (and arguably this is already happening – see below), then the base-rate challenge, as it is currently articulated in the literature, fails to hit the target.
Turning more briefly to the issue of historical bias, the realist might well accept that there is no such thing as 'pure' history and that every historical reconstruction requires a perspective, and yet insist that history can be unbiased vis-à-vis what is at stake in the realism debate. Or at least, it can be unbiased vis-à-vis certain questions in the realism debate. For example, realists and anti-realists alike might accept that it is very difficult or impossible to find any support for realism from the history of science, and yet insist that it is in principle possible to build a strong case against realism from the history of science. As Kinzel (2015: 54) has recently argued, it is hardly guaranteed that any scientific realist position will always be compatible with all (reasonably reported) historical cases.
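To make the base-rate worry concrete, here is a minimal worked calculation. The 95% and 5% likelihoods are the ones used above (cf. Dicken 2013); the base rate of one true theory per hundred is a purely illustrative stipulation, not anyone's estimate:

\[
P(\text{true} \mid \text{success}) \;=\; \frac{P(\text{success} \mid \text{true})\,P(\text{true})}{P(\text{success} \mid \text{true})\,P(\text{true}) \,+\, P(\text{success} \mid \text{false})\,P(\text{false})} \;=\; \frac{0.95 \times 0.01}{0.95 \times 0.01 + 0.05 \times 0.99} \;\approx\; 0.16.
\]

On this stipulated base rate, a successful theory is still far more likely false than true; with a base rate of, say, one in two, the very same likelihoods yield a posterior of 0.95. The verdict turns entirely on the prior ratio of true to false theories, which is exactly the quantity Magnus and Callender argue neither side can estimate without begging the question.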

3 Contemporary selective scientific realism
So much for the meta-level issues; what of the actual historical challenges? It's not straightforward to give a list of historical challenges, since what counts as a challenge depends on the particular way realism is articulated. Laudan (1981) includes in his list, for example, the humoral theory of medicine, which is surely not approximately true despite its (perceived)3 explanatory achievements. If scientific realists claimed that only approximately true theories could enjoy such explanatory achievements, then this case would constitute strong evidence against realism. But contemporary realists don't make that claim. Indeed, this case is now irrelevant on at least two counts: (i) contemporary realists have a 'high bar' for the sort of scientific success that should elicit a realist commitment, and often novel predictive successes are required, and (ii) realists now, almost universally, make a doxastic commitment only to certain parts or aspects of the scientific theory which has achieved sufficient success, meaning that the theory in question taken as a whole might well fall short of approximate truth. This latter condition gives rise to the term 'selective realism'.4 With these conditions in place, an initial stab at defining Contemporary Selective Scientific Realism (CSSR) would be as follows:

CSSR: When scientists achieve very significant successes (successful predictions of novel phenomena, and perhaps 'deep' or 'unifying' explanations), then we have (very) good grounds for making a doxastic commitment to the theoretical elements which did the scientific work to bring about those successes (where that doxastic commitment consists in believing those theoretical elements to be at least approximately true).

This position then fractures into a range of more specific contemporary realisms, depending on (i) precisely how the realist likes to think about ‘success,’ and (ii) precisely how the realist likes to think about what ‘doing scientific work’ amounts to and the theoretical constituents which are said to be doing that work. Thus some historical cases will threaten some contemporary realists but not others. But without getting into these fine details we can already update Laudan’s (1981) list with cases which are at least prima facie relevant to the general CSSR position.

4 Historical challenges to realism
What are the really serious historical challenges to CSSR? Vickers (2013b) presents a list of 20 examples which are potentially relevant for the contemporary debate, since they are examples of novel predictive success issuing from (broadly speaking) false theories. However, at least some of these cases are not serious challenges and can be dismissed very quickly for one reason or another. For example, Velikovsky's theory of the solar system enjoyed at least one novel predictive success (it predicted that the surface of Venus would be found to be 'hot'), but it performed very poorly in all sorts of ways, such that no realist worth her salt would ever have dreamt of entertaining anything but a very low degree of belief in the relevant assumptions. Such examples are not serious challenges, but they do show that CSSR, as defined earlier, really is only an initial stab at articulating the realist's position, and further qualifications are in order. After all, Velikovsky did achieve novel predictive success, so this case shows that the realist cannot state that novel predictive success by itself is sufficient for a realist commitment.
By contrast, the most serious historical challenges will be those in which it seems likely that a contemporary realist who had lived through that historical episode would have made a serious doxastic commitment to something that was later completely rejected as a candidate for truth. The more cases we have like this, the less confidence we will have in such a realist when she advises us that we should make certain doxastic commitments concerning our current best scientific theories. And how history bears on our attitude towards our current best scientific theories is what really matters in all this. Given the current state of play, serious historical challenges to CSSR include the following:

(1) Influential explanations and also successful predictions (e.g. the speed of sound in air) of phenomena concerning temperature, based on not-even-approximately-true (NEAT)5 assumptions concerning 'caloric'. (Psillos 1999: ch. 6; Chang 2003; Votsis and Schurz 2012)

(2) Influential explanations, and also successful predictions (e.g. the prediction that heating a calx with 'inflammable air' would turn it into a metal), of phenomena concerning combustion, based on NEAT assumptions concerning an 'element' known as 'phlogiston'. (Ladyman 2011; Schurz 2011)

(3) Influential explanations, and also successful predictions (e.g. the famous ‘white spot’) based on the NEAT assumption that light is a wave in a medium (the luminiferous ether). (Psillos 1999: ch. 6; Saatsi 2005; Stanford 2006: ch. 7; Cordero 2011)


(4) Dirac’s prediction of the positron based on NEAT assumptions concerning holes in the ‘Dirac sea’ of negative-energy electrons. (Pashby 2012)

(5) Kirchhoff’s superbly accurate predictions concerning the diffraction of light through an aperture based on a NEAT account of the behaviour of the light within the aperture plane. (Saatsi and Vickers 2011; Vickers 2016)

(6) Sommerfeld’s predictions of the hydrogen fine structure spectral line frequencies – perceived at the time to be in excellent agreement with experiment – based on NEAT assumptions concerning the behaviour of the hydrogen electron in its ‘orbit’ around the proton nucleus. (Vickers 2012)

In each of these six cases we have at least two of the ingredients we need for a serious challenge to contemporary realist positions: (i) very significant scientific successes and (ii) NEAT assumptions which are apparently responsible (when combined with some other assumptions) for said successes.
Which realist positions can respond to these six challenges? First of all, one might examine the successes and consider whether any of the cases can be dismissed simply on the grounds that the success is not significant enough for realist commitment (cf. Psillos 1999: 104–108: 'success too-easy-to-get'). For example, some realists insist that novel predictive success is necessary for realist commitment, and cases (1) and (2) are more commonly associated with explanatory successes. But this response does not seem sufficient, since cases (1) and (2) are now recognised to actually feature the required sort of predictive success. This is especially obvious once the realist admits 'use novel' predictive success: predictions of phenomena already known to exist, but (roughly speaking) where knowledge of the phenomenon is not used to construct/adjust the hypotheses in question (ibid.: 106ff.). Furthermore, it is not even obvious that novel predictive success is necessary for realist commitment. For example, I think it is fair to say that every realist makes a doxastic commitment to (the central claims of) evolutionary theory. And it seems that it is predominantly (if not exclusively) the extraordinary explanatory achievements of the theory which motivate such realist commitment. As Stanford (2009: 383) puts it, if the realist were to completely ignore explanatory success in science she would thereby "give up realism concerning theories in the many domains of science for which confirmation seems to come by way of broad explanatory scope and unifying power (important parts of the biological sciences, for example)".
Of course, the realist could raise the bar for 'sufficient success' still higher, such that all of the examples (1) through (6) are irrelevant vis-à-vis realist commitment. This is where Stanford's 'threshold problem' comes in (Stanford 2009: 384, fn. 3). For one thing, this seems like a very ad hoc way for the realist to respond to the historical threat. For another thing, if these examples are not successful enough then the so-called 'realist' will hardly ever make a realist commitment. This sort of move from the realist would have to be an absolute last resort. Of course, anti-realists such as Stanford may think that the realist has already made this move and has already turned 'realism' into something of a joke. However, the realist might respond that the emphasis on predictive success as opposed to explanatory success has been motivated independently of the historical challenges: there is a difference in kind between successfully predicting something and cooking up a theory to explain a known phenomenon. This doesn't help, however, with examples (1) through (6), since these are all cases of novel predictive success. To dismiss one or more of these cases on the grounds of 'success too-easy-to-get' would mean making a distinction between impressive and unimpressive novel predictive successes (or similar). This might be possible, following Fahrbach (2011) and others who insist on introducing degrees of success into the realism debate. But then Stanford's threshold problem comes back to haunt the realist: what (non-question-begging) motivation could there be for insisting on one standard of novel predictive success over another?

5 Confirmation theory and overall evidence
The realist might reply by bringing in degrees of belief and an interpretation of the No Miracles argument which utilises one or another theory of confirmation (e.g. Bayesianism). Here the No Miracles argument is no longer about making a doxastic commitment based on an isolated novel predictive success. It is, rather, a case of balancing any such success with:

(i) any non-empirical evidence for/against the theory, which will bear on the prior probability we assign to the relevant hypotheses, and also,
(ii) all available empirical confirmations and disconfirmations of the theory.

Applying consideration (i), cases (1) through (6) could be deemed irrelevant if the prior probability of the theory could reasonably be considered so low that even a 'miraculous' empirical success would not engender a high degree of belief.6 However, this would be a desperate move for the realist in cases (1) through (6), where each theory was (more or less) well motivated when the empirical successes materialised.7 More promising here is for the realist to apply consideration (ii). Cases (1) through (6) might feasibly be dismissed on the grounds of success too-easy-to-get if, notwithstanding the novel predictive successes, one can also identify dramatic disconfirmations of the assumptions involved. But this is awkward for a number of reasons. There are umpteen examples in the history of science of a mismatch between theory and evidence, but where ultimately we found that the theory was fine and the mismatch was due to some experimental error or mathematical error or similar.8 In addition, disconfirmations often mount up gradually, only some time after the novel predictive successes. For how long after a significant confirmation does one search around for disconfirmations before one forms a (very) high degree of belief? It seems, on the one hand, that a knee-jerk doxastic response to a novel predictive success is not appropriate (cf. Harker 2013: fn. 24), but neither is a response which is delayed for many years.
There is also the question of how to judge the significance of confirmations and disconfirmations in order to balance them and translate the result into a doxastic attitude, especially since confirmation and disconfirmation raise such different issues. How can one possibly hope to judge whether a quantitative novel prediction successful to four significant figures should raise our degree of belief from a prior of 0.5 to a posterior of 0.7, 0.8, or 0.9? It is widely agreed that formal methods such as those central to Bayesian confirmation theory can only ever hope to assist us in our reasoning and will always leave a lot of important questions unanswered (see e.g. Forster 1995; Milne 2003; Brössel and Huber 2015).
An abundance of literature on Bayesian epistemology (and confirmation theory more generally) now becomes directly relevant to the scientific realism debate. Indeed, it is remarkable just how long the realism debate and the Bayesian literature have remained quite separate, with little mutual influence or cross-fertilisation. Milne (2003) presents them as two opposing epistemologies. By contrast, Dorling (1992) and Howson (2013) are serious attempts to try to bring them together. Howson sees significant difficulties for this project, but (at the very least) since Bayesians have formulated various approaches to the complications noted, realists might well want to make use of some of the answers that have been put forward.
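To see why the question resists a formal answer, consider a minimal sketch of the conditionalization involved. With a prior of 0.5 (the value used above), Bayes' theorem makes the posterior a function of the likelihood ratio alone; nothing in this sketch is specific to any particular historical case:

\[
P(H \mid E) \;=\; \frac{\lambda}{\lambda + 1}, \qquad \text{where } \lambda = \frac{P(E \mid H)}{P(E \mid \neg H)},
\]

so posteriors of 0.7, 0.8, and 0.9 require likelihood ratios of roughly 2.3, 4, and 9 respectively. The formalism thus relocates rather than answers the question: everything depends on judging how improbable the four-significant-figure success would be were the hypothesis false, and that is precisely the judgement the Bayesian machinery leaves to us.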


Coming back to the issue immediately at hand, the realist might hope to move forward without getting embroiled in the technicalities of Bayesianism. Perhaps we can keep things quite intuitive and still hope to identify at least some cases in which the disconfirmation is so significant (however you measure it) that any novel predictive success is outweighed, and the realist isn't motivated to make a doxastic commitment to the theoretical assumptions in play. Are any of the cases (1) through (6) like this? Case (2) is instructive here. Phlogiston theory ran into serious problems over the course of the 18th century. For example, phlogiston needed to have negative mass to account for the fact that certain metals gain mass when they are burned. Lavoisier read a paper to the Royal Academy of Sciences of Paris in 1783, which discussed various serious problems and was intended as a coup de grâce for the phlogiston theory (Best 2015). But the theory had been dominant since the early 18th century; a serious disconfirmation in the mid- to late 18th century would not in itself prevent a realist commitment to the theory in the early 18th century. However, this assumes that there were major successes of the theory in the early 18th century – the sorts of successes that would warrant realist commitment according to contemporary scientific realism. Is that the case? Perhaps not. Harker (2013: 80) discusses one major success of phlogiston theory as relevant for the realism debate, "a result of seemingly profound significance". However, the result in question was first published in Priestley (1783), the very same year (!) that Lavoisier dealt his coup de grâce. Thus a realist in 1783 considering available evidence for and against the relevant hypotheses might well have ended up with a middling degree of belief at best.
But this doesn't seem promising as a realist response. It might be true that no CSSR realist living through the 18th century would ever have made a realist commitment to phlogiston. But that is little comfort to the realist if it depends on a highly contingent chronology of the successes and failures of the theory. Absent the required detailed history, suppose for a moment it is true that by the time there were enough successes for realist commitment to phlogiston theory there were also enough disconfirmations to prevent realist commitment. Even if that is true, it seems reasonable to suppose that the chronology could have been different, such that the relevant successes (e.g. Priestley's 1783 result) could have come earlier, and the relevant disconfirmations could have come later. If that is accepted, then the realist has to admit that it is largely down to luck that a contemporary realist living through the 18th century would never have made a doxastic commitment to phlogiston. If things had worked out slightly differently, the realist would have made a commitment and then later would have been embarrassed by that commitment when phlogiston was completely rejected. And of course we don't know how the chronology of successes and failures in our current best theories will work out: perhaps all the big successes have come first, and the big failures are waiting round the corner.

6 The selective realist strategy
If the realist cannot respond to cases (1) through (6) by drawing on confirmation theory, then what are the other options? Most obvious, given the current emphasis on selective realist commitment, is to try to show that the 'working posits' in each case actually are approximately true. However, cases (1) through (6) have been selected specifically because the significant successes in each case do seem to stem from assumptions which are NEAT (not even approximately true), contrary to CSSR.
Thinking about case (1), Chang (2003) acknowledges Psillos's selective realism vis-à-vis the caloric theory but argues that "the historical record of the caloric theory reveals that beliefs about the properties of material caloric, rejected by subsequent theories, were indeed central to the successes of the caloric theory" (p. 902). Turning to case (2), Ladyman (2011: 95) argues, "It is not plausible to claim that 'phlogiston' was not central to phlogiston theory". Of course, Ladyman is a realist, and he does think phlogiston theory is compatible with realism, but only with a certain sort of structural realism (see below). Turning to case (3), several authors have argued that even selective realists can't avoid realist commitment to the ether without making a mockery of realism (e.g. Stanford 2006: section 7.2). Turning to case (4), Pashby (2012) is certainly sensitive to the selective realist strategy, writing, "accounting for this case with the 'divide and conquer' strategy of contemporary scientific realism proves particularly troublesome, even for the structural realist" (p. 440) and arguing in particular that "the assumption that all the negative energy states of hole theory were filled by electrons was essential to predict the existence of positrons" (p. 462). Turning to case (5), Saatsi and Vickers (2011) are themselves selective realists, and yet they state that at least one of Kirchhoff's assumptions is (i) certainly doing work and (ii) definitely not approximately true. Finally, turning to case (6), Vickers (2012) argues that "the selective realist strategy . . . fails for Sommerfeld's prediction of the fine-structure formula" (p. 16).
I have of course cherry-picked the literature in the previous paragraph. The point is just that some philosophers have come to the conclusion that the selective realist strategy doesn't work for cases (1) through (6). Or at least that it is very difficult to see how it could work. There are two parts to the claim: (i) that the assumptions are NEAT and (ii) that the assumptions are genuinely 'doing work'. The first claim is the easiest to establish, even without delving into theories of approximate truth. For example, nobody tries to argue that claims such as 'temperature is the density of caloric' and 'caloric is a self-repelling material substance' are (somehow) approximately true. Similarly with claims such as 'the luminiferous ether is an all-pervading medium that supports light waves' and 'the electron orbits the hydrogen nucleus in relativistically adjusted elliptical orbits'.
A more promising avenue for selective realists concerns the second claim: realists might put forward a sophisticated account of what it means for a theoretical claim to be 'doing work', such that sometimes, although an assumption very much seems to be doing work to bring about the relevant scientific success, there is an important sense in which the assumption is nevertheless idle. For example, when Bohr derived the Rydberg constant and the ionised helium spectral lines using his 1913 atomic theory, he drew on the assumption that electrons in atoms sit in continuous-worldline orbital trajectories around the nucleus. Arguably, this is a NEAT assumption. However, although Bohr used this assumption, there is an important sense in which it wasn't doing work vis-à-vis his successes. As Norton (2000: 86–87) puts it, "no assumption is . . . needed that these stationary states are elliptical orbits of some definite size and frequency of localised electrons". That is, Bohr was making an assumption which can be removed from the derivations of his stand-out successes without affecting the result in any way (cf. Vickers 2012: 10). Thus the realist is not motivated to make a realist commitment to the NEAT assumption in question.
If this is accepted as a valid realist response, then the fact that a scientist makes considerable use of an assumption does not mean the realist must commit to it (pace Lyons 2006).9 Can this sort of move save the realist from any of the cases (1) through (6)? Take the case of the luminiferous ether. Can a realist insist that although scientists in the 19th century assumed the ether exists, made regular reference to it, and made assumptions concerning its properties to derive certain results, it can nevertheless be eliminated from those derivations without affecting the results? This move has been attempted by more than one selective realist, including Kitcher (1993: 145–149) and also Saatsi (2005: 525ff.), who argues that although assumptions concerning the ether were "heuristically crucial", they were nevertheless dispensable because Fresnel's derivation depends only on certain "theoretical properties" which can be associated with a material ether but which can also be associated with electromagnetic fields. However, Stanford's (2006: section 7.2) response to Kitcher (1993) also stands as a response to Saatsi (2005): perhaps logically speaking the realist needn't be committed to the ether, but in practice for scientists living at the time it was 'unintelligible' to suppose that light could behave exactly like a wave but without needing any medium to wave in. As Stanford (2006: 171) puts it,

If the alternative that eschews appeal to a given theoretical posit and thereby undermines its confirmation by the successes of the theory that posits it need not even be intelligible to the scientists who hold that theory, then the realist seems to lose any hope whatsoever of insulating any theoretical posits of current science from idleness.

The realist might reply at this point that (un)intelligibility should not affect our realist commitments, especially when it comes to fundamental physics. But Stanford (ibid.) pre-empts this response and insists that this move would render 'idle' many assumptions of contemporary science that realists often insist we should believe in, including atoms.

7 Further options for the realist
This is hardly the end of possible realist responses to cases (1) through (6). One option is to be a structural realist, in which case giving up commitment to the existence of atoms (where this is interpreted non-structurally) is not so dramatic. Votsis and Schurz (2012) argue that there is preservation at the level of 'structure' across theory change in case (1), Ladyman (2011) argues for a structural realist response to case (2), Worrall (1989) argues for a structural realist response to case (3), and Vickers (2012) draws on Biedenharn (1983) to argue that there may be a way to see continuity across theory change of 'highly abstract structure' in case (6). On the other hand, Saatsi and Vickers (2011) argue that a structural realist response to case (5) is far from obvious, and Pashby (2012) similarly sees no convincing structural realist response to case (4).
Perhaps there can be a realist response to case (5): Vickers (2013b, 2016) expresses some reservations concerning some of the claims in Saatsi and Vickers (2011) and goes some way towards providing a (realist?) response to this particular historical challenge. Although the debate around cases (4) and (5) is far from settled, we can ask, bearing these examples in mind, the following important question: could the CSSR realist be happy with just one or two problematic cases from the history of science? Here opinions are very much divided. Psillos (1999: 108) and Ladyman (2002: 244; 2011: 94) are two realists who state that just one or two problematic cases would be too many, whereas Chang (2003) is a non-realist who states, "One case, of course, does not have much force as empirical evidence" (p. 910). Harker (2013: 101) sees phlogiston theory as problematic for his particular brand of selective realism but agrees with Chang on this issue, writing, "a single counterexample is inadequate to undermine my thesis". One thing seems certain: the threat to realism intensifies as the number of serious historical challenges increases.
The question remains whether there are indeed a large number of very challenging cases, as (1) through (6) – and especially perhaps (4) and (5) – appear to be. Here it is important to note that the realism debate has only just scratched the surface of the history of science, and there may well be lots of other problematic examples hiding in the woodwork. This is particularly obvious given all the new cases introduced to the debate just within the past 10 years, including cases (4) through (6), the Ptolemaic astronomy case (Díez and Carman 2015), cases introduced by Kyle Stanford (e.g. J. F. Meckel's prediction of gill slits in the human embryo – see Vickers 2015), cases introduced by Timothy D. Lyons (e.g. Lyons 2006, 2016), a case recently introduced by Dana Tulodziecki (Tulodziecki 2016), and other possible cases listed in Vickers (2013b). Thus one or two known problematic cases might well suggest that there are many problematic cases when the whole history of science is considered.
Overall, then, is the realist faced with a serious historical challenge? It is important to note that one can maintain the realist spirit while adopting a position even more modest than the realist positions considered here. Such is the focus of Saatsi (2015), who argues for a position he dubs 'minimal realism', focusing on what he calls 'fundamental science' (especially fundamental physics). The main difference between minimal realism and the realisms considered earlier is that one believes in theoretical progress despite scientific change. According to this sort of realism, one may only be able to see in hindsight which parts/aspects of the rejected theory were 'doing work' and are appropriately related to the successor theory. Thus the sort of 'prospective identification of working posits' sought after by most realists (e.g. Votsis 2011a) and demanded by most anti-realists (e.g. Stanford 2006: 169) is rejected. One cannot say in advance of the next revolution just which parts of our current best scientific theories (concerning fundamental science) deserve our doxastic commitment. Minimal realism is of course an extremely modest realism, and no doubt many antirealists would see a move to this position as a victory for antirealism. But Saatsi counters this idea: just because various forms of realism may be undermined by the historical record does not mean one should jump to empiricism or instrumentalism. More modest realist positions are available, motivated, and (perhaps!) consistent with all known historical evidence.

8 Conclusion
What we want to find out is which positions are more plausible given the historical cases. We can't expect, of course, to use history to falsify realist positions: just as in science, no strict 'falsification' will be possible, but at the same time it goes without saying that in philosophy we never expect absolutely conclusive results. Since Laudan (1981) we've already seen significant progress in the way the historical record has helped realists to adjust their position. And cases introduced more recently, such as (4) through (6), are now helping us to see how realism must evolve still further if it is to have a future. The historical challenge becomes a more informed challenge as more of the relevant history is uncovered, helping us to see more clearly which realist positions are in significant tension with the historical record and which positions have a fighting chance. When it comes to progress in philosophy, one cannot ask for much more.

Notes
1 See F. Azhar and J. Butterfield, 'Scientific realism and primordial cosmology,' ch. 24 of this volume.
2 See P. Kyle Stanford, 'Unconceived alternatives and the Strategy of Historical Ostension,' ch. 17 of this volume.
3 In its heyday, the humoral theory of medicine was widely taken to explain a range of phenomena. If we think it really did not explain any of those phenomena, that doesn't affect its significance for the realism debate, since the same fate could await any contemporary theory.
4 'Selective' realism comes in a wide variety of forms, e.g. Worrall (1989), Psillos (1994, 1999), Saatsi (2005), Ladyman (2011), Votsis (2011b), Harker (2013), Vickers (2013b), Peters (2014), and Egg (2016) drawing on Chakravartty (1998).
5 By 'not even approximately true' (NEAT) here and in what follows I simply mean an assumption which cannot reasonably be described as approximately true, for any plausible theory of 'approximate truth'.
6 Howson (2000: 57, 2013) and Magnus and Callender (2004) make this point and provide further discussion.
7 The distinction between subjective and objective Bayesianism becomes important here.


8 For one extremely pertinent example see Kuhn (1962: 81).
9 There may even be an important sense in which an assumption is 'doing work' without it thereby meriting realist commitment. See Vickers (2017).

References
Best, N. W. (2015) "Lavoisier's 'Reflections on Phlogiston' I: Against Phlogiston Theory," Foundations of Chemistry 17, 137–151.
Biedenharn, L. C. (1983) "The 'Sommerfeld Puzzle' Revisited and Resolved," Foundations of Physics 13(1), 13–34.
Brössel, P. and Huber, F. (2015) "Bayesian Confirmation: A Means with No End," British Journal for the Philosophy of Science 66(4), 737–749.
Chakravartty, A. (1998) "Semirealism," Studies in History and Philosophy of Science 29(3), 391–408.
Chang, H. (2003) "Preservative Realism and Its Discontents: Revisiting Caloric," Philosophy of Science 70, 902–912.
Cordero, A. (2011) "Scientific Realism and the Divide et Impera Strategy: The Ether Saga Revisited," Philosophy of Science 78(5), 1120–1130.
Dicken, P. (2013) "Normativity, the Base-Rate Fallacy, and Some Problems for Retail Realism," Studies in History and Philosophy of Science 44, 563–570.
Díez, J. A. and Carman, C. (2015) "Did Ptolemy Make Novel Predictions? Launching Ptolemaic Astronomy into the Scientific Realism Debate," Studies in History and Philosophy of Science 52, 20–34.
Dorling, J. (1992) "Bayesian Conditionalization Resolves Positivist/Realist Disputes," Journal of Philosophy 89(7), 362–382.
Egg, M. (2016) "Expanding Our Grasp: Causal Knowledge and the Problem of Unconceived Alternatives," British Journal for the Philosophy of Science 67(1), 115–141.
Fahrbach, L. (2011) "Theory Change and Degrees of Success," Philosophy of Science 78(5), 1283–1292.
Forster, M. (1995) "Bayes and Bust: Simplicity as a Problem for a Probabilist's Approach to Confirmation," British Journal for the Philosophy of Science 46, 399–424.
Harker, D. (2013) "How to Split a Theory: Defending Selective Realism and Convergence without Proximity," British Journal for the Philosophy of Science 64(1), 79–106.
Howson, C. (2000) Hume's Problem, New York: Oxford University Press.
——— (2013) "Exhuming the No Miracles Argument," Analysis 73(2), 205–211.
Kinzel, K. (2015) "Narrative and Evidence: How Can Case Studies from the History of Science Support Claims in the Philosophy of Science?" Studies in History and Philosophy of Science 49, 48–57.
Kitcher, P. (1993) The Advancement of Science: Science without Legend, Objectivity without Illusions, Oxford: Oxford University Press.
Kuhn, T. (1962) The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
Ladyman, J. (2002) Understanding Philosophy of Science, London: Routledge.
——— (2011) "Structural Realism versus Standard Scientific Realism: The Case of Phlogiston and Dephlogisticated Air," Synthese 180(2), 87–101.
Laudan, L. (1981) "A Confutation of Convergent Realism," Philosophy of Science 48, 19–49.
Lyons, T. D. (2006) "Scientific Realism and the Stratagema de Divide et Impera," British Journal for the Philosophy of Science 57, 537–560.
——— (2016) "Structural Realism versus Deployment Realism: A Comparative Evaluation," in C. Haufe (ed.), Studies in History and Philosophy of Science Part A, special issue 'Testing Structural Realism'. URL: http://dx.doi.org/10.1016/j.shpsa.2016.06.006
Magnus, P. D. and Callender, C. (2004) "Realist Ennui and the Base Rate Fallacy," Philosophy of Science 71(3), 320–338.
Milne, P. (2003) "Bayesianism v. Scientific Realism," Analysis 63(280), 281–288.
Norton, J. (2000) "How We Know about Electrons," in R. Nola and H. Sankey (eds.), After Popper, Kuhn and Feyerabend, Dordrecht: Kluwer, pp. 67–97.
Pashby, T. (2012) "Dirac's Prediction of the Positron: A Case Study for the Current Scientific Realism Debate," Perspectives on Science 20(4), 440–475.
Peters, D. (2014) "What Elements of Successful Scientific Theories Are the Correct Targets for 'Selective' Scientific Realism?" Philosophy of Science 81, 377–397.
Pitt, J. C. (2001) "The Dilemma of Case Studies," Perspectives on Science 9, 373–382.


Poincaré, H. ([1905] 1952) Science and Hypothesis, reprint of the first English translation, originally published as La Science et L'Hypothèse (Paris, 1902), New York: Dover.
Priestley, J. (1783) "Experiments Relating to Phlogiston," Philosophical Transactions of the Royal Society of London 73, 398–434.
Psillos, S. (1994) "A Philosophical Study of the Transition from the Caloric Theory of Heat to Thermodynamics: Resisting the Pessimistic Meta-Induction," Studies in History and Philosophy of Science 25(2), 159–190.
——— (1999) Scientific Realism: How Science Tracks Truth, London and New York: Routledge.
Rohrlich, F. and Hardin, L. (1983) "Established Theories," Philosophy of Science 50, 603–617.
Saatsi, J. (2005) "Reconsidering the Fresnel-Maxwell Case Study," Studies in History and Philosophy of Science 36, 509–538.
——— (2015) "Historical Inductions, Old and New," Synthese. doi:10.1007/s11229-015-0855-5
Saatsi, J. and Vickers, P. (2011) "Miraculous Success? Inconsistency and Untruth in Kirchhoff's Diffraction Theory," British Journal for the Philosophy of Science 62(1), 29–46.
Schickore, J. (2011) "More Thoughts on HPS: Another 20 Years Later," Perspectives on Science 19(4), 453–481.
Schurz, G. (2011) "Structural Correspondence, Indirect Reference, and Partial Truth: Phlogiston Theory and Newtonian Mechanics," Synthese 180(2), 103–120.
Stanford, P. K. (2006) Exceeding Our Grasp, Oxford: Oxford University Press.
——— (2009) "Author's Response," in "Grasping at Realist Straws," a review symposium of Stanford (2006), Metascience 18, 379–390.
Tulodziecki, D. (2016) "From Zymes to Germs: Discarding the Realist/Anti-Realist Framework," in T. Sauer and R. Scholl (eds.), The Philosophy of Historical Case Studies (Boston Studies in the Philosophy and History of Science 319), Dordrecht: Springer, pp. 265–283.
Vickers, P. (2012) "Historical Magic in Old Quantum Theory?" European Journal for Philosophy of Science 2(1), 1–19.
——— (2013a) Understanding Inconsistent Science, Oxford: Oxford University Press.
——— (2013b) "A Confrontation of Convergent Realism," Philosophy of Science 80(2), 189–211.
——— (2014) "Scientific Theory Eliminativism," Erkenntnis 79, 111–126.
——— (2015) "Contemporary Scientific Realism and the 1811 Gill Slit Prediction," a blog post for Auxiliary Hypotheses, the blog of the British Journal for the Philosophy of Science. URL: http://thebjps.typepad.com/my-blog/2015/06/
——— (2016) "Why Kirchhoff's Approximation Works," in K. Hentschel and Ning Yan Zhu (eds.), Gustav Robert Kirchhoff's Treatise "On the Theory of Light Rays" (1882), Singapore: World Scientific, pp. 125–142.
——— (2017) "Understanding the Selective Realist Defence against the PMI," Synthese 194, 3221–3232.
Votsis, I. (2011a) "The Prospective Stance in Realism," Philosophy of Science 78(5), 1223–1234.
——— (2011b) "Structural Realism: Continuity and Its Limits," in A. Bokulich and P. Bokulich (eds.), Scientific Structuralism (Boston Studies in the Philosophy and History of Science), New York: Springer, pp. 105–117.
Votsis, I. and Schurz, G. (2012) "A Frame-Theoretic Analysis of Two Rival Conceptions of Heat," Studies in History and Philosophy of Science 43, 105–114.
Worrall, J. (1989) "Structural Realism: The Best of Both Worlds?" Dialectica 43, 99–124.

5 UNDERDETERMINATION

Dana Tulodziecki

1 Introduction The simple idea underlying discussions of underdetermination is that the available empirical evi- dence is always compatible with more than one theory and so can never single out a particular scientific theory as (approximately) true, justified, or more believable than others, depending on the version one endorses. For example, seeing my friend John go through airport security is compatible with his going to either New York or Rhode Island, with his changing his mind and turning back around, and even with his engaging in an elaborate ruse to mislead me. In philoso- phy of science, a rough and general version of the underdetermination argument runs as follows: since the available observable evidence always supports (at least) two scientific theories, only one of which can be true, and since our only reason for believing our scientific theories to be true is the observable evidence on which they are based, we can never have any epistemic reason to choose one of these theories over the others. The observable evidence underdetermines theo- ry-choice. Moreover, since, typically, scientific theories are what is supposed to give us scientific knowledge, it follows that the prospects for this are quite dire as well, since, without theories, there is no such knowledge. Underdetermination arguments have been used to argue for a variety of conclusions; here, however, I will restrict myself to discussing their role in1 There,the realism-debate.a standard and more precise way of arguing for underdetermination has been in terms of the following two premises (this presentation is based on Kukla 1998; Psillos 1999): first, the Empirical Equivalence Thesis (EET), which states that any theory has logically incompatible and empirically equiva- lent rivals; second, the Entailment Thesis (ET), which states that entailment of the evidence is the only epistemic constraint on theory-choice. Anti-realists conclude from these two premises that belief in any particular scientific theory, including current ones, is epistemically illegitimate since, according to the argument, we have no epistemic reason to prefer any one theory to any other: the empirical evidence can never help us make such a choice – although, of course, we may choose for pragmatic or aesthetic reasons – and the empirical evidence is all that mat- ters.2 The reason this conclusion is worrying specifically for scientific realists is that it targets directly the realist’s claim that what our best scientific theories tell us is approximately true, with respect to both observables and unobservables. However, if we cannot get empirical evidence for claims involving unobservables, and empirical evidence is the only factor that could justify us in

believing such claims, then we cannot be realists about the unobservables they feature – and that was the whole point of realism in the first place. Further, anti-realists typically seek to provide a general argument for underdetermination, applicable to any scientific theory at any time and not contingent on specific cases: they want to show that realists can never, even in principle, be justified in believing theories making reference to unobservables – as most scientific theories do – because the weaker claim that only some theories are underdetermined is, as such, no threat to realism. Realists are happy to admit that, if the circumstances are right, there may be underdetermination and that in those circumstances we ought to withhold judgement – it's just that they don't think this phenomenon is either ubiquitous or widespread.

In this chapter, I will argue that there is little hope for wholesale underdetermination arguments and that the real threat of underdetermination is visible only locally, through specific cases. Further, I will show that the various discussions about underdetermination, individually and jointly, support this conclusion. I will begin, in section 2, by discussing the Empirical Equivalence Thesis. Section 3 focuses on the Entailment Thesis and is followed, in section 4, by a discussion of some recent work that does not fall straightforwardly into either one of these categories. I conclude in section 5.
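For reference in what follows, the two-premise argument just introduced can be set out schematically. The reconstruction and notation below are mine, not Kukla's or Psillos's: Emp(T) abbreviates T's class of empirical consequences.

\begin{align*}
\textbf{(EET)} \quad & \forall T\,\exists T^{*}\,\bigl(T^{*}\text{ is logically incompatible with }T \;\wedge\; \mathrm{Emp}(T^{*}) = \mathrm{Emp}(T)\bigr)\\
\textbf{(ET)} \quad & \mathrm{Emp}(T_{1}) = \mathrm{Emp}(T_{2}) \;\rightarrow\; T_{1}\text{ and }T_{2}\text{ are equally believable}\\
\textbf{(UD)} \quad & \therefore\; \forall T\,\exists T^{*}\,\bigl(T^{*}\text{ is logically incompatible with }T \;\wedge\; T\text{ and }T^{*}\text{ are equally believable}\bigr)
\end{align*}

On this reconstruction, (UD) is the anti-realist conclusion: some incompatible rival is always just as believable as any given theory, so the evidence never licenses belief in one theory over all of its rivals.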

2 The Empirical Equivalence Thesis

As we have seen, according to the Empirical Equivalence Thesis (EET), any theory always has logically incompatible and empirically equivalent rivals. That the theories in question be logically incompatible is clearly a requirement for underdetermination: if they were not, there would be no need to choose between them; they might simply be versions of the same theory or mere notational variants of each other. Thus, this requirement immediately rules out as worrisome competitors theories in which, for example, some terms have been permuted, a theory's Ramsey sentence, or versions arrived at by Craig's method.3 A common (but not the only) way of characterising empirical equivalence is in terms of empirical indistinguishability. According to this, two (or more) theories are empirically equivalent just in case there is no piece of observational data capable of distinguishing between them (for some different senses of indistinguishability, see Earman 1993). Restricting this solely to actual data gives rise to so-called transient underdetermination; extending it to all possible data gives rise to permanent underdetermination.

The temporary empirical equivalence of transient underdetermination occurs when two theories are currently empirically indistinguishable but when there exists a (possibly unknown) piece of observational data compatible with only one of the theories. If this datum were obtained, it would thereby single out one of the candidates as empirically superior, and underdetermination would be broken. Transient underdetermination of this kind is uncontroversial and amply attested in the history of science. It is also familiar from, if not characteristic of, everyday science, in the sense that much of the scientific enterprise consists of attempts to decide among competing hypotheses, for example by designing and performing experiments specifically for this purpose. And while this kind of underdetermination might pose severe practical problems (and for all practical purposes be permanent), there is nothing particularly epistemologically puzzling or difficult about it. It is clear, at least in principle, how to escape it – all we need to do is come by the relevant data. As a result, this version of underdetermination is usually not taken to be strong enough to undercut the realist's epistemic thesis, since it cannot establish the in-principle epistemological arbitrariness of theory-choice that was supposed to be the conclusion of the original argument (but see the discussions of Stanford and Turner in section 4).

It is for this reason that what is typically at stake in the realism debate is permanent underdetermination. According to this, two theories are empirically equivalent just in case they either

share the same observational consequence classes (on a syntactic view of theories) or the same class of empirical models (on a semantic view).4 Thus, no matter how much time and effort is invested, there never has been, is not, and never will be a piece of observational data that is capable of distinguishing among the various rivals, even in principle: they stand or fall together.5

A classic example of this has been that of Newtonian mechanics with the center of the universe at absolute rest on the one hand and with the center of the universe moving with an absolute constant velocity on the other. This example, a staple in the contemporary literature (see van Fraassen 1980; Laudan and Leplin 1991; Earman 1993), goes back to Newton himself. Even though Newton believes that "the center of the system of the world is immovable" ([1687] 1966: 419), he also acknowledges that "[t]he motions of bodies included in any given space are the same among themselves, whether that space is at rest, or moves uniformly forwards in a right line without any circular motion" ([1687] 1966: 20). Since adding a constant factor to all velocities does not change the differences between absolute motions, as Newton himself points out, any theory that claims that the center of the universe is at rest with respect to absolute space is empirically equivalent to another theory that claims that the center of the universe is moving with an absolute constant velocity v. In fact, since v can take any one of an infinite number of values, there are infinitely many rivals to the original theory, corresponding to the different possible absolute uniform velocities vi.6 Other examples are given in Earman (1993: 31), in Manchak (2009), who has argued that there is a "robust sense in which the global structure of every cosmological model is underdetermined" (53), and in Fraser (2009), who has argued that the variants of quantum field theory are empirically indistinguishable and constitute a genuine example of underdetermination.

There are two main lines of argument supposed to guarantee that such permanently empirically equivalent competitors are always available for any theory: the first seeks to generate rivals by relying on the Duhem-Quine Thesis, the second by relying on various kinds of algorithms. According to the Duhem-Quine Thesis, theories entail their empirical consequences only with the help of auxiliary assumptions or theories.7 This is often taken to establish the EET, since it is a consequence of the Duhem-Quine Thesis that, given the "right" auxiliaries, theories can accommodate any evidence whatsoever and, in particular, evidence that is entailed by a theory in need of an empirically equivalent rival. Put more precisely, the claim is that for any theory T and its associated body of empirical data E and set of auxiliary assumptions A, it is only the conjunction T&A that entails E. However, for any rival theory T*, there is always a set of auxiliaries A* (specifically engineered for this purpose, if need be) such that the conjunction T*&A* also entails E and is therefore empirically equivalent to the original conjunction T&A (the construction is displayed schematically below). As a result, there is now no observation that can ever decide between T&A and T*&A*, just as is claimed in the EET.

This way of generating empirical equivalence has been disputed by Laudan and Leplin (1991), who have argued that, even if two theories are empirically equivalent at present, there is no guarantee that they will remain so in the future.
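Before turning to their arguments, it may help to display the Duhem-Quine construction just described in compact form. This is only a schematic sketch, using the T, A, E notation from the text:

\begin{align*}
T \wedge A &\models E && \text{(the original theory entails the data only together with its auxiliaries)}\\
T^{*} \wedge A^{*} &\models E && \text{(the rival, given suitably engineered auxiliaries, entails the same data)}
\end{align*}

Since every member of E follows from both total packages, no observation in E can discriminate between them; the question Laudan and Leplin press is whether this parity survives changes in the auxiliaries.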
The first of their two arguments to this effect trades on what they call the "variability of the range of the observable" (451ff.): since science and technology constantly extend the range of what is observable at any given point in time, entities that are now unobservable may become observable. As a result, theories' empirical consequence classes may diverge and so render currently empirically equivalent theories empirically distinguishable. A significant problem with Laudan and Leplin's argument, however, is that it relies on a notion of observability rejected by many anti-realists. According to Laudan and Leplin (and most realists), observability amounts to, roughly, what is detectable by means of scientific instruments, which is a much broader notion of observability than the one anti-realists are willing to grant. Van Fraassen, for example, explicitly restricts observability to what can be observed by the unaided senses (1980: 15ff.), and so Laudan and Leplin's argument simply does not apply to him, since extending the range of what is detectable does not change what is observable.


Further, it has been argued that, even if we accept Laudan and Leplin's argument, we can still formulate the underdetermination argument at the level of total science, where the unit of underdetermination is the entire state of science at a given time (see Kukla 1998: 63–66; for a response, see Leplin 1997; Okasha 2002). Since total science includes claims about what is and is not observable in principle, what is observable does not change. The most that could happen is that we were mistaken about what we thought constituted empirical equivalence in the first place: instead of the range of the observable becoming extended, we were simply wrong about what was observable, misled by relying on a restricted low-level notion of observability when we should have adopted a high-level one. An analogous argument can also be made for complete theories, that is, theories exhausting all physical facts about the world, including those about the limits of observability.

Laudan and Leplin's second argument against empirical equivalence turns the Duhem-Quine Thesis on its head (452ff., see also Ellis 1985). Drawing attention to the "instability of auxiliary assumptions," they argue that, far from guaranteeing empirical equivalence, the Duhem-Quine Thesis actually shows that doing so is impossible. Since theories entail their observational consequences only with the help of auxiliaries, Laudan and Leplin point out that what the observational consequences of any given theory are also depends on what auxiliaries are in play. It is precisely the fact that by varying the auxiliaries one can change a theory's empirical consequences at will that was supposed to guarantee empirical equivalence, since it was this manoeuvre that allowed calibration of T*&A* in just the right way. Laudan and Leplin then go on to argue that as science progresses, a theory's auxiliaries change and so, even if two theories have the same empirical consequences in conjunction with their current auxiliaries, there is no way of knowing, much less a way of guaranteeing, that they will still have the same empirical consequences once their current auxiliaries are superseded. As before, however, underdetermination at the level of total science or complete theories is immune to Laudan and Leplin's point, since by definition total science and complete theories do not have auxiliaries. Moreover, at lower levels the Duhem-Quine Thesis still ensures that two theories that may have become empirically distinguishable by different sets of auxiliaries can be made empirically equivalent again, simply by modifying the loser's auxiliaries in the required way. Indeed, one might argue that it was precisely this fact – that this can be pulled off on any theory at any time – that made this strategy such a good candidate for establishing the EET to begin with.

The second main strategy in establishing the EET has been the invocation of algorithmically constructed rivals. For example, Kukla, a champion of this approach, thinks that universal algorithms of some sort or other are both necessary and sufficient for this purpose (1998: ch. 5). Among the contenders Kukla considers are the view that the world is as we think it is when observed but behaves differently when it is not (1993), the view that the world behaves as if a certain theory T were true even though it isn't, and the view that the empirical consequences of T are true while T itself is not (Kukla attributes this last example to van Fraassen [1980]).
A more elaborate candidate is Kukla's story of the Makers, beings that created a universe containing the same observable and unobservable entities as ours but that depends on a Maker-created machine that is periodically turned off (1998: 75–76). Clearly, whatever our theory T is, all of the foregoing will be, by design, empirically equivalent to it.

The standard realist rejoinder to this approach has been to counter that none of the discussed cases ought to be regarded as proper rivals or even scientific theories and that, to the extent that these strategies count at all, they provide at most a trivial and entirely non-threatening proof of the EET. The main problem here, however, has been the realists' inability to articulate just what ought to count as a proper rival. Laudan and Leplin propose that rivals must not be parasitic on the original, the way all of the named cases clearly are, but Kukla, in response, has argued that

the parasitism-criterion is too strong in also excluding rivals that we would consider perfectly scientifically credible (for the full exchange, see Laudan and Leplin 1993; Kukla 1993, 1998: 68ff.). Not only has this debate not been settled, it might, moreover, turn out that there is no 'propriety-criterion' applicable across different kinds of cases. The criteria that make a competitor serious for the highly abstract theories of fundamental physics might look very different from those that make good competitors for the much more concrete theories of the life sciences; for example, while the Everett interpretation may be taken seriously in quantum mechanics, its analogue might well be rejected as a reasonable contender in medicine.

Regardless of how and whether this discussion gets resolved, there are independent reasons for rejecting both algorithms à la Kukla and the artificial constructions of the Duhem-Quine strategy. Recall that anti-realists want to target specifically the realist's claim that we can be justified in believing what our theories say about unobservables. And while the algorithms do attack that claim, the trouble is that they also attack the corresponding claim about observables and even the observed, both of which, of course, were never under debate in the first place (van Fraassen is aware of the sceptical challenge; see 1980: 71–73). The problem with the kinds of algorithms Kukla offers is that they support a much more generalised version of the EET and a correspondingly general underdetermination argument that, instead of targeting solely and specifically scientific knowledge, targets all empirical knowledge (see Stanford 2001). As a result, the only conclusions it can sustain are those resulting from the general argument, and these do not carry any force against scientific knowledge in particular. What's worse, it is not clear how defenders of the generalised EET can avoid this predicament. They have only two options: either they admit that we are justified in believing in our scientific theories, just as we are justified in believing in other empirical knowledge claims, or else they remain sceptical about the scientific case and keep insisting that we are not justified in believing whatever it is that our scientific theories tell us about unobservables. The cost of the first is realism; the cost of the second is that – by their own criteria – we are no longer justified in believing anything empirical at all. Thus, while the second option has the advantage of undermining scientific realism about our current theories, it comes at the expense of delivering the same conclusion for realism about everyday, macroscopic objects. Moreover, it makes the underdetermination argument completely superfluous, since the old and familiar general sceptical argument already gets the job done, and quite thoroughly at that.

It is for this reason that the generalised version of the EET is untenable for anti-realists. What they require is a domain-specific argument against scientific knowledge-claims involving unobservables, without this argument also affecting other kinds of knowledge. But algorithms, because of their lack of directedness, fail – and, indeed, must fail – to provide the kind of support for the EET that anti-realists need. The argument needs to be specific enough to shed some light on why scientific knowledge in particular is vulnerable in the intended way.
This leaves the anti-realist with the option of moving away from the generalised EET and imposing some kind of restriction on what constitutes an appropriate empirically equivalent candidate in the context of the argument as a whole. However, this doesn't look promising, either: the whole point of premise 2, the Entailment Thesis, is precisely to establish that a theory's observational consequences are all that matters for its epistemic evaluation. As a result, if the applicability of the first premise is now restricted to include only theories that also fulfil certain other conditions, whatever they may be, this restriction will entail a corresponding violation of the second premise, without which the argument doesn't go through, either.

A last point to note is that even if the EET is granted and all worries about what counts as a proper rival are set aside, assessing the strength of a given rival inevitably involves some local, case-specific considerations, even for anti-realists. No matter what the proof of the EET, it is impossible to determine in advance just how bad a given case of underdetermination is. Even

with a principled argument for the EET, not all cases of underdetermination will be of the same caliber, since a given rival need not affect different theories equally. To give a quick illustration, rival R may have the same empirical consequences as both T1 and T2 yet underdetermine T1 to a greater degree than T2, as would be the case, for example, when R and T2 enjoy a great degree of non-observational overlap, whereas R and T1 are different in almost every respect. It is exactly this fact that makes the Newtonian case so unexciting. The upshot here is that, in order to assess the extent and severity of a given case of underdetermination, the relationship between the underdetermined and underdetermining theories clearly matters. But this is something that cannot be determined without examining the relevant theories in some detail. In response, anti-realists may try to use an algorithm designed to underdetermine every single part of a given theory. Since this is easy to accomplish with suitably modified sceptical hypotheses, however, it would be a consequence of this manoeuvre that sceptical hypotheses could be worse underdeterminers (i.e. constitute worse "threats") than serious and very real scientific rivals that only partially underdetermine a theory.

To sum up: even if every theory has empirically equivalent rivals, this tells us nothing about how badly underdetermined any particular theory is. Contra Kukla, one cannot simply prove the EET once and for all and never worry about it again, because the answer to the question of how concerned we should be about the rivals in question can only be determined on a case-by-case basis.

3 The Entailment Thesis

Let's set aside worries about empirical equivalence and move on to the Entailment Thesis (ET). According to this, as we have seen, entailment of the empirical data is the only epistemic constraint on theory-choice, or, put slightly differently, empirically equivalent theories are equally believable. It is worth stressing that both realists and anti-realists agree that what is required for underdetermination is epistemic equivalence – they just disagree about what this consists in. And since, for anti-realists, there is nothing more to it than empirical equivalence, for them the main role of the ET is to explicitly rule out that there may be other criteria capable of making an epistemic difference in theory-choice. Despite this, anti-realists are of course happy to make choices; it's just that when they do, they claim they are making pragmatic or aesthetic decisions and not epistemic ones.

Responses to the ET fall into two main categories: first, argue that, even if we grant the ET and restrict ourselves solely to empirical data, underdetermination does not follow; second, deny the ET and argue that there are factors other than the empirical data that are epistemically significant.

The classic version of the first response also goes back to Laudan and Leplin (1991), who argue that "being an empirical consequence of a hypothesis is neither necessary nor sufficient for being evidentially relevant to [it]" (460–461). It is not sufficient, because a theory may receive indirect support from data outside its observational consequence class, and it is not necessary, because a theory might have empirical consequences without those consequences thereby lending it support. Laudan and Leplin use a number of historical examples to illustrate just how a theory may receive evidential support from data outside its observational consequence class, but the general schema is as follows: assume a theory T entails two statements, H1 and H2. Further, evidence e, which is entailed by H2 but not by H1, turns out to confirm H2. Laudan and Leplin now argue that, since e supports H2, H2 supports T, and T also supports H1, H1 receives indirect confirmation from e, despite the fact that it does not entail e. Moreover, if there were some third hypothesis G, empirically equivalent to H1 but not entailed by T, H1 would beat out G, since H1 receives indirect confirmation that G does not (for a discussion of underdetermination in the case of empirically equivalent models and the role of indirect evidence in this context, see Werndl 2013).

On the flipside, not all members of a theory's observational consequence class are confirmatory. To use Laudan and Leplin's own example, the hypothesis that scripture reading induces puberty in young males is not supported by the fact that test subjects, after several years of scripture reading, are found to be pubescent. The upshot here is the well-known point that not all of a theory's positive instances are confirming instances, and Laudan and Leplin end up concluding that

[N]o philosopher of science is willing to grant evidential status to a result e with respect to a hypothesis H just because e is a consequence of H. That is the point of two centuries of debate over such issues as the independence of e, the purpose for which H was introduced, the additional uses to which H may be put, the relation of H to other theories, and so forth.
(466)
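The pattern of indirect support invoked in Laudan and Leplin's general schema above can be displayed compactly. This is only a sketch using the labels from the text, with ⊨ marking entailment and the arrows marking relations of confirmational support rather than entailment:

\begin{align*}
& T \models H_{1}, \qquad T \models H_{2}, \qquad H_{2} \models e, \qquad H_{1} \not\models e,\\
& e\text{ confirms }H_{2} \;\Longrightarrow\; H_{2}\text{ supports }T \;\Longrightarrow\; T\text{ supports }H_{1}.
\end{align*}

Thus e lends indirect support to H1 even though e lies outside H1's observational consequence class, whereas a hypothesis G empirically equivalent to H1 but not entailed by T receives no such support.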

Despite the initial plausibility of Laudan and Leplin's response to the ET, Okasha (1997) has argued that it commits them to both Hempel's (1945) converse consequence condition and special consequence condition, which jointly imply that any piece of data confirms any hypothesis – obviously an unwelcome conclusion, especially in the context of underdetermination. This result may be avoided; but, to do so, one needs to articulate a fuller picture of empirical support relations and to provide an account of just when empirical consequences do and do not lend support to a given theory. One may add to Laudan and Leplin's response by pointing out that even if two theories are confirmed by exactly the same observational consequences, it does not follow from this that these consequences confirm the theories to the same degree. Here, too, how good a response this is depends on the details of the more fully worked-out picture of empirical support that goes along with it. (Earman [1993] explores this issue in the context of Bayesianism, and Massimi [2004] uses the case study of 1920s spectroscopy to show how this might be done in the framework of demonstrative induction; see also Mayo [1997], who has argued that not all hypotheses are tested equally severely by a given piece of evidence.)

A prominent response to the ET falling into the second category has been the invocation of the so-called theoretical virtues, allegedly truth-conducive properties that lend epistemic weight to theories possessing them and that, as a result, are capable of breaking empirical ties. Properties frequently mentioned in this context include coherence with other (established) theories, unifying power, consilience, generation of novel predictions, explanatory power, simplicity, elegance, parsimony, lack of ad hoc features, and fruitfulness (Churchland 1985; Psillos 1999: 165). While realists think that theories possessing (a subset of) the virtues are more likely to be true than theories lacking them, anti-realists, instead, believe that the virtues are merely pragmatic or aesthetic properties that "cannot rationally guide our epistemic attitudes and decisions" (van Fraassen 1980: 87).

One problem for the virtue-response is that, although there are some notable discussions of individual virtues (see, for example, Kelly [2007] for simplicity, or Glymour [1985] for explanatory power), there is no agreed-upon list of what the right virtues are or how they are supposed to work. Worse, Tulodziecki (2012) has shown that, even if there were such a list, it would not help realists. Spelling out a new version of the underdetermination argument that explicitly takes the virtues and other epistemic criteria into account, she shows that epistemic equivalence may come about in a number of significantly different ways. More specifically, Tulodziecki argues that, as a result of this, even if realists had a complete, agreed-upon list and ranking of theoretical virtues – already unlikely – the details and nature of epistemic equivalence are so complex as to make epistemic comparisons among rival theories nigh impossible. One may add that, even if we did have a precise way of comparing different virtues to each other, there is no reason to

think that the epistemic impact of a given virtue is the same across different theories or contexts. In the same way in which the same observation could confirm different theories to different degrees, the same virtue might confer more epistemic merit on one theory in one context than it does on another in a different context. While this demonstrates how hard it would be always to guarantee the existence of epistemically equivalent rivals – a point that favours the realist – it also means that realists cannot generally articulate just when one theory is epistemically preferable to another. Tulodziecki goes on to argue that this gives rise to a new kind of underdetermination that is the result not of epistemic equivalence but of the fact that its complexity leads to an epistemic impasse in which realists are unable to choose one theory over others for epistemic reasons. This, however, is exactly the conclusion of the original argument, and so it turns out that, even if anti-realists grant that theoretical virtues have epistemic import, they can still maintain underdetermination.

A second, bigger problem for the virtue response has been the lack of an account of how exactly the virtues are linked to truth. Psillos hints at a solution by proposing "a combination of the insights of Boyd and Salmon" (1999: 165) but never goes on to develop this in any detail. However, Tulodziecki (2014) has recently suggested how a robust connection between virtues and truth might be established. Engaging in a number of detailed case studies from the history of medicine, she shows concretely how specific virtues were used to argue for the epistemic superiority of one hypothesis over another and so used to break actual empirical ties. She suggests that the question of what virtues are, as a matter of fact, epistemically significant is an empirical question that can be settled by examining historical episodes of the kind she considers. While her limited number of cases doesn't show that particular virtues are truth-conducive (although her data is suggestive of this), she does show how the empirical question can be settled in principle by showing (i) what sort of data is required in order to do this and (ii) that the required data can be obtained through the type of case study she engages in.

Tulodziecki (2013) has also argued that we ought to expand our conception of what counts as an epistemically significant factor in theory-choice beyond the traditional theoretical virtues. She shows how such an expansion works for methodological rules and principles, but her general framework is not limited to these and is extendible to other potentially epistemic criteria, such as our experimental practices, the ways in which we engage with our scientific instruments, and so on. One obstacle to this approach is the anti-realist's restricted notion of observability. Since anti-realists think that the difference between observables and unobservables signifies an epistemic boundary, it is not enough to show that specific methodological principles are truth-conducive in the case of observables, since, according to the anti-realist, we are not, on that basis, justified in inferring that those principles are similarly epistemically relevant with respect to unobservables.
However, through examining how specific methodological principles were deployed in the period leading up to Koch's discovery of Mycobacterium tuberculosis, Tulodziecki (2007) shows that, even on their own restricted terms, anti-realists have to accept that such principles can carry epistemic weight in cases involving unobservables.

On a more general level, Tulodziecki (2013) argues that hypotheses about the epistemic significance of the virtues, methodological principles, and other potential criteria are empirically testable, and confirmable and disconfirmable in just the same way other empirical hypotheses are. It is a consequence of this empirical approach that, in order to establish two theories' epistemic parity, one needs to show that they are singled out by methodological principles and possess virtues that are equally well supported in similar contexts. But since one cannot know in advance what criteria are relevant in what cases, and since epistemic relevance can only be assessed by examining specific justificatory contexts, such parity is not something that can be established in advance. In fact, since this strategy undermines the hopes for an in-principle argument, the anti-realist's

general underdetermination argument is undercut, even if it turns out that, as a matter of fact, the Entailment Thesis is correct and there are no epistemic criteria besides the empirical evidence (although this seems unlikely). The upshot with respect to the ET as a whole is that here, too, the underdetermination argument is most profitably explored on a local level, regardless of whether one is a realist or an anti-realist.

4 Recent discussions

A recent variation on the classic argument also suggests this conclusion. Stanford (2006) has argued that underdetermination can be worrying even if one grants the realist's richer notion of evidence and rejects the demand for empirical equivalence. He agrees with realists that our current theories are the best confirmed theories we have but argues that they are threatened not by "live" competitors but by presently unconceived alternatives that are confirmed by the evidence just as well as our own. This kind of underdetermination is merely transient, yet Stanford thinks it ought to worry realists, because we have reason to think it systematically recurs. Stanford provides a number of detailed historical examples to argue that the predicament he lays out is typical and that, therefore, we are justified in applying a "new induction" to the history of science: since scientists regularly fail to conceive of equally well-confirmed alternatives, we have reason to believe that our present and future theories will be similarly underdetermined. As a result, the absence of live empirically equivalent competitors is cold comfort to realists.

Realists, however, might grant that there are always unconceived alternatives – indeed, they might argue that such alternatives are precisely what scientific progress is about – but hold that this is not a (new) problem for scientific realism. Chakravartty (2008), for example, has argued that the problem of unconceived alternatives (PUA) is a "novel red herring" (153) and a problem for realism only to the extent that the pessimistic meta-induction (PMI) is. In this vein, one might argue that either our current theories are completely false or else continuous with some unconceived alternatives by which they will be replaced. If the former, the problem is that our current theories are regularly replaced by completely distinct successors. This, however, is just the PMI. If the latter, then our current theories are continuous with their successors, and realists can give the same response to the PUA that they have given to the PMI all along. Stanford has argued that the selective realist accounts of Kitcher, Worrall, and Psillos cannot escape either the PUA or the PMI; however, realists such as Chakravartty (2008) and Harker (2008) have argued that recent, more sophisticated selective realist accounts are not similarly vulnerable to Stanford's arguments. Either way, realists can maintain that the PUA is a problem for them exactly to the same extent that the PMI is. Note, however, that arguments about both the PUA and the PMI inevitably involve historical examples and so are, by their nature, case-based. Thus, to the extent that these discussions are connected to underdetermination, they, too, suggest that this debate ought to be moved to a local level.

While many of the discussions of underdetermination have focused on cases from physics, it is worth stressing that this is not the only domain for which there are examples. Turner (2005), for instance, discusses a number of cases from various historical sciences, where, he argues, underdetermination is widespread. The kind of underdetermination at issue, however, is not the result of strong empirical equivalence in which there is no possible evidence capable of deciding among rivals but rather of a weaker kind in which there is, in principle, such a piece of evidence – it's just that it's in the past.
Further, these historical rivals are perfectly scientifically credible, and they may even have the same virtues as do our hypotheses. The problem is, Turner argues, that background theories in the historical sciences provide us with information about the ways in which historical processes destroy the very evidence we are looking for and so give us reason to think

that we will never be in a position to obtain it. This, Turner believes, is in contrast to many of the experimental sciences, in which background theories frequently suggest new and profitable avenues for research, often designed specifically to break empirical ties.

One open question that arises at this point is whether different kinds of underdetermination are endemic to different scientific domains. In addition to Turner's suggestion that the historical sciences are particularly prone to the species of local underdetermination he describes, Belot (2015) has argued that certain types of problems arising in the case of geophysics are problematic for the mathematical sciences more generally (see also Miyake, "Scientific realism and the earth sciences," ch. 26 of this volume). Further, one might ask whether theories involving specific kinds of mathematical formalisms are especially prone to generating rivals of a certain type, such as ones with the high degree of overlap that we saw in the Newtonian case. More generally, it would be fruitful to explore the ways in which the rivals of highly abstract theories compare to those of more concrete ones and what role experiments can and cannot play, as well as the extent to which experimental design itself is underdetermined.

5 Conclusion

All of these discussions – both those of the individual premises and those dealing with specific cases and sciences – suggest that there is no general, in-principle argument in favor of underdetermination but that its real threat can only be assessed on a case-by-case basis. This does not preclude the possibility of some domain-specific generalizations, but even this would involve substantial amounts of on-the-ground research. Assessing actual theories' epistemic status, regardless of how exactly this is done, is necessary in order for the underdetermination argument to get off the ground. What these discussions show is that the endeavor of using abstract arguments to do so ought to be abandoned. Underdetermination might still haunt realists, but however bad a problem it might turn out to be, it is not the knock-down argument anti-realists hoped for. Just how bad a problem it is, however, is an empirical question, just like that of determining the extent of underdetermination in individual cases itself.

Many thanks to Martin Curd for comments on an earlier draft.

Notes

1 For a discussion of the argument in the general rationality of science debate, see Okasha (2000); for a discussion of its role in the broader debate about sciences and values, see, for example, Longino (1996) and Potter (1996). Note also that there are many different senses and versions of underdetermination, and so it is, in some sense, misleading to speak of "the" underdetermination argument. For some popular varieties, see Laudan (1990) and Gillies (1993). For a novel sense and one that has not yet been explored in the context of the realism debate, see Dawid (2006), who discusses its implications through the example of string theory (see also his "Scientific realism and high-energy physics," ch. 22 of this volume).
2 Note that the focus in the first premise is really on data, not evidence. This is important, since the relationship between being data and being evidence is a large part of what is controversial about the argument and its conclusions. Sometimes this difference is not made clear, especially in informal versions of the argument, such as this one. It is usually obvious from context which is intended.
3 For a discussion of the last two in the context of underdetermination specifically, see English (1973); see also Frost-Arnold and Magnus (2010), who examine this kind of response and the question of when it is appropriate, more generally.
4 For ease of exposition, I will often use just one of these ways of talking; this should not be understood as suggesting that the underdetermination argument somehow depends on one of them.
5 This is sometimes put as the requirement that there is no possible such evidence, but it is unclear what exactly "possible" means in this context: Possible for human beings? Consistent with the laws of nature? Confined to the inside of our light cone?


6 There is also the question of whether an infinite number of rival theories poses a greater problem than a finite number. For a defence of the view that it doesn't, see Hoefer and Rosenberg (1994); for a reply to Hoefer and Rosenberg and a defence of the opposing view, see Kukla (1998).
7 Despite the name, Duhem and Quine actually held quite different views, in particular with respect to what the thesis is supposed to apply to and also with respect to what counts as a reasonable modification in response to unfavourable evidence. For more details, see Gillies (1993).

References

Belot, G. (2015) "Down to Earth Underdetermination," Philosophy and Phenomenological Research 91(1), 456–464.
Chakravartty, A. (2008) "What You Don't Know Can't Hurt You: Realism and the Unconceived," Philosophical Studies 137, 149–158.
Churchland, P. (1985) "The Ontological Status of Observables: In Praise of Superempirical Virtues," in P. Churchland and C. Hooker (eds.), Images of Science, Chicago: University of Chicago Press.
Dawid, R. (2006) "Underdetermination and Theory Succession from the Perspective of String Theory," Philosophy of Science 73, 298–322.
Earman, J. (1993) "Underdetermination, Realism, and Reason," Midwest Studies in Philosophy 18, 19–38.
Ellis, B. (1985) "What Science Aims to Do," in P. M. Churchland and C. A. Hooker (eds.), Images of Science, Chicago: University of Chicago Press.
English, J. (1973) "Underdetermination: Craig and Ramsey," Journal of Philosophy 70, 453–462.
Fraassen, B. van (1980) The Scientific Image, Oxford: Clarendon Press.
Fraser, D. (2009) "Quantum Field Theory: Underdetermination, Inconsistency, and Idealization," Philosophy of Science 76(4), 536–567.
Frost-Arnold, G. and Magnus, P. D. (2010) "The Identical Rivals Response to Underdetermination," in P. D. Magnus and Jacob Busch (eds.), New Waves in Philosophy of Science, London: Palgrave-Macmillan.
Gillies, D. (1993) "The Duhem Thesis and the Quine Thesis," in Philosophy of Science in the Twentieth Century, Oxford: Blackwell Publishers.
Glymour, C. (1985) "Explanation and Realism," in P. Churchland and C. Hooker (eds.), Images of Science, Chicago: University of Chicago Press, pp. 99–117.
Harker, D. (2008) "P. Kyle Stanford: Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives," Philosophy of Science 75, 251–253.
Hempel, C. (1945) "Studies in the Logic of Confirmation," Mind 54, 1–26, 97–121.
Hoefer, C. and Rosenberg, A. (1994) "Empirical Equivalence, Underdetermination, and Systems of the World," Philosophy of Science 61, 592–607.
Kelly, K. (2007) "A New Solution to the Puzzle of Simplicity," Philosophy of Science 74, 561–573.
Kukla, A. (1993) "Laudan, Leplin, Empirical Equivalence, and Underdetermination," Analysis 53, 1–7.
——— (1998) Studies in Scientific Realism, Oxford: Oxford University Press.
Laudan, L. (1990) "Demystifying Underdetermination," in C. Savage (ed.), Scientific Theories, Vol. 14 of Minnesota Studies in the Philosophy of Science, Minneapolis: University of Minnesota Press.
Laudan, L. and Leplin, J. (1991) "Empirical Equivalence and Underdetermination," Journal of Philosophy 88, 449–472.
——— (1993) "Determination Undeterred: Reply to Kukla," Analysis 53, 8–16.
Leplin, J. (1997) "The Underdetermination of Total Theories," Erkenntnis 47, 203–215.
Longino, H. E. (1996) "Cognitive and Non-Cognitive Values in Science: Rethinking the Dichotomy," in L. Hankinson Nelson and J. Nelson (eds.), Feminism, Science, and the Philosophy of Science, Dordrecht: Kluwer Academic Publishers.
Manchak, J. B. (2009) "Can We Know the Global Structure of Spacetime?" Studies in History and Philosophy of Modern Physics 40, 53–56.
Massimi, M. (2004) "What Demonstrative Induction Can Do against the Threat of Underdetermination: Bohr, Heisenberg, and Pauli on Spectroscopic Anomalies (1921–24)," Synthese 140, 243–277.
Mayo, D. G. (1997) "Severe Tests, Arguing from Error, and Methodological Underdetermination," Philosophical Studies 86, 243–266.
Newton, I. ([1687] 1966) Principia, Berkeley and Los Angeles: University of California Press, Sixth printing, Motte's translation, revised.
Okasha, S. (1997) "Laudan and Leplin on Empirical Equivalence," British Journal for the Philosophy of Science 48, 251–256.
——— (2000) "The Underdetermination of Theory by Data and the 'Strong Programme' in the Sociology of Knowledge," International Studies in the Philosophy of Science 14(3), 283–297.
——— (2002) "Underdetermination, Holism and the Theory/Data Distinction," The Philosophical Quarterly 52, 303–319.
Potter, E. (1996) "Underdetermination Undeterred," in L. Hankinson Nelson and J. Nelson (eds.), Feminism, Science, and the Philosophy of Science, Dordrecht: Kluwer Academic Publishers.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
Stanford, P. K. (2001) "Refusing the Devil's Bargain: What Kind of Underdetermination Should We Take Seriously?" Philosophy of Science 68 [Proceedings], S1–S12.
——— (2006) Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, New York: Oxford University Press.
Tulodziecki, D. (2007) "Breaking the Ties: Epistemic Significance, Bacilli, and Underdetermination," Studies in History and Philosophy of Science Part C 38(3), 627–641.
——— (2012) "Epistemic Equivalence and Epistemic Incapacitation," British Journal for the Philosophy of Science 63(2), 313–328.
——— (2013) "Underdetermination, Methodological Practices, and Realism," Synthese 190(17), 3731–3750.
——— (2014) "Epistemic Virtues and the Success of Science," in A. Fairweather (ed.), Virtue Epistemology Naturalized: Bridges between Virtue Epistemology and Philosophy of Science, Synthese Library, Dordrecht: Springer.
Turner, D. (2005) "Local Underdetermination in Historical Science," Philosophy of Science 72, 209–230.
Werndl, C. (2013) "On Choosing between Deterministic and Indeterministic Models: Underdetermination and Indirect Evidence," Synthese 190, 2243–2265.

6 KUHN, RELATIVISM AND REALISM

Howard Sankey

1 Introduction

The historian of science Thomas S. Kuhn (1922–1996) was one of the most influential figures in the history and philosophy of science in the latter half of the twentieth century. His model of scientific theory change contains a number of relativistic and anti-realist elements. In this chapter, my aim is to explore the relationship between Kuhn's views about the nature of science and the position of scientific realism.

I will understand scientific realism in what I take to be a classical sense. According to scientific realism, the aim of science is to discover the truth about the objective reality which we inhabit. Progress in science consists in advance toward this aim. Scientific investigation is not restricted to observable phenomena. It extends beyond the domain of the observable to include dimensions of reality that are not observable by unaided human sense perception. Theoretical claims are to be taken at face value as claims which purport to be about real unobservable features of reality. They are not to be reduced to discourse about observables. Truth is to be understood in a non-epistemic correspondence sense. It is the way things stand in the mind-independent, objective world that makes scientific claims about the world true or false. When a claim about the world is true, it is because it corresponds to the way the world actually is. (For more detail on scientific realist views of truth, see J. Asay, "Realism and theories of truth," ch. 30 of this volume.)

Before I turn to the relativist and anti-realist themes in Kuhn, I will present a brief overview of the key features of Kuhn's account of science.

2 Kuhn's model of scientific change

Kuhn's masterwork, The Structure of Scientific Revolutions, was first published in 1962 (reference here will be to the fourth edition, published in 2012). Structure, as the book is widely known, is the signal work in the historical turn which took hold in the philosophy of science during the 1960s and 1970s. Earlier empiricist philosophers of science explored such topics as empirical verification, formal relations between evidence and theory and the methodology of science. By contrast, advocates of the historical approach to the philosophy of science chose to concentrate on the actual processes of scientific change found in the historical record of past science.


Rather than offer a theory of scientific method, Kuhn proposed a model of scientific change. In the early stages of the development of a science, research in a particular field tends to be disunified as individual scientists pursue a range of competing approaches to their field. Writing of physical optics before Newton, Kuhn says that "though the field's practitioners were scientists, the net result of their activity was something less than science"; as such, they were unable to take any "common body of belief for granted" (2012: 13). Eventually, however, consensus does form around a shared body of evidence, basic theoretical principles and strategies for the conduct of research. The focus of such consensus is what Kuhn referred to as a "paradigm". Paradigms are "universally recognized scientific achievements" (2012: xlii) which serve as the basis for "coherent traditions of scientific research" (2012: 11). A science which acquires a paradigm thereby arrives at a state of maturity.

Mature scientific research based on a shared paradigm is characterized by a unity of opinion that is substantive and normative in nature. Acceptance of a paradigm brings with it agreement on law and theory, as well as an agenda of research "puzzles" that are to be addressed on the basis of the paradigm. Puzzles, which may be empirical or theoretical, include precise determination of significant fact, rigorous comparison of prediction with observation and extension of the paradigm to new phenomena and domains. The paradigm provides scientists with methodological standards which take the form of rules of puzzle-solving adequacy that acceptable puzzle-solutions must satisfy. The rules include the basic laws of the paradigm, such as Newton's laws, which "help to set puzzles and to limit acceptable solutions" (2012: 40). Low-level rules govern the use of instrumentation and the conduct of experiment. Underlying metaphysical commitments of a paradigm (e.g. corpuscularism) serve as higher level rules which tell scientists not only what exists but what "fundamental explanations must be like" (2012: 41).

Kuhn describes science devoted to puzzle-solving on the basis of a shared paradigm as "normal science". It is characterized by routinized solution of puzzles on the basis of the agreed rules of puzzle-solving adequacy. Progress within a paradigm consists in an increase in solved puzzles and is highly cumulative in nature. Normal scientific puzzle solving is the activity most typical of the sciences. As Kuhn notes, it is "the activity in which most scientists inevitably spend almost all their time" (2012: 5).

But the success of normal science may lead to its own undoing. In time, the paradigm may encounter "anomalies", which, unlike the puzzles of normal science, stubbornly resist solution. If these anomalies proliferate or become particularly acute, scientists may lose confidence in the ability of the paradigm to resolve all of the problems in their field. Such lack of confidence may generate a "crisis" in which scientists actively raise doubts about the paradigm. During such a crisis period, scientists develop and propose alternative candidates for paradigm. A debate may ensue between those scientists who continue to support the reigning paradigm and those who favour one of the emerging candidates for paradigm.

Some of Kuhn's most controversial ideas have to do with paradigm debate.
Adherents of competing paradigms are "at least slightly at cross-purposes" and are "bound partly to talk through each other", since their paradigms are "incommensurable" (2012: 147). Though in his later work the notion is restricted to semantic relations between theories, in Structure Kuhn outlined a number of different aspects of the incommensurability of competing paradigms. The first has a methodological dimension. Competing paradigms employ different methodological standards and seek to address different sets of puzzles. The second aspect is broadly semantic. Each paradigm adopts a different conceptual framework, which gives rise to semantic variation between the vocabularies employed by the competing paradigms. The third aspect relates to perception but may have an ontological dimension. As Kuhn puts the point, "the proponents of competing paradigms practice their trades in different worlds. . . . Practicing in different worlds, the two groups of scientists see different things when they look from the same point in the

same direction" (2012: 149). With this third aspect of incommensurability, Kuhn may simply be understood as making the point that observation is theory dependent and so varies with paradigm. But, as will be noted later, it may also serve as the basis of an anti-realist interpretation of Kuhn's model.

Despite incommensurability, scientists do ultimately choose between paradigms. If they reject the old paradigm in favour of a new one, science may return to normal science under the auspices of the new paradigm. When this happens a "scientific revolution" has taken place. For Kuhn, revolution occurs when an old paradigm is rejected by the members of a scientific community and a new paradigm is adopted as the basis for a further period of normal science. There is no reason to expect such a pattern of repeated transition between periods of normal science to go on forever. Still, Kuhn's model does suggest that scientific change is to some extent cyclical.

This overview of Kuhn's model of scientific change will serve as a basis on which to introduce the topics of the following sections. Because the standards of theory appraisal vary with paradigm, Kuhn's model seems to carry with it a relativistic view of methodological standards. In section 3, I will bring this aspect of Kuhn's model into contact with relevant aspects of scientific realism. I will then turn to the question of progress and truth. Kuhn's model is a problem-solving model that proceeds by way of puzzles and anomalies rather than progress toward truth. In section 4, I will explore Kuhn's views about the nature of scientific progress in connection with scientific realist views about truth and the nature of scientific progress. This will lead in section 5 to consideration of Kuhn's views about the incommensurability of paradigms, as well as brief consideration of an anti-realist interpretation of his talk of world-change. In section 6, I will conclude by indicating that the scientific realist may endorse some aspects of Kuhn's view.

3 Kuhn's epistemic relativism and scientific realism

In this section, I will explore the relationship between epistemic relativist aspects of Structure and the position of scientific realism. Relativism is often contrasted with realism. But not all forms of relativism are opposed to all forms of realism. The question therefore arises of whether and how scientific realism enters into conflict with epistemic relativism as it appears in Kuhn. Scientific realism as characterized here is not formulated as a theory about the nature of norms of theory appraisal. As such, it does not enter into immediate conflict with Kuhn's views about the variation of such norms. However, as we shall see, it is possible to bring out a conflict between the positions if an account of the norms of theory appraisal is added to scientific realism.

Epistemic relativism is relativism about epistemic justification and knowledge. It is the view that justified belief and knowledge are relative to epistemic norms which are subject to variation with context (e.g. historical time period, socio-cultural milieu, scientific paradigm). My concern in this section will be with epistemic justification rather than knowledge. The reason for this restriction is that knowledge requires truth as well as epistemic justification. I will focus on epistemic relativism only insofar as it entails relativism about epistemic justification rather than truth. I narrow the focus in this way because I wish to postpone consideration of Kuhn's views about truth until the next section.

According to Kuhn, the primary concern of normal science is to solve puzzles that arise in the application and development of a paradigm. The paradigm provides scientists with standards of puzzle-solving adequacy which they use to determine whether the puzzles that arise in normal-scientific research are satisfactorily resolved. Because normal science is founded on consensus on paradigm, as well as standards of appraisal that derive from the paradigm, the practice of normal science is characterized by widespread consensus. In effect, scientists who work within

the same paradigm share a common set of epistemic norms on the basis of which their beliefs about matters relating to the paradigm may be justified.

By contrast with normal science, the period of crisis and extraordinary science which leads to revolutionary transition between paradigms manifests divergence with respect to epistemic norms. Scientists who favour an alternative paradigm-candidate thereby reject the standards embedded in the reigning paradigm. New standards of puzzle-solving adequacy are introduced in the context of the alternative paradigm-candidate. But while the standards depend upon and vary with paradigm, there is no set of extra-paradigmatic standards to which appeal may be made in order to arbitrate between competing paradigms. Given the absence of neutral standards of theory choice, Kuhn draws an explicit comparison between revolutionary choice of scientific paradigm and political revolution. As he famously put the point, "As in political revolutions, so in paradigm choice – there is no standard higher than the assent of the relevant community" (2012: 94).

A number of early critics took Kuhn's account of paradigm choice to contain elements of relativism and irrationalism (e.g. Lakatos 1970; Popper 1970; Shapere 1964). On the one hand, each paradigm contains standards of puzzle-solving adequacy which provide scientists who work within a paradigm with epistemic norms on the basis of which their beliefs may be justified. On the other hand, Kuhn denies that there are extra-paradigmatic standards on the basis of which the choice between competing paradigms may be made. In the absence of extra-paradigmatic standards, it seems that the choice between paradigms cannot be made on a rational basis. The impression of irrationalism is reinforced by comments made by Kuhn in which he compares the choice between paradigms to a gestalt switch or religious conversion (2012: 149). On the picture of paradigm choice that emerges from Structure, epistemic justification is relative to the standards of puzzle-solving adequacy that operate within a paradigm, though the choice of paradigm may not itself be rationally based.

The perceived irrationalism and relativism of Kuhn's account of paradigm change lent Structure an air of controversy. Early critics reacted strongly. But more moderate interpretations of Kuhn have since come to prevail. Though not without foundation in the text of Structure, the early critical reaction now seems exaggerated. In later work, Kuhn sought to distance himself from the extreme view that the early critical reaction found in Structure.

Despite his apparent denial of fixed or universal methodological norms, Kuhn made passing reference in Structure to "commitments without which no man is a scientist" (2012: 42). However, he did not at that stage fully articulate those commitments. This was a task to which he later turned in response to the early criticism. He outlines the less extreme view in the Postscript – 1969 appended to the second and later editions of Structure. The most detailed statement of the position is to be found in "Objectivity, Value Judgment and Theory Choice" (1977). In contrast with his earlier apparent denial of extra-paradigmatic standards, in this later work Kuhn claimed that there are a number of standards of theory-appraisal found throughout the sciences.
His main examples of such standards are accuracy, consistency, breadth, simplicity and fruitfulness, a list that he does not take to be exhaustive (1977: 321–322). Though subject to some qualification, Kuhn allows that these standards are “permanent attributes of science” (1977: 335). When faced with a choice between competing paradigms, scientists appeal to these standards as the basis for comparative assessment of the paradigms. The standards “provide the shared basis for theory choice” (1977: 322). As such, the standards constitute extra-paradigmatic norms of theory-appraisal on the basis of which scientists are able to make a rational choice between competing paradigms.

But while the standards provide a basis for theory choice, there are limits on their capacity to determine a choice. The existence of such extra-paradigmatic norms does not guarantee a

unique or unequivocal choice of paradigm. Kuhn pointed out that the individual standards are subject to variant interpretation. Scientists may understand the standards differently. As a result, different scientists may apply the same standards in different ways. Moreover, there is the potential for conflict between the standards when applied to competing theories. For example, one theory might be simpler but less accurate than a competitor, or one might have greater breadth but less consistency than another. Given the ambiguity of the standards and potential for conflict between them, Kuhn chose to describe the standards not as rules but as values (1977: 331). Rather than being rules that dictate an unequivocal choice of theory, the standards serve as values which guide scientists in the assessment of competing theories. Kuhn expressed the point by saying that there is no “neutral algorithm of theory choice” (2012: 198).

In the absence of an algorithm of theory choice, scientists must exercise a capacity for judgement not directed by rule. The decision to adopt one paradigm over another requires a deliberative judgement in which scientists weigh up a diverse range of potentially competing factors. It is possible that they may reach a common decision on the same grounds. But they need not do so. Scientists who agree in choice of paradigm need not arrive at their choice on the same basis. Moreover, scientists who disagree with each other may do so on a rational basis. Scientists who adopt opposing paradigms may do so on a rational basis since their choice may be based on the shared standards of scientific theory choice.

With this as background, I now turn to the relation of Kuhn’s view of theory choice to scientific realism. I will focus on the epistemic relativist aspects of Kuhn’s view rather than his denial of an algorithm of theory choice. The reason for this is that there is no apparent need for the realist to deny that the appraisal of theories is based on a range of methodological considerations or to insist on an algorithm of theory choice. As for the epistemic relativism found by early critics in Structure, the question is how it connects with scientific realism. On the characterization that I have given of scientific realism, there is no commitment on the part of the scientific realist to any claim about the nature or function of scientific methodology. One advantage of such a characterization is that it leaves it open for the realist to adopt a view of the practice of science that plays down or denies altogether the role of methodological factors. Still, as I will now proceed to argue, it is possible to bring scientific realism into more direct conflict with epistemic relativism.

A realist might downplay or deny the role of methodological factors in choice of theory. But this is not the usual approach. Most philosophers of science take the methodology of science seriously. Scientific realists are no exception. Realists tend to grant the norms of scientific method a pivotal role in the appraisal, the adoption and even the development of theories. Because they take methodological considerations to play a role in relation to theory choice, their views with respect to method enter into conflict with the epistemic relativist position. To see this, let us recall the realist view of the aim of science and the nature of scientific progress. On this view, the aim of science is to discover the truth about the world.
With the advance of science, later theories attain a higher degree of truth than earlier theories. The norms of method contribute to the progress of science because of the role they play in choice of theory. Scientists base their choice of theory on norms of theory appraisal. That choice results in advance on truth. As a result, increase in truth is due to the use made by scientists of the norms of theory appraisal.

Given the role they play in the advance on truth, the norms perform an epistemic function. Use of the norms of scientific theory appraisal reliably leads scientists closer to the truth. If not an infallible indicator of truth, the norms at least have the capacity to exclude false theories in favour of theories that have a good prospect of being true. Given the role played by norms of theory appraisal, realism entails the truth-conducive nature of those norms. The norms constitute a reliable means for the pursuit of truth. We

may now see how scientific realism and Kuhn-style epistemic relativism enter into conflict. On a realist construal, the norms of scientific method conduce to truth. But on an epistemic relativist view of the standards of theory-appraisal, there is no commitment to the reliability or truth-conduciveness of the norms of scientific method. They are just norms that have been adopted by social consensus within a scientific community. There is no objective warrant for such norms. At this point, a clear conflict emerges between scientific realism and the epistemic relativism of Structure. This conflict requires adoption of a realist construal of the norms of theory appraisal as truth-conducive norms of inquiry. In order to bring out this conflict, I have appealed to the realist’s view of the role of truth in scientific inquiry. But as we are about to see in the next section, Kuhn’s account of science affords little scope for a realist conception of truth.

4 Kuhn on progress and truth

As traditionally understood, scientific realism takes the aim of science to be truth and progress to consist in advance toward that aim. By contrast, Kuhn conceives of science in terms of problem-solving. For him, scientific progress is to be thought of as progress with respect to the solution of problems. In normal science, progress consists in the continued solution of the puzzles that arise within the paradigm. In scientific revolution, the anomalies which brought about a crisis in the old paradigm must be resolved by the new paradigm. Given the emphasis on problem solving, Kuhn’s account of progress in Structure ascribes no role to the notion of truth.

Indeed, Kuhn raises doubts about whether the notion of scientific progress is to be conceived in terms of a goal toward which science progresses. Instead, he notes that the “developmental process” described in Structure is “a process of evolution from primitive beginnings” (2012: 169). As such, progress need not be thought of as “a process of evolution toward anything” (2012: 170). Kuhn recognizes that an evolutionary conception of progress which does not involve progress toward an endpoint may conflict with common assumptions about scientific progress. But he wonders whether it is really necessary to conceive of scientific progress as movement toward “one full, objective, true account of nature” (2012: 170). Once an evolutionary conception of progress is adopted, he suggests, there is no need to think of progress in teleological terms as an advance toward the truth.

The problem-solving account of progress that Kuhn presents in Structure accords no role to truth. But Kuhn’s stance appears to have hardened by the time he wrote the Postscript. Rather than a stance of neutrality, his tone becomes dismissive. He expresses doubt about the very idea of a correspondence between theory and reality. He also sees no historical evidence of convergence on truth in the record of past scientific theory change. As the two points are distinct, I will discuss them separately.

As for the first point, Kuhn considers the idea that successive changes of theory in the history of science give rise to an increasing approximation to the truth. He understands the relevant idea of truth to be a correspondence notion. As he puts it, such a notion of truth involves a “match . . . between the entities with which the theory populates nature and what is ‘really there’” (2012: 205). For his part, Kuhn holds that little sense can be made of such a notion of truth.

Perhaps there is some other way of salvaging the notion of “truth” for application to whole theories, but this one will not do. There is, I think, no theory-independent way to reconstruct phrases like “really there”; the notion of a match between the ontology of a theory and its “real” counterpart in nature now seems to me illusive in principle. (2012: 205)


At first, Kuhn appears to be concerned with whether it is possible to apply the notion of truth to an entire theory rather than the individual assertions entailed by the theory. But the main thrust of his remark seems to relate to the intelligibility of the idea of a correspondence between theory and reality rather than the applicability of the idea at the level of whole theories. Kuhn’s claim that the idea of a match between theory and reality is “illusive in principle” suggests a view that is positivistic in spirit. Such a view might derive support from the point that it is not possible to provide conclusive empirical verification that a match obtains between what a theory says and the way the world is. This point might incline a philosopher of positivist persuasion to dismiss the idea of a match between theory and reality as unintelligible. But unless one harbours such positivistic scruples, it is difficult to see why one might suppose there to be any basis for doubt that coherent sense may be made of a match between theory and reality. If a theory says that the world is a certain way, and the world is indeed that way, then what the theory says matches the way that the world actually is. If the world is not the way the theory says it is, then a match fails to obtain between the theory and reality. This remains the case whether or not it is possible to empirically verify that the relation does or does not obtain. (For detailed analysis of this aspect of Kuhn, see Bird 2000: 225–237.)

Kuhn presents his second point as a historical one. As a historian, he finds it doubtful that theories in a historical sequence converge on truth at the level of ontology. He allows that progress is made with respect to problem-solving ability in the transition from Aristotelian to Newtonian and later Einsteinian physics. But, he says,

I can see in their succession no coherent direction of ontological development. On the contrary, in some important respects, though by no means in all, Einstein’s general theory of relativity is closer to Aristotle’s than either of them is to Newton’s. (2012: 205)

On Kuhn’s evolutionary problem-solving account of progress, the transition from Aristotle through to Newton and Einstein may constitute scientific progress. But at the level of ontology, Kuhn claims that there are respects in which Einstein’s theory is closer to Aristotle’s than to Newton’s. As a result, he maintains that this historical sequence of theories fails to display evidence of ontological convergence. As he later tended to put the point, there is no historical evidence that this sequence of theories is “zeroing in” on the truth (cf. Kuhn 2000: 206, 243). Kuhn does not explain exactly what ontological differences he has in mind when he insists on lack of ontological convergence. But it seems clear that he takes convergence on truth to require that the differences introduced within a historical sequence of theories be no more than refinements within a shared ontological framework.

Here two points may be made on behalf of the realist in response to Kuhn. First, it is entirely possible for a theory to make more true claims than another theory even if the theories differ at the level of ontology. Given this, it is entirely possible for one theory to be closer to the truth about the world than the other despite their difference in ontology. Second, there is no need for the realist to be wed to the idea that progress requires convergence on one true theory of the world. All that is required for progress with respect to truth is that there be an increase in the number of truths asserted by a later theory as compared to an earlier theory. (Here it is important to note that technical difficulties have confronted the idea of closeness to the truth. For an overview of the issues, see Psillos 1999: ch. 11.)

Throughout his career, Kuhn remained critical of correspondence conceptions of truth as well as the idea of ontological convergence. However, in later work Kuhn came to see that truth does play a crucial role in science. Though he holds that the correspondence theory of

truth must be rejected, he suggests that a weaker conception of truth might replace the correspondence theory:

. . . something like a redundancy theory of truth is badly needed to replace it, something that will introduce minimal laws of logic (in particular, the law of non-contradiction) and make adhering to them a precondition for the rationality of evaluations. (2000: 99)

On Kuhn’s later view, such a minimalist notion of truth is required to play a normative role in scientific discourse. The normative role of the notion of truth is to serve as the basis for a “language game whose rules forbid asserting both a statement and its contrary” (2000: 100). According to Kuhn, the nature of the scientific enterprise is such that scientists cannot accept both a statement and a statement that conflicts with it. Instead, they are required to accept or reject statements based on evidence that either supports or conflicts with the statements. In Kuhn’s view, the notion of truth plays a crucial role in underpinning the practice of accepting or rejecting statements on the basis of evidence.

5 Kuhn on incommensurability

Along with his one-time colleague Paul Feyerabend, Kuhn defended the claim that scientific theories or paradigms may be incommensurable. As we saw in section 2, in Structure Kuhn took the notion of incommensurability to have methodological, semantic, perceptual and perhaps ontological dimensions. However, in his later work Kuhn narrowed the focus of incommensurability to the semantic sphere. In doing so, Kuhn brought his use of the concept of incommensurability into line with that of Feyerabend, who restricted the notion of incommensurability to semantic relations between theories (e.g. Feyerabend 1981). In this section, I will explore the implications of the thesis of semantic incommensurability with respect to scientific realism. I will also briefly consider an anti-realist interpretation of incommensurability which understands Kuhn’s approach to science in neo-Kantian terms.

In Structure, Kuhn emphasized that transition between paradigms leads to significant conceptual change. Old concepts are rejected or altered, and new concepts are introduced. Because of such conceptual shift the vocabulary that scientists employ to express the concepts may be subject to semantic variation.

Within the new paradigm, old terms, concepts, and experiments fall into new relationships one with the other. . . . To make the transition to Einstein’s universe, the whole conceptual web whose strands are space, time, matter, force, and so on, had to be shifted and laid down again on nature whole. . . . Communication across the revolutionary divide is inevitably partial. (2012: 148)

As a result of conceptual shift brought about by scientific revolution, the laws of an earlier paradigm may fail to reduce to laws of a later paradigm. As an example, Kuhn argues that, in a strict sense, Newtonian laws may not be derived from Einstein’s physics. This is because the Einsteinian versions of Newton’s laws employ terms such as “space”, “time” and “mass” in an Einsteinian rather than a Newtonian sense:

. . . the physical referents of these Einsteinian concepts are by no means identical with those of the Newtonian concepts that bear the same name. (Newtonian mass is


conserved; Einsteinian is convertible with energy. Only at low relative velocities may the two be measured the same way, and even then they must not be conceived to be the same.) Unless we change the definition of the variables in the [Einsteinian versions of the laws], the statements we have derived are not Newtonian. (2012: 102)

In addition to variation in central theoretical concepts, Kuhn argued against the empiricist idea of a neutral observation language (2012: 125–129). As a result, semantic difference between paradigms may fail to be restricted to central theoretical concepts. Semantic variation between paradigms may extend to the vocabulary employed by scientists in their reports of observation and experiment.

In work subsequent to Structure, Kuhn came to think of incommensurability increasingly in terms of the inability to translate between the vocabularies employed by theories. At first, reflection on the nature of translation led Kuhn to compare incommensurability with Quinean indeterminacy of translation. But the comparison was not entirely apt. For Quine, there are multiple correct translations compatible with the evidence, whereas for Kuhn there is no correct translation between theories. In Kuhn’s later work, incommensurability is taken to be localized failure of exact translation between inter-defined clusters of terms within the special vocabulary of theories (for the development of Kuhn’s views of incommensurability, see Sankey 1993).

Kuhn’s talk of conceptual change and untranslatability led Donald Davidson to take incommensurability as one of the main targets of his attack on the idea of a conceptual scheme (Davidson 1984). To argue that a language is untranslatable into another by providing examples of expressions that are unable to be translated is to court paradox. The very act of presenting an example of an untranslatable expression within the language into which it fails to be translatable shows that the expression is able to be translated into that language. For this and related reasons, Davidson argued that the very idea of a conceptual scheme is to be rejected as incoherent. Objections such as Davidson’s led Kuhn to insist that the translation failure characteristic of incommensurability is not wholesale translation failure between natural languages or even the vocabulary of competing theories. It is restricted to subsets of the special vocabulary of theories (cf. Kuhn 2000: 35–36). This avoids the paradoxical implications that are the focus of Davidson’s criticism, since a vocabulary may be shown to lack the resources possessed by another vocabulary without having to translate between them (see Sankey 1990).

Translation failure between theories is connected with the topic of the previous section. As we saw, Kuhn opposes the realist view of progress as advance on truth because of doubts about truth and convergence. In addition to the earlier considerations, Kuhn came to think that incommensurability militates against convergence. He suggests that claims of “science’s zeroing in on, getting closer and closer to, the truth” are “meaningless” as a result of incommensurability (2000: 243). Because of the semantic variation which leads to inability to translate, “no shared metric is available to compare our assertions about force and motion with Aristotle’s and thus to provide a basis for a claim that ours (or, for that matter, his) are closer to the truth” (2000: 244).

This brings us to the nub of the issue. Kuhn insisted that incommensurability does not entail incomparability. But there is a sense in which the content of incommensurable theories may not be compared. “There is no neutral language,” he writes, “into which both of the theories as well as the relevant data may be translated for purposes of comparison” (2000: 204; cf. 189).
Empirical predictions and other assertions made by rival theories using terms that differ in meaning are unable to be compared with respect to assertion or denial of the same proposition. A claim about the world made by one theory will neither assert nor deny the same thing as a claim of the other theory if the claims are expressed by means of terms that do not have the same meaning. But if it

is impossible to compare the content of theories in this way, then it will not be possible to show that one theory makes more true claims than the other. Moreover, if the semantic variation is extensive, it may not be possible to show that in the transition between theories there is convergence on truth by means of a cumulative increase of truth about the same things. Thus, given the implications for comparison of content, inability to translate between incommensurable theories gives rise to further problems for a realist account of scientific progress as advance on truth.

But the claim that untranslatability entails incomparability of content trades on an ambiguity in the notion of meaning. As pointed out by Scheffler (1967: 54–66), it is important to distinguish between the sense and the reference of a term or expression. The distinction may be illustrated using Frege’s famous example of “Morning-Star” and “Evening-Star”. The two expressions differ in sense even though they have the same reference, namely the planet Venus. Scheffler pointed out that in order for assertions made by competing theories to agree or disagree, the terms occurring in the assertions need only have the same reference. They need not have the same sense. Given this, the claim that meaning varies between theories does not entail that the content of the theories is unable to be compared. Provided only that the terms refer to the same things, the content of theories may be compared, whether or not the terms have the same sense. Thus, even if it is impossible to provide an exact translation of the claims of one theory using semantically equivalent vocabulary of the other theory, it may nevertheless be possible to compare what one theory says about the world with what the other theory says.

Scheffler’s point about reference and content comparison was an important step in coming to grips with the problem of incommensurability. But as traditionally understood, it is not just that sense and reference are distinct aspects of meaning. It was also held that sense determines reference. If we take the sense of an expression to be given by a description associated with the expression, the reference of the expression is the item (or set of items) that satisfies the description. For example, the word “tiger” refers to those items which satisfy the description “large carnivorous, orange-coloured feline with black stripes found in the jungles of Asia”. The problem with reference depending on description in this way is that in cases of conceptual change in science the descriptive content associated with an expression may be subject to significant alteration. As a result, descriptions associated with a term employed by competing theories may be unable to be satisfied by the same things. This is evidently what Kuhn had in mind in claiming that Newtonian mass is conserved whereas Einsteinian is convertible with energy. The two descriptions may not be jointly satisfied. For this reason, the Newtonian term “mass” must fail to have the same reference as the Einsteinian term “mass”. In general, if reference is determined by description, it cannot be assumed that terms employed by semantically variant theories refer to the same things.

Recognition that the determination of reference by sense may lead to discontinuity of reference between theories led philosophers to explore causal theories of reference on which reference is not determined by description (e.g. Devitt 1979).
If reference is determined by causal relation between speaker and world rather than by descriptive content associated with an expression, then reference will be insensitive to variation in descriptive content. Though this was a promising idea, it soon emerged that the reference of theoretical terms may not be determined simply by causal relation between speaker and world. For this reason, philosophers working on this topic have tended to endorse some variant of the causal-descriptive theory of reference on which both causal and descriptive elements contribute to the determination of reference (see, for example, Psillos 1999, chapter 12).

For scientific realists, appeal to the theory of reference has seemed the most promising approach to the problem of incommensurability. But it has been objected that the theory of reference is unable to serve as a neutral ground in the debate between scientific realists and defenders of incommensurability. In his influential interpretation of Kuhn, Paul Hoyningen-Huene (1993) has argued that Kuhn’s

metaphysical view is best understood in neo-Kantian terms. In describing change of paradigm, Kuhn sometimes characterized the experience of scientists as a change of world: for example, “the historian may be tempted to exclaim that when paradigms change, the world itself changes with them” (2012: 110). Rather than take such claims literally, Hoyningen-Huene argued that they may be understood in terms of a Kantian distinction between the world-in-itself and the phenomenal world. On this view, what changes in scientific revolution is the phenomenal world inhabited by scientists. The world-in-itself, which lies beyond the epistemic reach of science, may be presumed to remain fixed between paradigms. On this interpretation, the terms of incommensurable paradigms refer to entities within the phenomenal worlds of the paradigms. For the realist to argue that there is continuity of reference between paradigms is to beg the question against the neo-Kantian anti-realist who treats reference as internal to the phenomenal world of a paradigm. (For a recent neo-Kantian approach to Kuhn that incorporates some realist themes, see Massimi 2015.)

6 Conclusion

In this chapter, I have explored the ways in which Kuhn’s views are in tension with scientific realism. As we have seen, Kuhn’s account of scientific progress enters into conflict with the realist view that progress consists in advance on truth. Kuhn’s doubts about a match between theory and reality led him to reject the correspondence theory of truth which many realists endorse. Further problems about scientific progress and an increase in truth also arise from Kuhn’s ideas about incommensurability.

But it should not be thought that Kuhn’s views are comprehensively opposed to scientific realism. There is no reason for the scientific realist to deny that the evaluation of scientific theories is multi-criterial or to insist that the choice between theories must be based on an algorithm. Provided that the criteria of theory choice reliably promote the search for truth, the realist may accept much of what Kuhn says about the appraisal of theories. Moreover, there is no reason for the realist to deny that scientific progress may be thought of in evolutionary and problem-solving terms. Provided that theories at a later stage of evolution which display an increase in problem-solving effectiveness are also making headway toward the truth, the realist may agree that increased adaptation and problem-solving ability are features of scientific progress. Finally, there is no need for the realist to deny that substantial conceptual change takes place in the history of science. The realist may well wish to say that such changes bring the conceptual apparatus of science into closer accord with real divisions in nature. But conceptual change as such is not something that the realist need resist.

Further reading

In this chapter, I have sought to cover those aspects of Kuhn’s work which intersect with central themes of scientific realism. For broader coverage of Kuhn, the following two recent anthologies provide an excellent introduction to contemporary scholarly work on Kuhn. V. Kindi and T. Arabatzis (eds.), Kuhn’s The Structure of Scientific Revolutions Revisited (New York: Routledge, 2012); W. J. Devlin and A. Bokulich (eds.), Kuhn’s Structure of Scientific Revolutions – 50 Years On (Switzerland: Springer International Publishing, 2015).

References

Bird, A. (2000) Thomas Kuhn, Chesham, UK: Acumen Press.
Davidson, D. (1984) “On the Very Idea of a Conceptual Scheme,” in Inquiries into Truth and Interpretation, Oxford: Oxford University Press, pp. 183–198.


Devitt, M. (1979) “Against Incommensurability,” Australasian Journal of Philosophy 57, 29–50.
Feyerabend, P. (1981) “Explanation, Reduction and Empiricism,” in Realism, Rationalism and Scientific Method: Philosophical Papers (Vol. 1), Cambridge: Cambridge University Press, pp. 44–96.
Hoyningen-Huene, P. (1993) Reconstructing Scientific Revolutions: Thomas S. Kuhn’s Philosophy of Science, Chicago: University of Chicago Press.
Kuhn, T. S. (1977) “Objectivity, Value Judgment and Theory Choice,” in The Essential Tension, Chicago: University of Chicago Press, pp. 320–339.
——— (2000) The Road since Structure, Chicago: University of Chicago Press.
——— (2012) The Structure of Scientific Revolutions (4th ed.), Chicago: University of Chicago Press.
Lakatos, I. (1970) “Falsification and the Methodology of Scientific Research Programmes,” in I. Lakatos and A. Musgrave (eds.), Criticism and the Growth of Knowledge, Cambridge: Cambridge University Press, pp. 91–196.
Massimi, M. (2015) “Walking the Line: Kuhn between Realism and Relativism,” in W. J. Devlin and A. Bokulich (eds.), Kuhn’s Structure of Scientific Revolutions: 50 Years On, Switzerland: Springer International Publishing, pp. 135–152.
Popper, K. R. (1970) “Normal Science and Its Dangers,” in I. Lakatos and A. Musgrave (eds.), Criticism and the Growth of Knowledge, Cambridge: Cambridge University Press, pp. 51–58.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
Sankey, H. (1990) “In Defence of Untranslatability,” Australasian Journal of Philosophy 68, 1–21.
——— (1993) “Kuhn’s Changing Concept of Incommensurability,” British Journal for the Philosophy of Science 44, 759–774.
Scheffler, I. (1967) Science and Subjectivity, Indianapolis: Bobbs-Merrill.
Shapere, D. (1964) “The Structure of Scientific Revolutions,” The Philosophical Review 73, 383–394.

7
INSTRUMENTALISM

Darrell P. Rowbottom

1 Introduction

‘Instrumentalism’ – a term that was probably coined by Dewey1 – has meant several different things to different philosophers of science, and there is no standard definition thereof. It is best to think of instrumentalism as a philosophical movement with a historical basis.2 Scientific realism is no different in this respect. As Hacking (1983: 26) puts it:

Definitions of ‘scientific realism’ merely point the way. It is more an attitude than a clearly stated doctrine . . . Scientific realism and anti-realism are . . . movements. We can enter their discussions armed with a pair of one-paragraph definitions, but once inside we shall encounter any number of competing and divergent positions . . .

And as Chakravartty (2011) adds:

It is perhaps only a slight exaggeration to say that scientific realism is characterized differently by every author who discusses it, and this presents a challenge to anyone hoping to learn what it is.

So what can we say about the instrumentalist movement?3 It has two key components. First, as one might expect, it involves a cluster of views – both normative and descriptive – on which science, or a significant part thereof, is construed as an instrument. Second, it involves characterizing the positive role of said instrument solely, or centrally, in terms of observable things (or phenomena). Thus instrumentalism is closely aligned with, and might even be understood to be a subspecies of, empiricism about science.

Here are some specific examples of theses that are instrumentalist in character, according to the prior characterization: science is valuable primarily in so far as it is an instrument for making predictions about the observable; science is merely an instrument for making predictions about the observable; and scientific discourse about the unobservable is merely an instrument for making predictions concerning the observable. One might think of such theses as falling into categories such as ‘axiological’, ‘epistemic’, and ‘semantic’ if this helps to compare them with realist alternatives. It’s also worth bearing in mind that each thesis might be weakened somewhat

and still retain an instrumentalist character. For example, the final thesis might be weakened to ‘scientific discourse about the unobservable is typically no more than an instrument for making predictions concerning the observable’. Appeals to typicality feature in some characterizations of realism, such as Boyd’s (1980), too.

For illustrative purposes, let’s consider constructive empiricism. Van Fraassen (1980: 12) defines this position as follows: ‘Science aims to give us theories which are empirically adequate; and acceptance of a theory involves as belief only that it is empirically adequate’. Empirical adequacy is in turn defined in terms of observability (ibid.): ‘a theory is empirically adequate exactly if what it says about the observable things and events in this world is true’.4 So constructive empiricism satisfies one of the two criteria to be instrumentalist, in so far as it characterizes the positive role of science in terms of observable things. But does it fulfill the other? This depends on how one construes the talk of ‘the aim of science’.5 But suffice it to say two things about this. First, rephrasing the definition of constructive empiricism such that it begins ‘Science is an instrument for giving us theories . . .’ doesn’t, prima facie, do any violence to it. Second, as we’ll see in what follows, Mach wrote of ‘the task of science’, which is plausibly interchangeable with ‘the aim of science’, in a highly similar way. And Mach is widely acknowledged to have been an instrumentalist.

2 How do instrumentalist theses interrelate? Nineteenth-century lessons

A good way to penetrate to the core of instrumentalism, and to get an understanding of how the theses associated with it are connected, is to look at the work of the key historical figures associated with the movement. Let’s begin by considering Mach’s views on science and distilling the core instrumentalist component of these. We can then consider how Mach’s views relate to those of several other philosophers and scientists of the time who shared his anti-realist inclinations.

Wherein did Mach take science’s primary value to lie, if not in finding the truth about the unobservable world? The answer is apparent from the following passage:

The biological task of science is to provide the fully developed human with as perfect a means of orientating himself as possible. (Mach 1984: 37)

But why did Mach think this? In part, the answer is that he took physical objects to be bundles of sensations. In his own words: ‘Properly speaking the world is not composed of “things” as its elements, but of colors, tones, pressures, spaces, times, in short what we ordinarily call individual sensations’ (Mach 1960: 579). So Mach held that there are no unobservable physical things for us to describe. It is a short step to thinking that ‘[i]t is the object of science to replace, or save, experiences, by the reproduction and anticipation of facts in thought’ (Mach 1960: 577).

Mach also strongly emphasized the importance of economy and declared, as a result, that much talk about unobservable entities and processes should be eliminated. (He did allow that some talk of unobservable things is useful.)6 In his words: ‘all metaphysical elements are to be eliminated as superfluous and as destructive of the economy of science’ (Mach 1984: xxxviii). As Pojman (2009) explains,

Mach’s reason for insisting that economy must be a guiding principle in accepting or rejecting a theory is that uneconomical theories cannot fulfill their biological function, which . . . he insists is (in a descriptive sense) the function of science. The biological


purpose of science is the improvement or the better adaptation of memory in service of the organism’s development.

However, Mach’s view was more extreme than it needed to be given his view on the biological purpose, or task, of science. That’s because ‘orienting’ oneself may involve more than acquiring predictive (and retrodictive) power (in an economical way). Gaining an understanding of the phenomena and how they interrelate may also be necessary for a genuine orientation, or at least for achieving ‘as perfect a means of orienting [oneself] as possible’. (For instance, having an understanding of why something occurs may improve one’s memory that it occurs.) And a tale concerning unobservable things – especially one involving analogies with observable things – might furnish one with such an understanding.

Now if understanding is factive – that is to say, if any proposition expressing an understanding is true – then this view is unsustainable. But philosophers such as Elgin (2007) and Rancourt (2015) argue that understanding is not factive.7 What’s more, a non-factive (and not even quasi-factive) view of understanding was popular among many of Mach’s contemporaries who took a more positive view towards discourse about unobservables, while nonetheless taking said discourse to be largely, or completely, non-literal in character.8 For example, here are two key passages in which Poincaré approvingly discusses non-literal scientific discourse:

[Some hypotheses have] only a metaphorical sense. The scientist should no more banish them than a poet banishes metaphor; but he ought to know what they are worth. They may be useful to give satisfaction to the mind, and they will do no harm as long as they are only indifferent hypotheses. (Poincaré 1905: 182)

[I]ndifferent hypotheses are never dangerous provided their characters are not misunderstood. They may be useful, either as artifices for calculation, or to assist our understanding by concrete images, to fix the ideas, as we say. They need not therefore be rejected. (Poincaré 1905: 170–171)

Poincaré’s mention of ‘satisfaction to the mind’ stands in contrast to Mach’s (1911: 49) suggestion that ‘[w]hat we represent to ourselves behind the appearances . . . has for us only the value of a memoria technica or formula’. It also gels with the stance that many (Cambridge-educated) British physicists in the Victorian era took on mechanical models, as explained by Heilbron (1977: 41–43):

[T]he representations were not meant or taken literally . . . The same physicist might on different occasions use different and even conflicting pictures of the same phenomena . . . piecemeal analogies or provisional illustrative models.

Heilbron (1977: 42) adds, ‘Such pictures, they believed, fixed ideas, trained the imagination, and suggested further applications of the theory’. As Lodge (1892: 13) put it:

[I]f we resist the help of an analogy . . . there are only two courses open to us: either we must become first-rate mathematicians, able to live wholly among symbols, dispensing with pictorial images and such adventitious aid; or we must remain in hazy ignorance of the stages which have been reached, and of the present knowledge . . .


Indeed, Kelvin (1883: 270) went so far as to declare that having a model is necessary for having understanding:

If I can make a mechanical model, I understand it. As long as I cannot make a model all the way through I cannot understand . . . I want to understand light as well as I can without introducing things that we understand even less of.9

Mach disapproved of piecemeal modelling in the name of economy.10 In short, having to remember lots of different models (or theories) for different occasions seemed to him to be bad, reasonably enough, in so far as memory space is concerned. However, Mach was considering economy at a global, rather than a local, level. And a theory (or theoretical framework or model) that lacks economy at the global level may have economy in considerable measure – even when it comes merely to considering ease of calculation – at the local level. It may also, as intimated earlier, be easier to remember than any more economical competitor.

Think of classical mechanics. We could dispense with it entirely in principle. But we don’t because it’s such a useful tool in some circumstances. It’s simpler than the alternatives in the straightforward sense that it involves fewer variables and constants; it doesn’t require reference to the speed of light, or a wave function, or a quantum potential. Of course, Mach could reply to this example that classical mechanics is simply a ‘special-case’ approximation to the other forms of mechanics we have. But this route isn’t always open; in the case of models, as distinct from theories, it typically isn’t. To this we may add that if we’re interested in understanding, not just easy prediction, then simple models for local contexts can be worthwhile parts of our intellectual arsenal for independent reasons. For navigation at sea, for example, it’s convenient to think in terms of a central Earth and a sphere of stars revolving around it. This doesn’t mean we should disregard global economy. It’s important. But it should not be maximized irrespective of local considerations.

Let’s take stock. We’ve seen that Mach’s instrumentalism involved evaluative and descriptive components. Descriptively, for example, Mach thought that scientists (and hence science) could not find the truth about the unobservable world. He thought this on ontological grounds, although modern instrumentalists typically argue for it on epistemological grounds. Evaluatively, he thought that science progresses when its ability to orient us increases. We have also seen that Mach took these views to have methodological significance. But we have also seen that their genuine methodological significance is open to dispute. So Mach might have been persuaded by the foregoing considerations concerning understanding and would have been no less an instrumentalist as a result.

3 What is instrumentalism not?

We’re now in a position to dispel some of the common confusions concerning instrumentalism. One kind of error involves conflating, or inappropriately connecting, descriptive and evaluative instrumentalist theses. Consider, for example, the following Encyclopaedia Britannica entry (which is by an eminent academic):

Instrumentalism, in the philosophy of science, [sic] the view that the value of scientific concepts and theories is determined not by whether they are literally true or correspond to reality in some sense but by the extent to which they help to make accurate empirical predictions or to resolve conceptual problems. Instrumentalism is thus the view that scientific theories should be thought of primarily as tools for solving practical problems rather than as meaningful descriptions of the natural world. (de Neufville 2015)


The second sentence does not follow from the first. To see this, let’s imagine we know that contemporary scientific theories are not only meaningful but also highly accurate descriptions of the natural world. It follows that we should think of those theories in just such a way – that is to say, as approximately true descriptions – on standard accounts of knowledge. But does this mean that their value consists, wholly or even partly, in the fact that they are such descriptions? No. It’s possible to maintain that the primary value of scientific theories is practical, say in providing us with predictive power concerning observable things. And this might hold even if the predictive power of theories is closely correlated with their truthlikeness. Finding truthlike theories could merely be a necessary means to an end. Analogously, the value of shopping doesn’t consist in the loss of money it inevitably involves.

I chose this entry because it provides a quick and easy way to illustrate the point; but this kind of misrepresentation appears in journal articles and monographs too. This claim is supported by the next example, in which instrumentalism is instead mischaracterized as centrally involving a methodological thesis, in one of the most influential books on scientific realism in recent decades:

Syntactic instrumentalism comes in two variant forms: eliminative and non-eliminative. The non-eliminative variant (a kind of which can be associated with Duhem) is that one need not assume that there is an unobservable reality behind the phenomena, nor that science aims to describe it, in order to do science and to do it successfully. . . . Eliminative instrumentalism takes a stronger view: theories should not aim to represent anything ‘deeper’ than experience, because, ultimately, there is nothing deeper than experience to represent. However it will typically resist the project of translating away t-discourse [i.e., discourse about unobservables]. (Psillos 1999: 17)

Let’s begin by considering the way that non-eliminative instrumentalism is represented as a thesis concerning what it takes – or more appositely, doesn’t take – for a person, or perhaps a group of people, to be able to do science successfully. Here’s why this is a serious mischaracterization of instrumentalism. Imagine an ardent realist who holds that all contemporary scientific theories are almost entirely true when taken at face value. She believes in black holes, DNA, atoms, electrons, photons, quarks, the Higgs boson, and even superstrings. She also believes that these things have the properties standardly ascribed to them and behave in the ways they are standardly said to behave: for example, she thinks that black holes emit Hawking radiation, that atoms have nuclei, and that electrons have an intrinsic property of spin. Must she also, on pain of inconsistency, commit to the falsity of ‘non-eliminative instrumentalism’, as Psillos defines it? On the contrary, she might wholeheartedly endorse it.

First, she might hold that scientists have simply learned, and never needed to assume, that there is an ‘unobservable reality behind the phenomena’. She might think that they tried talking about unobservable things, found this was useful, and hence acquired good evidence for the existence of such things. (After all, realists tend to think that empirical success is indicative of probable truthlikeness.) She might also think, more importantly, that successful science was done before any good evidence for unobservable things was acquired, say by ancient and medieval astronomers.

Second, more incisively, she might think that it’s possible to do science well while mistakenly thinking that all talk about unobservable things is just for convenience. For example, two scientists might select theories (or models) on the basis of exactly the same criteria – simplicity, consistency, and so forth – and weight those criteria in precisely the same way yet disagree about the ultimate purpose of the theory-selection (or model-selection) process. They could

agree on what the best theories (or models) were but disagree about what those best theories (or models) achieve.

The nub of my argument in the last three paragraphs, in summary, is simple. Instrumentalism – whatever it is – is a form of anti-realism. But the position that Psillos calls ‘instrumentalism’, in the passage cited, is compatible with (ardent) realism (in at least one variant). Therefore, the position that Psillos calls ‘instrumentalism’ is not instrumentalism.

Let’s now briefly consider whether ‘eliminative instrumentalism’ is also characterized in a methodological way in the passage quoted. Unfortunately, the prose concerning this is awkward to interpret because it has a highly metaphorical, and hence undesirably vague, character. For example, what does it mean for theories to aim and to say that theories are under an obligation (not to aim at one thing in particular)? Given the methodological character of the previous part of the passage, however, it seems fair to interpret the claims about ‘theories’ as claims about scientists, and more particularly about what scientists should do with theories. (Note that ‘science aims’ is used in the definition for the ‘non-eliminative’ variant, too. There is a regrettable precedent for talking about ‘science aims’ in different ways – see Rowbottom [2014] – but one reading is ‘scientists aim’.) It therefore appears reasonable to suspect that Psillos’s ‘eliminative instrumentalism’, as defined in this passage, is not a form of instrumentalism either.

What explains Psillos’s mischaracterization of instrumentalism as methodological in character? The answer, I suspect, is that several early instrumentalists were practicing scientists who took their views on the value and purpose of science, and indeed their views on fundamental ontology, to be more closely linked to scientific practice than was strictly necessary. That’s what we saw with Mach, who was plausibly one of the philosophers that Psillos had in mind in writing of ‘eliminative instrumentalism’. As we’ve seen, Mach’s view was that there are no (‘physical’) entities beyond the phenomena and hence that there are no unobservable (‘physical’) entities. It follows that theories cannot – not that they ‘should not’ (whatever that means) – ‘represent anything deeper than experience’ (Psillos ibid.). But strictly speaking, as we saw in the previous section, nothing of substantial methodological import follows.

In short, one might agree with Mach on these matters, as well as on his thought that science is valuable (and/or progresses) primarily in so far as it saves the phenomena, yet hold that realist scientists are, or that a realist approach to science is, best. For example, one might think that realist scientists are typically better motivated because of their (delusional) beliefs that they’re uncovering deep mysteries concerning the world, and one might think that developing theories positing unobservable entities is typically indispensable for achieving a superior understanding of how phenomena interrelate. One might even go so far as to think that realist scientists are typically better at developing theories positing unobservable entities than anti-realists are. An instrumentalist certainly needn’t agree with the remarkably strong descriptive view attributed to her by Sorensen (2013: 30), namely: ‘the scientist merely aims at the prediction and control of the phenomena . . . scientists are indifferent to the truth’.
Rigid presentations of instrumentalism are convenient for realist authors, or other opponents of the movement, in so far as they present clear targets for assault. But these targets are straw men. And were the roles to be reversed in this process, realists would object. If a modern anti-realist were to declare that scientific realism involves the claim that ‘contemporary scientific theories are approximately true’, for instance, then a moderate realist might respond by saying that this is only a rough characterization of the epistemic element of scientific realism and that scientific realism survives when it’s recast as involving a weaker claim, such as ‘well-confirmed theories in mature sciences are typically approximately true’. But what’s sauce for the goose is sauce for the gander. There has been considerable movement towards the centre, away from the extremes, in the scientific realism debate as a whole over the past century. Yet identification with traditions and movements continues.


Nevertheless, some authors sympathetic to instrumentalism also characterize it in ways that are unnecessarily strong. For instance, Sober (1999: 5) claims that:

Instrumentalism does not deny that theories are and ought to be judged by their simplicity, their ability to unify disparate phenomena, and so on. However, instrumentalism regards these considerations as relevant only in so far as they reflect on a theory’s predictive accuracy. If two theories are predictively equivalent, then a difference in simplicity or unification makes no difference, as far as instrumentalism is concerned.

We’ve already seen two reasons why this is too strong. First, the more economical of two empirically equivalent theories may be the more valuable, for an instrumentalist, because of the greater ease of using it or remembering it. Second, gaining an understanding of how the phenomena relate, in addition to ‘saving’ them, may be of value to some instrumentalists.

4 Objections to instrumentalism

Since instrumentalism is a movement, few instrumentalists defend precisely the same theses. Hence, a reasonable objection to any given form of instrumentalism is unlikely to be a reasonable objection to most other forms. Nevertheless, it’s possible to point to classes of objections which are similar in character in some significant respects and which are pertinent to all – or, failing that, almost all – forms of instrumentalism. Those covered here all involve the concept of observability.

One type of argument is that the set of unobservable entities has shrunk in the course of past science and should be expected to shrink considerably further in future science. To understand how this type of argument works, it’s important to note that ‘unobservable’ can legitimately be understood in different ways. Like other notions of considerable historical significance in the philosophy of science, such as ‘verifiable’ and ‘falsifiable’, it’s modal. To be specific, ‘unobservable’ means ‘impossible to observe’. But the impossibility might be either in practice or in principle. (Or to put it differently, the ‘impossibility’ may be context bound rather than fully general.) In practice, we cannot build a spacecraft capable of travelling from Earth to Mars in under four months. However, it’s plausible that we can do so in principle, in so far as it’s plausible we will be able to do so in practice at a later time.

So the set of the observable in principle is plausibly much larger than the set of what’s observable in practice at present. A maximally strong realist claim is that everything is observable in principle. Weaker assertions, which are correspondingly more plausible, involve more restricted claims: one such claim is that relatively few things are unobservable in principle.

But how may what’s observable in practice increase over time? Maxwell (1962) argues that there are two main mechanisms. First, on the assumption that observations are theory laden (or even theory infected), we may discover new theories. Second, we may devise and build new instruments, via the use of which we may extend our sensory range (in some sense).11

Let’s think about theory change first. Let’s grant, for the sake of argument, that all observations – or, if preferred, many observations relevant to science – are theory laden in a relatively uncontroversial sense. Let’s say that (sincere) observation statements, rather than observations simpliciter, are theory laden. Let’s also say that such statements express beliefs. Hence what a person comes to believe, on the basis of their sensory inputs in some specific situation, depends on the theories that they (explicitly or implicitly) believe to be true (or approximately true). Here’s an example. If my nine-year-old daughter Clara and I were walking in the fields near our home and came upon a beast, she might believe ‘I am seeing a rabbit’, whereas I might believe ‘I am seeing a hare’.


What’s more, I might believe ‘I am seeing a lagomorph’. She wouldn’t, as she doesn’t yet possess the concept ‘lagomorph’.12 The thesis that observations are theory laden has its roots in instrumentalist thinkers. Duhem (1954) is often cited in connection with the claim, and Poincaré (1905: 159–160) agreed:

It is often said that experiments should be made without preconceived ideas. That is impossible. Not only would it make every experiment fruitless, but even if we wished to do so, it could not be done. Every man has his own conception of the world, and this he cannot so easily lay aside. We must, for example, use language, and our language is necessarily steeped in preconceived ideas. Only they are unconscious preconceived ideas, which are a thousand times the most dangerous of all.

Indeed, if a realist holds that observations are theory laden to a high degree – following Feyerabend (1958) – then it will be tricky for her to defend claims that she is inclined to defend, such as that we have the ability to latch onto natural kinds, conceptually, on the basis of experience. Why might our observations not be easier to save and comprehend when we employ non-natural category schemes, for example? And wouldn’t successful actors outcompete deep knowers, from an evolutionary perspective?

Moreover, there is no need for the instrumentalist to deny that observations are theory laden in order to maintain that there’s a significant distinction between the observable and the unobservable. For example, the distinction might be relativised to a privileged theory set that tends to remain stable over time. One option is to appeal to innate theories of some form or another, as some philosophers have done when it comes to our linguistic ability. Another option, which I defend (Rowbottom Manuscript: ch. V), is to look to the set of folk theories central in our upbringings and daily lives. What counted as a table or chair for Newton counts as a table or chair for you, and vice versa. The same may be said for many property ascriptions, like ‘spherical’, ‘blue’, ‘hard’, and so on. (Your ‘know-how’, concerning when such terms apply to things in everyday life, is no different.) And one could understand scientific progress to centrally involve improving our predictive ability and/or understanding as expressed in that restricted, relatively stable language.

One natural realist response is that stability isn’t absolute; there are folk theories now about computers, for example, that there weren’t in the past. However, the instrumentalist may rejoin that these theories tend to remain stable once they’re introduced and also that they have a large degree of autonomy from the content of science. Indeed, technological changes often occur without significant theoretical changes. A stronger realist objection might be that the folk theoretical set isn’t, or shouldn’t be, privileged. And here, some of the deeper considerations that split realists and instrumentalists will come to the fore. Different values – and even perhaps stances (ch. 18) – are at play. Instrumentalists tend to be more pragmatic in orientation than realists and to see little or no intrinsic value in science. Facts about the unobservable world, even if they can be known, may be seen to be as trivial as facts about numbers – for example, that the lowest prime number is two – unless they have some ‘cash value’ in the world of experience.

A different instrumentalist strategy, proposed by Stanford (2006), is also worthy of mention. This is to deny that there is any ‘privileged or foundational role for the hypothesis of the bodies of common sense or any specific hypothesis built into the instrumentalist’s account of the world and our knowledge of it’ (Stanford 2006: 205). Why does he say this? Stanford’s basic idea is that both realists and instrumentalists take some theories to be true (or highly truthlike) and others to be merely instrumentally useful. In particular, scientific realists continue to think it’s reasonable to use discredited theories – like the ideal gas law, classical mechanics, and so forth – purely as means to predictive ends in particular contexts.13 And ‘the instrumentalist simply assigns a much larger set of theories we have to [that] category’ (Stanford 2006: 205). Stanford thinks instrumentalists can be justified in so doing on the basis of the argument from unconceived alternatives; see ch. 17 and Rowbottom (in press; Manuscript, ch. III).

Let’s now think about whether we can use instruments to extend the range of our observations. Carnap (1966: ch. 23) suggests the answer is “No” only when we consider direct observation, which, on Carnap’s view, must be technologically and inferentially unaided. Moreover, Carnap allowed that ‘observable’ and ‘unobservable’ are vague (and van Fraassen 1980 follows suit on this):

There is a continuum which starts with direct sensory observations and proceeds to enormously complex, indirect methods of observation. Obviously no sharp line can be drawn across this continuum; it is a matter of degree. (Carnap 1966: 226)

This makes it dubious that the directly observable is of special significance in ontological, epistemological, axiological, or methodological senses. For example, I can see it’s snowing outside from where I’m sitting. But should I treat my observation through the double-glazed window – through the panes of glass – as less reliable than the one I’d make if I were outside in the cold? The instrumentalist need not answer in the positive. Rather, there is considerable leeway for an instrumentalist to give ground on the sense of ‘observable’ pertinent to her position while maintaining that many of our modern instruments do not enable observation. Such an instrumentalist might concede that optical microscopes enable the observation of cells (in line with Hacking 1985). But she might deny that scanning-force microscopy enables the observation of atoms or that the so-called ‘Sudbury neutrino observatory’ was really an observatory.14 It might be added that some realists just concede that the observable is limited and argue directly that detection has a similar status to observation in the context under discussion. See Contessa (2006), for example, on how detection procedures can allegedly provide good evidence for existential claims.

This brings us onto a final argument type. Rather than arguing that the set of the unobservable will shrink over time and so forth, a realist may instead deny that there’s a significant distinction between the observable and the unobservable. The sense of ‘significant’ will be determined by the context, namely the specific kind of claims – semantic, epistemic, and so forth – under dispute. For example, in response to the claim that science is merely an instrument for predicting how observable things behave, one might conceivably object that it’s not possible to predict how observable things behave without knowing about and predicting how (at least some) unobservable things behave (in some parts of contemporary science). One might suggest that we couldn’t genetically engineer Drosophila melanogaster variants without knowing some things about DNA, such as that it contains ‘base pairs’ of purines and pyrimidines, even granting that DNA is unobservable.

The basic idea behind this objection is that the empirical success of a theory is linked with how truthlike it is. (Note that in this example, we’re able to intervene; the ‘empirical success’ at stake doesn’t merely concern passive observations.) Realists tend to think that the empirical success of a theory indicates its truthlikeness, whereas instrumentalists (like many other empiricists) tend to deny this. The classic argument for the instrumentalist view is the so-called pessimistic induction, presented by Laudan (1981) and (plausibly much earlier by) Poincaré (1905), which is discussed in more detail in ch. 4. The basic idea behind this argument is that past theories have been responsible for great predictive successes in spite of ‘centrally’ positing unobservable things that we now take not to exist, such as caloric and phlogiston.


5 Conclusion

In this chapter, we have seen how instrumentalism is best characterized relatively broadly, with reference to the work of those involved in the movement’s genesis. We have also seen how it is often mischaracterized, even in recent work. Finally, we have seen that many of the central arguments concerning the viability of instrumentalism concern the nature and significance of observability.

If there is a single ‘take home’ message, then it is as follows. Instrumentalism is dead only to the extent that one understands it in a highly restrictive, and correspondingly uncharitable and implausible, fashion. In the words of Stanford (2006: 192): ‘instrumentalist sympathies have produced a wide variety of importantly divergent attitudes (sometimes within the works of a single author) toward the cognitive, semantic, and epistemic status of theories . . . ’. Modern instrumentalists continue to develop, explore, and defend new views on science (and parts thereof). Typically, these new views are more sophisticated, and hence more defensible, than their earlier counterparts.

Notes
1 I will not be able to explore how Dewey’s notion relates to the history of philosophy of science. But there are some interesting parallels, and being a pragmatist aligns well with being an instrumentalist about science, as will emerge in what follows. Indeed, some forms of instrumentalism about science may be construed as local forms of pragmatism.
2 It might be best construed as a kind of stance. Stances are discussed in ch. 18 of this volume.
3 The following characterization is a partial one in so far as it presents individually necessary but not jointly sufficient conditions for a position to count as instrumentalist. One advantage of this characterization, as we will see, is that it avoids the common error of understanding instrumentalism too narrowly.
4 In fact, observability is more central to constructive empiricism than this rough definition of empirical adequacy makes apparent. To see this, consider the statement ‘The fire emits phlogiston’. This is about an observable thing, namely a fire, but is (presumably) false. Therefore, on van Fraassen’s definition, it couldn’t be a part of (or a consequence of) an empirically adequate theory. But such a claim should be able to be. Thus, van Fraassen intended to convey something like ‘a theory is empirically adequate exactly if what it says about observable aspects of observable things and observable events is true’.
5 See Rosen (1994), van Fraassen (1994), and Rowbottom (2010, 2014) on this matter.
6 For example, concerning striking an elastic rod held in a vise, he wrote:

Even when the sound has reached so high a pitch and the vibrations have become so small that the previous means of observation are not of avail, we still advantageously imagine the sounding rod to perform vibrations. . . . [T]his is exactly what we do when we imagine a moving body which has just disappeared behind a pillar, or a comet at the moment invisible, as continuing its motion and retaining its previously observed properties. . . . We fill out the gaps in experience by the ideas that experience suggests . . . (Mach 1960: 588)

He thought differently about atoms, partly because of the properties ascribed to them:

But the mental artifice atom was not formed by the principle of continuity. . . . Atoms cannot be perceived by the senses; like all substances, they are things of thought. Furthermore, the atoms are invested with properties that absolutely contradict the attributes hitherto observed in bodies. However well fitted atomic theories may be to reproduce certain groups of facts, the physical inquirer who has laid to heart Newton’s rules will only admit those theories as provisional helps, and will strive to attain, in some more natural way, a satisfactory substitute. (Mach 1960: 589)


7 This is an ongoing matter of dispute. Another recent paper that takes a more realist-friendly view is Rice (2016). This is not the place to develop a full anti-realist account of understanding. So suffice it to say this. An instrumentalist might grant that ‘understanding that’ is factive (or involves, at least, approximately true propositions). But she might deny that ‘understanding why’ and ‘understanding how’ are factive (or need to involve even approximately true propositions). See Rowbottom (Manuscript) for more on this.
8 Another way of thinking about ‘understanding’ in this context is as non-factive explanation. As Niiniluoto (2002: 167) puts it: ‘for the realist, the truth of a theory is a precondition for the adequacy of scientific explanations’. Instrumentalists may disagree, as, for instance, van Fraassen (1980) does.
9 The use of ‘mechanical’ is noteworthy; at the time of writing, it was typically considered important that ‘only those classes of forces with which physicists had become familiar since the time of Newton should be admitted . . . the resultant description had to be continuous in space and time’ (Heilbron 1977: 41). However, this demand may be relaxed and was profitably relaxed, for example, by Bohr. For more on this, see Bailer-Jones (2009: 41–42) and Rowbottom (Manuscript: §IV.2 & §IV.4).
10 It was similarly condemned by Duhem (1954: §IV.5), who remarked:

In the treatises on physics published in England, there is always one element which greatly astonishes the French student; that element, which nearly invariably accompanies the exposition of a theory, is the model . . .

11 Another possibility is that the community of scientists might change. For example, mutant humans (Maxwell 1962) or alien species (Churchland 1985) may be introduced. However, this may not be so different from our instruments changing. Indeed, our eyes and ears may be viewed as instruments of a natural variety.
12 One can push deeper and think about differential knowledge of, say, the Ancient Greek roots of ‘lagomorph’ – that is to say, λαγώς and μορφή – and so forth. How one conceives of the differences will depend on whether one adopts an atomistic or a holistic view of the content of beliefs. On this, see Schwitzgebel (2015: §3.2).
13 A potential differentiator not mentioned by Stanford, however, is that realists may be much more inclined to think that such theories (and/or associated models) aren’t vehicles for providing genuine understanding. Hence, here is an interesting contact point between Stanford’s version of instrumentalism and my own version. For example, Stanford’s account might be strengthened by incorporating an anti-realist conception of understanding.
14 One important possible view, suggested by van Fraassen (2001: 155), is that ‘The instruments used in science can be understood as not revealing what exists behind the observable phenomena, but as creating new observable phenomena to be saved’. As intimated in the main body of text, one might weaken this view, for example, by introducing ‘typically’ or ‘often’ before ‘can’.

References
Bailer-Jones, D. M. (2009) Scientific Models in Philosophy of Science, Pittsburgh: University of Pittsburgh Press.
Boyd, R. (1980) “Scientific Realism and Naturalistic Epistemology,” PSA 1980 2, 613–662.
Carnap, R. (1966) Philosophical Foundations of Physics: An Introduction to the Philosophy of Science, New York: Basic Books.
Chakravartty, A. (2011) “Scientific Realism,” in E. N. Zalta (ed.), Stanford Encyclopedia of Philosophy. URL: http://plato.stanford.edu/archives/fall2015/entries/scientific-realism/
Churchland, P. (1985) “The Ontological Status of Observables: In Praise of the Superempirical Virtues,” in P. Churchland and C. Hooker (eds.), Images of Science: Essays on Realism and Empiricism (with a Reply from Bas C. van Fraassen), Chicago: University of Chicago Press, pp. 35–47.
Contessa, G. (2006) “Constructive Empiricism, Observability, and Three Kinds of Ontological Commitment,” Studies in History and Philosophy of Science 37(4), 454–468.
Duhem, P. M. M. (1954) The Aim and Structure of Physical Theory (P. P. Wiener, trans.), Princeton, NJ: Princeton University Press.
Elgin, C. Z. (2007) “Understanding and the Facts?” Philosophical Studies 132, 33–42.


Feyerabend, P. K. (1958) “An Attempt at a Realistic Interpretation of Experience,” Proceedings of the Aristotelian Society 58, 143–170.
Fraassen, B. C. van (1980) The Scientific Image, Oxford: Clarendon Press.
——— (1994) “Gideon Rosen on Constructive Empiricism,” Philosophical Studies 74(2), 179–192.
——— (2001) “Constructive Empiricism Now,” Philosophical Studies 106, 151–170.
Hacking, I. (1983) Representing and Intervening: Introductory Topics in the Philosophy of Natural Science, Cambridge: Cambridge University Press.
——— (1985) “Do We See through a Microscope?” in P. Churchland and C. Hooker (eds.), Images of Science: Essays on Realism and Empiricism (with a Reply from Bas C. van Fraassen), Chicago: University of Chicago Press, pp. 132–152.
Heilbron, J. L. (1977) “Lectures on the History of Atomic Physics 1900–1922,” in C. Weiner (ed.), History of Twentieth Century Physics, New York: Academic Press, pp. 40–108.
Kelvin, Lord [Thomson, W.] (1884) Lectures on Molecular Dynamics, and the Wave Theory of Light, Baltimore: Johns Hopkins University Press.
Laudan, L. (1981) “A Confutation of Convergent Realism,” Philosophy of Science 48(1), 19–49.
Lodge, O. J. (1892) Modern Views of Electricity, London: Macmillan.
Mach, E. (1911) The History and Root of the Principle of Conservation of Energy, Chicago: Open Court.
——— (1960) The Science of Mechanics (6th ed.), La Salle: Open Court.
——— (1984) The Analysis of Sensations and the Relation of the Physical to the Psychical (C. M. Williams, trans.), La Salle: Open Court.
Maxwell, G. (1962) “The Ontological Status of Theoretical Entities,” in H. Feigl and G. Maxwell (eds.), Scientific Explanation, Space and Time, Minneapolis: University of Minnesota Press, pp. 3–27.
Neufville, R. de (2015) “Instrumentalism,” Encyclopaedia Britannica. URL: www.britannica.com/topic/instrumentalism
Niiniluoto, I. (2002) Critical Scientific Realism, Oxford: Oxford University Press.
Poincaré, H. (1905) Science and Hypothesis, London & Newcastle: Walter Scott Publishing.
Pojman, P. (2009) “Ernst Mach,” in E. N. Zalta (ed.), Stanford Encyclopedia of Philosophy. URL: http://plato.stanford.edu/archives/win2011/entries/ernst-mach/
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
Rancourt, B. T. (2015) “Better Understanding Through Falsehood,” Pacific Philosophical Quarterly. doi:10.1111/papq.12134
Rice, C. C. (2016) “Factive Scientific Understanding without Accurate Representation,” Biology and Philosophy 31, 81–102.
Rosen, G. (1994) “What Is Constructive Empiricism?” Philosophical Studies 74(2), 143–178.
Rowbottom, D. P. (2010) “Evolutionary Epistemology and the Aim of Science,” Australasian Journal of Philosophy 88(2), 209–225.
——— (2011) “The Instrumentalist’s New Clothes,” Philosophy of Science 78(5), 1200–1211.
——— (2014) “Aimless Science,” Synthese 191(6), 1211–1221.
——— (In Press) “Extending the Argument from Unconceived Alternatives: Observations, Models, Predictions, Explanations, Methods, Instruments, Experiments, and Values,” Synthese. doi:10.1007/s11229-016-1132-y
——— (Manuscript) The Instrument of Science.
Schwitzgebel, E. (2015) “Belief,” in E. N. Zalta (ed.), Stanford Encyclopedia of Philosophy. URL: http://plato.stanford.edu/entries/belief/
Sober, E. (1999) “Instrumentalism Revisited,” Crítica 31, 3–39.
Sorensen, R. (2013) “Veridical Idealizations,” in M. Frappier, L. Meynell and J. R. Brown (eds.), Thought Experiments in Science, Philosophy, and the Arts, London: Routledge, pp. 30–52.
Stanford, P. K. (2006) Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, Oxford: Oxford University Press.

8 EMPIRICISM

Otávio Bueno

1 Introduction

Empiricism has a long, distinguished, and complex history, and as is usual with any live philosophical tradition, it is continuously recreated and re-invented as it evolves over time. Traditionally presented as a doctrine, empiricism is often formulated as the claim that experience is the only source of information about the world (for a discussion, see van Fraassen 2002). Understood in this way, empiricism seems to involve a particular belief – belief in the truth of the claim that characterizes this doctrine – and it becomes an issue whether acquiring that belief outstrips the boundaries of experience and thus of empiricism itself (van Fraassen 2002; Alspector-Kelly 2001, 2004).

But empiricism also has a second, negative component: a critical reaction to and a suspicion of metaphysics. And a number of arguments empiricists have developed are skeptical arguments of various sorts. They need not be global arguments meant to question indiscriminately entire areas of investigation, but they are meant to question more targeted issues. In particular, they are arguments against the postulation of entities that are putatively about the world but which are not detectable by experience (substances, essences, possible worlds), and they question whether one can have knowledge of the existence of such entities. Targeted skeptical arguments tend to be more effective. Wholesale skeptical arguments are generally less plausible and more easily addressable than their local counterparts.

These two features of empiricism – a positive view regarding the source of information about the world and a negative attitude toward the limits of what can be known – remain by and large constant throughout its history (for additional discussion, see van Fraassen 2002).

Not surprisingly, there are very close connections between empiricism and skepticism. Ancient Greek skeptics, in particular Pyrrhonists, can be thought of as empiricists in the sense that they clearly embodied a critical attitude toward metaphysics and took seriously the senses as sources of information about the world. However, in the case of Pyrrhonism, these features should not be thought of as claims or theses about (the nature of) the world: the Pyrrhonist suspends judgment about any such claims. Rather, they are expressions of a certain attitude toward investigation. As part of that investigation, the skeptic is sensitive to claims to the effect that so and so is the case. The skeptic then questions whether evidence can be provided in support of such claims. If situations incompatible with the claims in question are not ruled out by the evidence, it seems that, according to the standards of those who make such claims, the evidence for them is not available. Additional research is, thus, required, and the skeptic continues to investigate.

Both features are clearly present in the version of empiricism that was articulated in the twentieth century by logical positivists and logical empiricists. A clear suspicion of metaphysics and doctrines regarding experience as a source of information about the world is found throughout the development of these philosophical views. A typical example is, of course, Carnap’s work. In his celebrated paper on the rejection (or, perhaps, the overcoming) of metaphysics through a logical analysis of language (Carnap [1932] 1959), a clear anti-metaphysical stance is advanced. Moreover, the requirement that scientific theories be testable and that only statements with empirical content express something about the world articulates a clearly empiricist doctrine about experience (Carnap [1936] 1937).

Constructive empiricism similarly exemplifies these two features of empiricism. First, we find in constructive empiricism a skeptical attitude toward metaphysics, in particular about possible worlds and real modalities in nature (van Fraassen 1989; van Fraassen 2002). Second, constructive empiricism advances a doctrine about empirical adequacy as the aim of science; that is, science aims at truth about the observable aspects of the world and thus about what can be experienced (van Fraassen 1980; van Fraassen 2008). As van Fraassen notes, a theory is empirically adequate “exactly if what it says about the observable things and events in this world, is true – exactly if it ‘saves the phenomena’” (van Fraassen 1980: 12; emphasis added).

Both the negative and the positive features of empiricism involve experience: either by rejecting (or, at least, by being agnostic about) the postulation of entities that go beyond what can be experienced or by articulating approaches to the empirical content of scientific theories that assign a crucial role to experience, which has typically been understood in terms of what can (or cannot) be observed.

Thus, a major problem that has troubled several empiricist views in the philosophy of science is that of finding a principled way of distinguishing observable and unobservable entities. Logical positivists and logical empiricists have systematically tried to solve this problem in terms of particular theories of meaning, first by articulating verification criteria and, with their demise, by introducing weaker confirmation conditions (see, e.g., Carnap [1936] 1937; Carnap 1956). Similarly, even holist empiricists have attempted to provide an account of the distinction between the observable and the unobservable, given that this distinction helps to make sense of the divide between “core beliefs” and “beliefs in the periphery” (Quine 1953). Even though, on Quine’s view, the distinction between the observable and the unobservable is not sharp – nor is the distinction between “core beliefs” and “beliefs in the periphery” – in both cases there’s still a distinction to be drawn. (I’ll return to this point in what follows.) Finally, for constructive empiricists, to demarcate the observable from the unobservable is crucial, given that, as noted, the divide is presupposed in the formulation of empirical adequacy – the very aim of science according to their view (see van Fraassen 1980).
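Since the notion of empirical adequacy will do so much work in what follows, it may help to set van Fraassen’s definition out compactly. The model-theoretic gloss below is my own symbolization of the formulation in The Scientific Image, not notation used in this chapter:

```latex
% Empirical adequacy, glossed model-theoretically (symbolization mine).
% Mod(T): the models of theory T.
% E(M):   the designated empirical substructures of a model M.
% Appearances: the structures describable in experimental/measurement reports.
\[
  T \text{ is empirically adequate} \;\iff\;
  \exists M \in \mathrm{Mod}(T)\;
  \forall A \in \mathrm{Appearances}\;
  \exists E \in \mathcal{E}(M) :\; A \cong E
\]
```

The order of the quantifiers matters: one and the same model must fit all the appearances at once. And since the appearances are precisely the observable phenomena, the observable/unobservable divide is presupposed by the very definition.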
Despite the importance of the problem, so far no principled way of distinguishing the observable and the unobservable has successfully been developed. In this chapter, I will suggest one way of doing that and indicate why this way of approaching the issue ends up providing a better version of empiricism. After all, one of the interesting outcomes of the proposed distinction is that it yields a very natural answer to an additional problem that has also disturbed recent empiricist views: the problem of explaining what is (epistemologically) special about observation. Usually, most versions of empiricism in the philosophy of science take more or less for granted that observation is epistemologically significant, without pausing to give a reason for that. This has led critics of empiricism to complain about the apparent arbitrariness of the view, which presupposes a divide between the observable and the unobservable that has no clear epistemic import. This point has been made repeatedly in discussions of constructive empiricism (see, e.g., Hacking 1981; Musgrave 1985). The proposal here should overcome these difficulties.

2 The roles of the observable/unobservable divide, and its troubles

Why is it important for empiricist views to distinguish observable and unobservable objects? Because, the answer goes, the distinction plays several key roles within empiricism. In Carnap’s approach, the distinction is crucial to characterize the empirical content of scientific theories, which is done ultimately in terms of what can be confirmed via observational sentences. But this presupposes that observational sentences – sentences whose terms only refer to observable entities – can be clearly identified and distinguished from nonobservational ones (see Carnap [1936] 1937).

On Quine’s view, the distinction plays a different role. Admittedly, there’s no sharp line between the observable and the unobservable (Quine 1953). Of course, this doesn’t entail that there’s no line at all between the corresponding entities. Sets and babies are paradigmatic cases of, respectively, unobservable objects and observable ones, and there is a clear divide between them with respect to their observability status. Despite this, the observable/unobservable divide is significant for, as noted, it’s used to formulate the Quinean distinction between “core beliefs” and “beliefs in the periphery”. Typically, “core beliefs” deal with unobservable entities and thus are particularly entrenched (given their fundamental theoretical role) and widespread throughout science’s conceptual framework. “Periphery beliefs”, on the other hand, are usually concerned with entities that can be observed and thus are localized and can be more easily revised without disturbing the whole conceptual framework. Note, however, that the observable/unobservable distinction and the core/periphery divide need not map neatly into each other. As pointed out, beliefs about sets are typically part of “core beliefs”, whereas beliefs about babies usually go to the periphery. But we may have, for example, certain statistical beliefs about babies that are part of the periphery, even though they are cast in mathematical terms. Despite the absence of a perfect mapping, the observable/unobservable distinction is still significant for the Quinean, given that, as just noted, it does help to make sense of the core/periphery divide.

Similarly to Quine, although much more explicitly than him, van Fraassen insists that the distinction between observable and unobservable entities is vague, given that there are definite cases of observable entities, definite cases of unobservable ones, and cases in which it’s not definitely determined whether an object is observable or not (van Fraassen 1980: 16–17). Nonetheless, despite the vagueness, there’s still a distinction in kind between these two types of objects: unobservable objects can never be seen, even in principle, with the naked eye; observable ones can.

For van Fraassen, however, there’s no need to provide an explicit characterization of the observable/unobservable distinction. This is something that ultimately science will do. In fact, each scientific theory delimits its range of observable objects through its empirical substructures. Without delimiting that range, the theory couldn’t even be tested, given that reference to observable objects is needed for the testing. But, in delimiting this range, each theory is constrained by two crucial features:

(a) What is observable depends on us (the relevant epistemic community), and so the distinction is contextual – different epistemic communities draw the distinction differently.
(b) The observable/unobservable divide has no ontological significance (van Fraassen 1980: 18), since the fact that something is unobservable (observable) doesn’t entail its nonexistence (existence). At best, we have agnosticism regarding unobservable entities: such entities may or may not exist, but due to familiar underdetermination considerations, we are unable to decide the issue.


After all, the same observable features of the phenomena are compatible with the postulation of radically different unobservable objects, and typically, there’s no way of empirically determining which of these postulations is true. For example, the observable aspects of quantum mechanics are compatible with the postulation of both quantum particles that do not have simultaneously well-determined position and momentum (following a Copenhagen interpretation) and quantum particles that do have well-determined position and momentum (following a Bohmian interpretation). Empirically, however, it is unclear how to decide which of these postulations is true (if any). (I’ll return to this point in what follows.)

Some realists will, of course, try to undermine such underdetermination arguments by invoking methodological criteria. For instance, suppose that theories T1 and T2 are empirically equivalent. One of them, say, T1, could be entailed by a broader theory T that has additional, independent confirmation. Such confirmation could then be “transferred” to T1, and in this way, we may have independent reasons to prefer T1 to T2, despite their empirical equivalence (see Laudan and Leplin 1991: 67). (Although Laudan himself is not a realist, as opposed to Leplin, this is a move that realists, who are typically dissatisfied with underdetermination arguments, can explore.) Alternatively, a realist could note that one of the two theories might be simpler than the other, and thus we may have good reason to prefer it (Musgrave 1985: 202–204).

Both responses, however, face difficulties. With regard to the second, simplicity is indeed a factor in theory choice. The question, however, is whether it plays an epistemic or simply a pragmatic role (see van Fraassen 1980, 1985). Simplicity would play an epistemic role if it provided reason to believe in the truth of the theory in question. But why does the fact that a theory is simpler than a rival provide any reason to believe that it is true (or approximately so)? If it doesn’t, then invoking simplicity would fail to support realism. Simplicity would play a pragmatic role, in turn, if this role only concerned us, the users of the theory, rather than the connection between the theory and the world. For instance, we may have more reason to accept a simpler theory than a more complex rival: it might be easier to work with the former. But this fails, of course, to provide a reason to believe in the simpler theory’s truth – it concerns us, not the world.

A realist could concede this point and note that, just as the anti-realist, realists can also invoke pragmatic reasons in theory acceptance (see Musgrave 1985: 203). That’s right. But if realists only had pragmatic reasons for the acceptance of theories, there wouldn’t be any reason to entitle them to claim that the selected theory is true (or approximately so). As a result, according to the realist’s own standards, realism would only be a superfluous metaphysical addition that fails to yield significant benefits. Suppose, however, that by systematically attempting to construct simple theories, scientists end up formulating empirically well-confirmed theories (Musgrave 1985: 203–204). Wouldn’t this indicate that simplicity is more than just a metaphysical addition and plays a genuine epistemic role in science? Well, if by trying to formulate simple theories, scientists obtain empirically adequate ones, the empiricist would be, of course, the first to applaud!
But this still leaves open the issue of whether such theories are true (or approximately so). And the latter issue is the crucial one for the realist.
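Before examining the reply to the first response, it may help to display the entailment-based move schematically. The reconstruction below is mine, not Laudan and Leplin’s own notation:

```latex
% The confirmation-transfer move, schematically (reconstruction mine).
% E: independent evidence; T: the broader theory;
% T1: one of the two empirically equivalent rivals;
% C(E, X): "E confirms X"; \vDash : entailment.
\[
  \frac{C(E, T) \qquad T \vDash T_1}{C(E, T_1)}
\]
% The step from premises to conclusion assumes that confirmation is
% transmitted from a theory to its logical consequences.
```

As we will now see, it is precisely this downward transmission step that the empiricist questions.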

With regard to the first response, note that, by hypothesis, T entails T1 (see Laudan and Leplin 1991: 67). But, in this case, it’s no longer clear that T could provide additional, independent confirmation for T1. After all, since T1 is derived from T, T doesn’t provide evidential warrant for T1: this would be question begging. In fact, as J. S. Mill noticed a long time ago and David Miller has spelled out in a more general setting (Miller 1994: 51–74), deductively valid arguments – just as the one from T to T1 – are circular. What makes them valid is the fact that the information contained in the conclusion T1 is already contained in the premise T. Thus, to claim that the premise supplies evidence for the conclusion is, in the limit, to claim that the premise supplies evidence for the information that it already provides, which is clearly question begging. But without asserting that a premise yields evidential warrant for its conclusion, the realist is not entitled to claim that the independent evidence that supports T also supports (even indirectly) T1. Given that T entails T1, to say so would be to beg the question. As a result, the argument is then blocked.

Still, for van Fraassen, the role played by the observable/unobservable divide is crucial. After all, as noted, it’s in terms of this distinction that the key notion of empirical adequacy is characterized. So, without distinguishing the observable and the unobservable, constructive empiricism – the view according to which science aims to provide empirically adequate theories – couldn’t even be formulated.

The considerations so far indicate that the observable/unobservable distinction plays different roles in different empiricist views. But noting that each of these roles is definitional can bring all of them together. In fact, the observable/unobservable divide helps to define significant notions within these empiricist views: empirical content in Carnap’s case, core and periphery beliefs in Quine’s, and empirical adequacy in van Fraassen’s. However, in each case, the notions that are thus defined have a significant epistemological role.

Carnap uses the notion of empirical content to distinguish scientific theories from metaphysical ones, and given that only the former have empirical content – and meaning – our commitment should be restricted to them (Carnap [1936] 1937). Quine employs the core/periphery distinction to constrain his otherwise rather radical form of holism: core beliefs, although still open to revision, are much harder to let go, given that, being widespread throughout science’s conceptual network, too many other beliefs depend on them (Quine 1953). Finally, van Fraassen uses the notion of empirical adequacy to constrain belief: given underdetermination arguments, there is no need to believe in the truth of a scientific theory, but believing in its empirical adequacy is enough. Hence, our beliefs can be restricted to the observable (van Fraassen 1980). In the end, in each case, the observable/unobservable divide is a mechanism of constraint: it restricts either our commitments or our beliefs.

There are, however, two major troubles with these proposals. First, how exactly should the observable and the unobservable be distinguished? Carnap and, to some extent, Quine have developed approaches to science in which the distinction is presupposed without, however, properly articulating a successful, clear-cut criterion. For instance, what is it that makes observational sentences observational? The fact that such sentences only contain terms that refer to observable entities presupposes that, somehow, we have already managed to draw the distinction in question. But how exactly was that done? Van Fraassen’s proposal, in response, tries to undercut the need for a philosophical answer to that question: the answer will be ultimately provided by science. But this has invited worries about an inherent circularity in the project. After all, as noted, the empirical adequacy of scientific theories is characterized in terms of the observable; but what is observable is, in turn, circumscribed by scientific theories themselves (see, e.g., Giere 1985). Moreover, van Fraassen’s move has also been criticized for being incoherent.
After all, the constructive empiricist presumably cannot believe it to be true that anything is unobservable, given that belief in the truth is restricted to the observable. But in this case, how can the observable/unobservable demarcation be correctly drawn? In fact, as Musgrave insists:

The constructive empiricist can accept [a theory] T as empirically adequate, that is, believe to be true only what T says about the observable. But “B is not observable by humans” cannot, on pain of contradiction, be a statement about something observable by humans. And, in general, the consistent constructive empiricist cannot believe it to be true that anything is unobservable by humans. And, if this is so, the consistent constructive empiricist cannot draw a workable observable/unobservable dichotomy at all. (Musgrave 1985: 208)

The constructive empiricist has, of course, addressed these worries (see van Fraassen 1980; van Fraassen 1985: 255–256, 303–305). But, arguably, an account of observation in which these worries don’t even get off the ground would be preferable. After all, what is ultimately at stake here is the significance of drawing a line between the observable and the unobservable. Typically, realists find it simply arbitrary to try to draw that line in the first place, let alone extract any epistemological significance from the resulting demarcation. So what is needed is an account that captures what is epistemologically significant about observation – one that even realists could grant. If such an account could be articulated, it may then be possible to motivate the demarcation that empiricists are looking for without creating the sense of arbitrariness. After all, the demarcation would be formulated based on features of observation that even realists agree are significant. This would alleviate the realists’ worries and would help explain where the sense of arbitrariness comes from. In brief, the idea is that if notions not necessarily shared by realists (such as a brute assumption of the primacy of vision) are invoked in the attempt to demarcate the observable and the unobservable, the resulting account could never be recognized by realists as epistemologically well motivated.

This immediately leads to the second difficulty with the empiricist accounts. Even if we grant that the observable/unobservable distinction could be drawn, how can we justify its epistemological role of constraining beliefs and commitments? Musgrave put the point in vivid terms:

Can a distinction [between observable and unobservable entities] which is admitted to be rough-and-ready, species-specific, and of no ontological significance really bear such an epistemological burden? (Musgrave 1985: 205)

On his view, the answer is clearly no. Ian Hacking, also a realist (although of a different kind), would concur. As he points out:

Taking van Fraassen’s view to the extreme you would say that you have observed or seen something by the use of an optical instrument only if human beings with fairly normal vision could have seen that very thing with the naked eye. The ironist will retort: “What’s so great about 20–20 human vision?” It is doubtless of some small interest to know the limits of the naked eye, just as it is a challenge to climb a rock face without pitons or Everest without oxygen. But if you care chiefly to get to the top you will use all the tools that are handy. Observation, in my book of science, is not passive seeing. Observation is a skill. Any skilled artisan cares for new tools. (Hacking [1981] 1985: 135; italics added)

In other words, Hacking explicitly challenges the assumption, taken more or less for granted, that there is something special about vision – or observation, narrowly construed. However, he also raises an additional issue about the nature of observation. Observation cannot be mere looking but requires the development of particular skills. As opposed to the constructive empiricist, Hacking conceives of observation as involving certain instruments. In this way, on Hacking’s view, we may be able, in the end, even to see with a microscope (Hacking [1981] 1985: 149–151). Clearly, a broader notion is in place here. Would it be possible to make sense of that notion in a minimal way, acceptable to both realists and empiricists? These are significant challenges. And if empiricism is to be a reasonable view, it’s crucial to be able to address them.

3 Levels of observation

Is there something special about observation that justifies the role it plays within empiricism? Interestingly, even the constructive empiricist hasn’t provided an account of what is epistemically special about observation. The closest we get is a discussion of what can be called the empiricist dogma, namely, the claim that experience is the only legitimate source of information about the world (see van Fraassen 1985). Given that observation is clearly a form of experience, presumably in this way we could explain what is special about observation: it’s the only proper source of information about the world. But, as van Fraassen (2002) has later pointed out, any conceptualization of empiricism in terms of the empiricist dogma is actually incoherent. And given that the discussion of experience in van Fraassen (1985) presupposes the empiricist dogma, with his rejection of the dogma, a different account is required.

In fact, even if we grant that empiricism should not be identified with the empiricist dogma, this is not sufficient to justify the claim that the empiricist need not provide an account of what makes observation special. After all, as noted, the constructive empiricist relies on the epistemic priority of the observable to delineate the distinction between truth and empirical adequacy. The question still remains: Why does observation have such an epistemic priority?

To answer this question, we first need to be clear about what it takes to have epistemic access to an object. According to Jody Azzouni, there are two forms of epistemic access to a given object (Azzouni 1997: 474–477; see also Azzouni 2004). We have a thick form of epistemic access if this access (i) is robust, (ii) can be refined, (iii) enables us to track the object, and is such that (iv) certain properties of the object itself play a role in how we come to know other properties of the object.

Let me say a few things about each of these conditions. The robustness of an epistemic access process, which can be instrumentally mediated or not, indicates that the access in question operates independently of what we believe (e.g. we blink, walk away, and the object is still there). We can also refine our access to the object (e.g. we can move in closer for a better inspection). Moreover, we can track an object spatiotemporally – say, an insect – and study its location for several hours. We can also “connect certain properties of the objects [. . .] with our capacity to know about their properties” (Azzouni 1997: 476). For example, we can easily explain why we can determine “how fast a flock of antelopes is moving: they are large opaque objects that do not travel very fast (even when panicked)” (Azzouni 1997: 476).

Although none of the four conditions is formulated in terms of the notion of observation, the connection between thick epistemic access and observation should be clear enough. Observation is one way of having thick epistemic access. However, in Azzouni’s view, observation is by no means the only way of obtaining such access. For instance, he takes it that we have thick epistemic access to atoms (via appropriate microscopes), even though strictly speaking, I would say, we have never observed them. As I’ll argue in what follows, the empiricist has no reason to accept that we do have thick epistemic access to atoms, although he or she will certainly stress that observation is a case – the most basic case – of thick epistemic access.
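For ease of reference, the four conditions can be gathered into a single schematic biconditional. The predicate letters below are mine, not Azzouni’s:

```latex
% Thick epistemic access, schematically (predicate names mine, not Azzouni's).
% a: a process of epistemic access; o: the object accessed.
% R(a):    the access is robust (it operates independently of what we believe);
% F(a):    the access can be refined;
% K(a, o): the access enables us to track o;
% P(a, o): properties of o play a role in how we come to know
%          other properties of o.
\[
  \mathrm{Thick}(a, o) \;\iff\; R(a) \wedge F(a) \wedge K(a, o) \wedge P(a, o)
\]
```

Note that ‘observation’ appears nowhere on the right-hand side – a point that will matter below, when the account is defended against the charge of circularity.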
In contrast with thick epistemic access, there is a thin form of access as well (Azzouni 1997: 479). We have thin epistemic access to an object if the access to this object is obtained through a theory that has five virtues: (i) simplicity, (ii) familiarity, (iii) scope, (iv) fecundity, and (v) success under testing. On Quine’s view, these five theoretical virtues provide good epistemic reasons to adopt a theory (see Quine 1976: 247). This form of access is considerably weaker than that provided by thick epistemic access. Using van Fraassen’s distinction between acceptance and belief (van Fraassen 1980, 1985), we can say that thin epistemic access allows us to accept the entities postulated by a given theory, but it may not provide reasons to believe that such entities exist. Only a strong form of thick epistemic access provides decisive reason to believe in the existence of the entities in question. (We can call it ultra-thick epistemic access!) What is this strong form?

Although Azzouni doesn’t make this point, it is important to highlight that even thick epistemic access comes in degrees (in this sense, there are different forms of such access). The degrees range from unaided observation of an object (the basic form of access) through measurements of certain properties of this object (e.g. divergence of an electric field) to measurement of immediate effects of this object (e.g. a track left by a pion in a cloud chamber). Michael Dickson discusses this point in the context of a very interesting examination of realism in quantum mechanics (see Dickson 1995: 125–131). Of course, there may be some overlap between these forms of epistemic access. For example, in some cases, the measurement of an immediate effect of an object is also a measurement of certain properties of this object. Despite this, observation is still more basic – and it yields the ultra-thick form of epistemic access. This is because both measurement of properties of an object and measurement of the immediate effects of an object ultimately depend on observation. It is ultimately through observation that these measurements are carried out, but observation alone does not generate these types of measurements (we often need appropriate instruments to make the measurements).

4 What is so special about observation?

My claim is that the empiricist can adopt the distinction between thick and thin epistemic access – suitably understood to include the strong form of thick epistemic access (ultra-thick) – to explain what is epistemically special about observation. After all, observation does provide us with thick epistemic access to objects (given that, as noted, it satisfies the four conditions of thick epistemic access). But observation also provides the most basic form of thick epistemic access (ultra-thick access), given that observation does not depend on any other type of measurement, but such measurements depend on observation. This, in turn, explains why we have reason to believe in the existence of observable (and observed) entities: because of the particularly strong form of access we have to them.

Note, however, that the empiricist would deny Azzouni’s contention that we have thick epistemic access to atoms and other unobservable particles. Clearly, observation provides us with thick epistemic access to observable objects. But in the case of unobservable objects, the robustness condition is not satisfied. There is no way in which we are justified in literally claiming that “we blink, walk away, and the unobservable object is still there”. For whether the unobservable object is there or not is what needs to be established, and it can only be established – if it can be established at all – ultimately via certain instruments (e.g. appropriate microscopy devices) and (in many cases) a theory that is taken to be simple, familiar, successful under testing, and so on. But, with regard to the theory that is used, such methodological criteria, providing only a thin form of epistemic access, give no reason to believe in the existence of the corresponding object. And with regard to the instruments, of course they need to be used to detect unobservable particles. But several such instruments depend heavily on the relevant theories (particularly quantum mechanics) for their construction and implementation, and as a result, they arguably only satisfy the criteria for thin epistemic access. Moreover, even the interpretation of the results will depend on theories. Thus, the nature of the unobservable objects that are detected will not be uniquely determined, given that different theories will yield different answers regarding the properties of the objects in question. So we have a case of underdetermination here. Thus, the empiricist is warranted in claiming that we have, at best, thin epistemic access to unobservable entities – not thick – and hence agnosticism about these entities is warranted.

Furthermore, as noted, there is an important asymmetry between the observable and the unobservable. Even when instruments of access to unobservable entities are constructed (such as various types of microscopy devices), they ultimately presuppose access to something observable: the outcomes of the devices are observable. This means that any thick form of epistemic access ultimately relies on observation. (This doesn’t mean, of course, that the results of observation are not open to revision. No foundationalism is presupposed here.) So observation does provide a very special form of thick epistemic access. Given the asymmetry between observation and other forms of epistemic access, we are entitled to take observation as an ultra-thick form of epistemic access. And this explains why observation is so special for the empiricist: it constrains belief at the right point, the point at which we have good reason to believe in the existence of the objects in question, without being subject to underdetermination considerations.

Moreover, we can also use thick epistemic access as a way of distinguishing observable entities from unobservable ones. The entities to which we have a thick form of access are those that are observable. Note that, given the way in which thick epistemic access was characterized, it doesn’t invoke the notion of observation. After all, remember that the characterization was cast in terms of four conditions. We have thick epistemic access to an object if the access: (i) is robust (i.e. it operates independently of what we believe), (ii) can be refined (for example, via better resolution), (iii) allows us to track the object (e.g. by determining its position and trajectory), and (iv) uses properties of the object to get to know other properties of the object. None of these conditions is characterized in terms of observation, even though, of course, observation satisfies them (after all, as noted, observation is a form of epistemic access). Thus, as opposed to Giere’s charge against van Fraassen, no circularity is involved here.

The resulting account is not incoherent either. After all, by delimiting the range of the observable, we thereby also delimit the range of the unobservable. Hence, as opposed to Musgrave’s criticism, the empiricist can coherently draw the line between the observable and the unobservable. In fact, for objects that fail to satisfy the named conditions (for example, electrons, which it is unclear can be tracked or to which we can have robust access), we may not have reason to believe that they are observable. Thus, we can come to believe to be true that something is unobservable by humans – if we suppose that quantum mechanics is empirically adequate, given that the theory would postulate such objects, although we only have a thin form of epistemic access to them. Furthermore, instrumentally mediated thick forms of epistemic access will count as observation, since the four conditions listed hold (and are known to hold).
It is then possible to provide a principled divide between observable and unobservable without unduly fixing the observable at the level of the naked eye. Furthermore, the four features of observation highlighted here are also features that the realist would grant as significant. Realists agree that observation is robust, can be refined, and allows us to track certain objects and to use certain properties of a given object to get to know other properties of that object. These are indeed minimal features of observation. Interestingly, these features also allow us to make sense of what is special about observation in a way that even realists can accept. Hopefully, in this way, the attempt to draw the distinction between the observable and the unobservable will no longer seem to be arbitrary. This doesn’t mean, of course, that the way of drawing the distinction suggested here is uncontroversial. It’s not. For instance, some realists may still insist that we have thick epistemic access to atoms, which empiricists will question. But at least the observable/unobservable distinction, as drawn here, will no longer seem arbitrary, given that significant features of observation – recognized as such even by realists – are being captured.

Finally, note that the notion of thick epistemic access is formulated in terms of attitudes. In particular, on this account, observation (ultra-thick epistemic access) involves a number of things we do to objects: we try to track these objects, we interact with them, we refine our mechanisms of access to them, we use properties of these objects to get to know other properties of these objects. Observation, as opposed to Hacking’s charge against constructive empiricism, is certainly not a passive, detached enterprise. It’s indeed a skill. And, as such, it can be improved, refined, made more sophisticated, and revised. With a notion of observation tied to thick epistemic access, the empiricist no longer has a passive concept lurking in the background.

Now, instruments, such as various kinds of microscopes, might be used in processes that are described by scientists as being observational. Physicists, for example, say things like, “With scanning tunneling microscopes (STM), we are finally able to see atoms!” This may initially seem bizarre. An STM, by systematically scanning a specimen with its tip, provides at best topographical information about it (see Chen 1993). The topographical information is then converted, through computer software, into visual information by using a variety of now-hidden coding conventions. However, with different coding conventions, the resulting images of atoms would look very different. Once again, in contrast to observation, we seem to have underdetermination. Thus, it is unclear that seeing applies in this case (see also Bueno 2011).

Now, Hacking admits that the notion of observation used by scientists in contexts like this is rather broad: “This is doubtless a liberal extension of the notion of seeing” (Hacking [1981] 1985: 151). But what motivates scientists to engage in such a use? Well, given that many features of observation seem to be found in thick epistemic access, the extension, although liberal, isn’t unnatural (see Azzouni 1997). That would indeed be so if we had thick epistemic access to the corresponding objects (including atoms) in the first place! It’s not clear, however, as noted earlier, that the robustness condition, for example, is actually met. But still, due to the close connection between observation and thick epistemic access, we can make sense of this way of speaking and why in fact we have here not literal seeing but “a liberal extension” of the notion. This is an additional illustration of the significant asymmetry in the use of instruments in science: instruments ultimately require observation, but not vice versa. This is, the empiricist would say, as it should be.

5 Conclusion
As argued, by articulating further the notion of thick epistemic access, it's possible to (i) provide a distinction between observable and unobservable entities and (ii) explain what is special about observation while (iii) avoiding the familiar charges that previous attempts at drawing the distinction have faced. Although more needs to be done, at least we have here a first step toward an empiricist account of observation.

Further reading
A collection of essays that offers careful studies of the cultural and philosophical context of logical empiricism as well as of the roles of experience and empirical knowledge within this program is R. Giere and A. Richardson (eds.), Origins of Logical Empiricism (Minneapolis: University of Minnesota Press, 1996). In Carnap's Construction of the World (Cambridge: Cambridge University Press, 1998), A. Richardson examines the role of Carnap's Aufbau in the development of logical empiricism. Some of the challenges that Quine raised to Carnap are examined in R. Creath, "Quine's Challenge to Carnap," in M. Friedman and R. Creath (eds.), The Cambridge Companion to Carnap (Cambridge: Cambridge University Press, 2007, 316–335). B. Monton (ed.), Images of Empiricism: Essays on Science and Stances, with a Reply from Bas C. van Fraassen (Oxford: Oxford University Press, 2007) collects excellent essays on constructive empiricism focusing on both The Scientific Image (van Fraassen 1980) and The Empirical Stance (van Fraassen 2002). A monograph that articulates a critical assessment of constructive empiricism is P. Dicken, Constructive Empiricism: Epistemology and the Philosophy of Science (Hampshire: Palgrave Macmillan, 2010). In a series of insightful papers, F. Muller thoughtfully engaged with a variety of issues raised by observability within constructive empiricism; see, in particular, "Can Constructive Empiricism Adopt the Concept of Observability?", Philosophy of Science 71 (2004): 637–654; "The Deep Black Sea: Observability and Modality Afloat", British Journal for the Philosophy of Science 56 (2005): 61–99, and "How to Talk about Unobservables" (co-authored with B. van Fraassen), Analysis 68 (2008): 197–205.

Acknowledgements
For extremely valuable discussions, over the years, about empiricism and observation, I wish to thank Jody Azzouni, Davis Baird, Steven French, R.I.G. Hughes, Brad Monton, and Bas van Fraassen.

References
Alspector-Kelly, M. (2001) "Should the Empiricist be a Constructive Empiricist?" Philosophy of Science 68, 413–431.
——— (2004) "Seeing the Unobservable: van Fraassen and the Limits of Experience," Synthese 140, 331–353.
Ayer, A. J. (ed.) (1959) Logical Positivism, New York: The Free Press.
Azzouni, J. (1997) "Thick Epistemic Access: Distinguishing the Mathematical from the Empirical," Journal of Philosophy 94, 472–484.
——— (2004) Deflating Existential Consequence: A Case for Nominalism, New York: Oxford University Press.
Bueno, O. (2011) "When Physics and Biology Meet: The Nanoscale Case," Studies in History and Philosophy of Biological and Biomedical Sciences 42, 180–189.
Carnap, R. ([1936] 1937) "Testability and Meaning," Philosophy of Science 3, 419–471, and 4, 1–40.
——— (1956) "The Methodological Character of Theoretical Terms," in H. Feigl and M. Scriven (eds.), Minnesota Studies in the Philosophy of Science, vol. 1, Minneapolis: University of Minnesota Press, pp. 33–76.
——— ([1932] 1959) "The Elimination of Metaphysics through Logical Analysis of Language," in A. J. Ayer (ed.) (1959), pp. 60–81. (The paper was originally published in German in Erkenntnis 2, 1932.)
Chen, C. J. (1993) Introduction to Scanning Tunneling Microscopy, New York: Oxford University Press.
Dickson, M. (1995) "An Empirical Reply to Empiricism: Protective Measurement Opens the Door for Quantum Realism," Philosophy of Science 62, 122–140.
Fraassen, B. C. van (1980) The Scientific Image, Oxford: Clarendon Press.
——— (1985) "Empiricism in the Philosophy of Science," in P. M. Churchland and C. A. Hooker (eds.), Images of Science: Essays on Realism and Empiricism, with a Reply from Bas C. van Fraassen, Chicago: The University of Chicago Press, pp. 245–308.
——— (1989) Laws and Symmetry, Oxford: Clarendon Press.
——— (2002) The Empirical Stance, New Haven: Yale University Press.
——— (2008) Scientific Representation: Paradoxes of Perspective, Oxford: Clarendon Press.
Giere, R. (1985) "Constructive Realism," in P. M. Churchland and C. A. Hooker (eds.), Images of Science: Essays on Realism and Empiricism (with a Reply from Bas C. van Fraassen), Chicago: The University of Chicago Press, pp. 75–98.
Hacking, I. ([1981] 1985) "Do We See through a Microscope?" Pacific Philosophical Quarterly 62, 305–322. (Reprinted in P. M. Churchland and C. A. Hooker (eds.), Images of Science: Essays on Realism and Empiricism (with a Reply from Bas C. van Fraassen), Chicago: The University of Chicago Press, 1985, pp. 132–152. Page references are to the reprinted version.)

Laudan, L. and Leplin, J. (1991) "Empirical Equivalence and Underdetermination," Journal of Philosophy 88, 449–472.
Miller, D. (1994) Critical Rationalism: A Restatement and Defence, La Salle, IL: Open Court.
Musgrave, A. (1985) "Realism versus Constructive Empiricism," in P. M. Churchland and C. A. Hooker (eds.), Images of Science: Essays on Realism and Empiricism, with a Reply from Bas C. van Fraassen, Chicago: The University of Chicago Press, pp. 197–221.
Quine, W.V.O. (1953) "Two Dogmas of Empiricism," reprinted in W.V.O. Quine, From a Logical Point of View, Cambridge, MA: Harvard University Press, pp. 20–46.
——— (1976) The Ways of Paradox and Other Essays (revised and enlarged ed.), Cambridge, MA: Harvard University Press.

9 STRUCTURAL REALISM AND ITS VARIANTS

Ioannis Votsis

1 Introduction
Perhaps the most influential realist view of recent years, structural realism owes its appeal to the ease with which it seems to explain away certain difficulties that afflict other, more traditional, versions of realism. Roughly formulated, it is the view that our epistemic and perhaps even our ontic commitments must be restricted to the structural, as opposed to the non-structural, features that successful scientific theories ascribe to the unobservable world. This view can be traced at least as far back as the early twentieth century, when advances in logic and set theory made it possible to express the notion of structure in more exact terms. Among its pioneers, one can count the physicist, mathematician, and philosopher Henri Poincaré, the mathematician and philosopher Bertrand Russell, and the historian and philosopher Ernst Cassirer. In what follows, we survey the conceptual foundations as well as the presumed advantages and disadvantages of structural realism.1 We start with a discussion of the notion of structure and proceed to explore variants of the view. Only then do we turn to motivations for these variants. The chapter ends with a number of challenges to structural realism and a brief consideration of some further issues.

2 What is structure?
What does it mean to be a realist about structures? Any attempt to answer this question must first put the notion of structure on a proper footing. We can reasonably assume that proposed conceptions of structure should meet two conditions: (i) they should be formal, so as to be able to delimit our epistemic and/or ontic commitments in a clear manner, and (ii) they should be sufficiently rich, so as to be able to adequately characterise the (proper) content of scientific theories. Below we consider four alternative conceptions of structure that appear to satisfy these two conditions but are otherwise distinct.

We begin with the well-known and well-understood set-theoretical notion of structure. A structure S, according to this conception, is an ordered tuple (U, R), consisting of a non-empty set of objects U, that is a domain or universe of discourse, and a non-empty set of relations R defined over the first set. Set theory is a powerful tool that allows us to express a wide array of relations between objects as well as relations between structures. For example, we can evaluate whether two structures are the same or whether one is part of another. In technical terms, two structures

are the same in all their formal features if and only if they are isomorphic. Satisfaction of this last notion requires the existence of a bijective mapping between the objects of two structures, that is, a mapping where each and every object in the one domain is mapped to a distinct object in the other and where no object remains unmapped at the end. Moreover, the notion requires that such a mapping respects the presence of relations between objects, that is, any relation between objects in one domain has its analogue between the corresponding objects in the other domain.

Championed by Michael Redhead (2001), the set-theoretic conception of structure offers a seamless entry into structural realism in that numerous philosophy of science discussions already employ set theory. Moreover, since structural realists are realists about the formal features of relations, the notion of a class of isomorphic structures proves apt, as such a class clearly identifies only formal features, abstracting away other details. Take as a toy example the specific, or as Redhead would call it 'concrete', structure S1 set up by the specific relation R1. Suppose R1 exhibits the formal features of irreflexivity and symmetry. If structural realists are realists only about such features, then their realism is circumscribed by the isomorphism class that S1 is a member of, that is, the class of all structures that are isomorphic to S1 and whose corresponding relations are irreflexive and symmetric like R1. That is, they cannot be realists about S1. Rather, they can only be realists about the so-called 'abstract structure' that S1 shares with all other members of the given isomorphism class.

The set-theoretic conception of structure is controversial, but perhaps much less so than the conception which utilizes a device known as the Ramsey sentence. A Ramseyfied scientific theory is a theory that has been logically weakened in a specific way. To apply this procedure to a theory, a distinction must first be drawn between two kinds of non-logical terms. The distinction's typical manifestation pits observational against theoretical terms.2 Ramseyfication weakens theories by replacing theoretical terms with variables and existentially quantifying over them. The effect is the same as the weakening one gets by moving from the claim that 'A philosopher of science authored this entry' to the claim that 'Someone authored this entry'. The first implies the second but not vice versa. Why do we dispose of theoretical terms and weaken theories in this way? Observational terms are meant to denote properties or relations that we are experientially or at least instrumentally acquainted with and are thus thought to rest on more secure foundations than theoretical terms. After all, the latter are more distal with respect to our experience by virtue of denoting unobservable properties and relations. In schematic form, a theory Θ(α1, α2, . . . αn; β1, β2, . . . βm), where α1 to αn are theoretical terms and β1 to βm are observational ones, is turned into its Ramseyfied version thus: ∃φ1 . . . ∃φnΘ(φ1, φ2, . . . φn; β1, β2, . . . βm), where now the theoretical terms have been replaced by an equal number of distinct and existentially bound variables φ1 to φn. The only information about unobservables that survives this process concerns that which is conveyed by the logical relations between the theoretical variables and between those variables and the observational terms.
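For illustration, consider a toy theory (with predicates invented for the occasion): Θ says that all electrons leave tracks, (∀x)(Ex → Tx), where 'E' ('is an electron') is theoretical and 'T' ('leaves a track in a detector') is observational. Its Ramsey sentence is (∃φ)(∀x)(φx → Tx): there is some property all of whose bearers leave tracks. Everything specifically 'electron-like' in E has been discarded; what survives is only the logical scaffolding connecting the bound variable to the observational term.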
Since logical relations are formal, some take the said weakening to respect structural realist commitments. Others have questioned the association of Ramsey sentences with structural realism, arguing that this formulation amounts to giving up realism altogether. We return to this very important issue in section 5.

Other proposed conceptions of structure include the partial structures approach of da Costa and French (2003). This approach aims to liberalise the traditional set-theoretic notion of a structure. Whereas the latter is understood in terms of a set of relations defined over some domain, the former replaces such relations with so-called 'partial relations'. In more detail, each partial relation Ri is conceived as an ordered triple 〈Ri1, Ri2, Ri3〉, where Ri1 is the set of n-tuples that satisfy Ri, Ri2 is the set of n-tuples that fail to satisfy Ri, and Ri3 is the set of n-tuples for which we do not know whether they satisfy Ri. A partial structure S is defined as a pair 〈U, R〉 where U is

once more the universe of discourse and R the set of partial relations, each member of which is an ordered triple in the sense just specified. As soon as such a notion is in place, one can also define a notion of partial isomorphism as a natural weakening of the notion of isomorphism between 'full' structures. What is the allure behind partial structures? Well, the appellation 'partial' signifies that the available information concerning those relations and structures is incomplete. The allure is therefore that one can represent some structure of the world without needing to specify everything there is to know about it. And what does all of this have to do with structural realism? If even the structural features of the world are only imperfectly reflected in the structures of our best scientific theories, then it seems that the notion of isomorphism is too strong, as it implies a perfect mirroring between the worldly and theoretical structures. The notion of a partial isomorphism, by contrast, is capable of capturing varying degrees of closeness to an isomorphic mapping and, therefore, appears more appropriate. Whether this is indeed true has been the subject of debate – see, for example, Lutz (2015). Moreover, the partial structures approach is not wedded to realism and has in fact been employed as a framework for anti-realist views – see, for example, Bueno (1997).

One final approach is worth considering here. There are those who think that the notion of structure needed to 'do the business' must be much more radically divorced from traditional conceptions of objects and properties. Under those conceptions, properties, including relations, are borne by objects and are thus ontologically secondary to them. Enter category theory, a framework that is at least as powerful as set theory and arguably powerful enough to act as a foundational theory of mathematics – see, for example, Lawvere (1966). According to some readings, this highly abstract framework reverses the dependency relation between objects and relations. Its primitive notions are objects and morphisms. But these are not objects in the commonsense or even in the set-theoretical meaning of the term. Instead, objects in category theory are themselves structures of sorts. Furthermore, even though category theory is rich enough to model set-theoretic elements in terms of morphisms from so-called 'terminal objects' to non-terminal ones, these morphisms are relations, and hence reference to anything like objects (traditionally conceived) seems to be avoided. If that is indeed the case, this framework helps motivate the coherence of at least one form of structural realism – see, for example, Bain (2013).
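Returning briefly to the partial-structures conception: because the machinery is wholly set-theoretic, it can be made concrete in a few lines of code. The following is a minimal sketch (my own toy illustration – the domain and relation are invented), representing a binary partial relation as the triple 〈R1, R2, R3〉 described above and checking that its components behave as required:

```python
from itertools import product

# A binary partial relation over a finite domain U, in the spirit of da Costa
# and French: R1 = pairs known to satisfy R, R2 = pairs known to fail R,
# R3 = pairs whose status is unknown. (Toy illustration only.)
U = {"a", "b", "c"}
R1 = {("a", "b"), ("b", "a")}                  # known to hold
R2 = {("a", "a"), ("b", "b"), ("c", "c")}      # known not to hold
R3 = set(product(U, repeat=2)) - R1 - R2       # everything left undecided

def well_formed(r1, r2, r3, domain):
    """The three components must be pairwise disjoint and jointly exhaustive."""
    pairs = set(product(domain, repeat=2))
    disjoint = not (r1 & r2) and not (r1 & r3) and not (r2 & r3)
    return disjoint and (r1 | r2 | r3 == pairs)

print(well_formed(R1, R2, R3, U))   # True: a legitimate partial relation
print(len(R3))                      # 4 pairs left open by our information
```

A partial isomorphism between two such structures is then a bijection required to respect only R1 and R2, leaving the undecided R3 unconstrained – precisely the sense in which it weakens full isomorphism.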

3 Epistemic and ontic variants
We already touched upon structural realism's curtailed epistemic commitments. Though these are shared by nearly all structural realists, they only constitute the whole story for the supporters of the epistemic version of the view.3 It thus seems fitting to begin with them. Epistemic structural realism holds that our knowledge of the world never exceeds a correct description of its structural features and that these features are reflected in parts of the mathematical structure of successful scientific theories. As with the other major version of structural realism, that is, ontic structural realism, the epistemic one comes in several variants. Two will be explored here.

The most prominent variant is that found in Worrall (1989) and first advocated by Maxwell (1968). We get at the structure of theoretical concepts by Ramseyfying the corresponding terms. This is a 'top-down' approach in that one starts with a scientific theory, which has already been suitably reconstructed in first-order predicate logic, and then peels away the meaning of theoretical terms via Ramseyfication to reveal the essence of that theoretical structure in second-order logic. To give a toy example, suppose our scientific theory asserts that all bodies with mass attract each other. On the further supposition that mass is a theoretical term that we denote in our formal language with the letter 'M' and attraction is an observational term that we denote with the letter 'A',

we can offer a first-order formal rendition of the theory as follows: (∀x)(∀y) [(Mx ∧ My ∧ x≠y) → Axy]. This can in turn be Ramseyfied in the following way: (∃Φ)(∀x)(∀y) [(Φx ∧ Φy ∧ x≠y) → Axy], where Φ is a (second-order) variable. What this aims to achieve is a form of modesty towards the referents of the theoretical terms. Thus, whatever mass is, if two distinct objects have it, they are mutually attracted. That is the extent to which mass can be known.

The other variant of epistemic structural realism can be found in Votsis (2005) and was first advocated by Russell (1927).4 In contrast to Worrall's variant, this has been classified as a 'bottom-up' approach in that one starts with the structures of observables, which are then used to infer (part of) the structure of the unobservables. The operative assumption here is that the latter is by and large faithfully reflected in the former. It is worth keeping a few things in mind when attempting to understand the Russellian approach. The first is that, much like the previous approach, it does not pretend to give an accurate portrayal of actual scientific practice. Indeed, unlike the Ramsey-sentence approach, it does not even take actual scientific theories as a departure point. The second thing to keep in mind is that the observables-versus-unobservables distinction utilised here is more akin to the internal-versus-external world distinction of indirect realism. Roughly speaking, our perceptions of the world are internal, and their distal causes, the things in the world, are external. Why is the one distinction merely akin but not identical to the other? Actually, in Russell's view the two are one and the same, but one can easily see that the old distinction's conception of what is internal is too narrow by modern standards. After all, most scientific observations nowadays involve instruments. A duly updated version of the distinction takes observables to be the results of measurement by sensory organs and/or scientific instruments. By contrast, the objects of those measurements are 'unobservables'. For example, the needle in a voltmeter pointing to three is observable, but the physical state being measured, that is, the electrical potential difference, is unobservable. The third and final thing to keep in mind is that this approach is typically articulated in terms of the set-theoretic notion of structure, though nothing prohibits its articulation by means of another notion.5

The attention bestowed on the epistemic version of structural realism in the last fifteen years pales in comparison to that bestowed on its ontic cousin. The latter view holds, first and foremost, that it is our ontological commitments that need curtailment. To be more precise, we should believe only in the existence of structures, thereby excluding objects and monadic properties, at least as they are traditionally conceived. Curtailing one's ontic commitments in this way implies a corresponding curtailment of epistemic commitments. After all, if only structures exist, only structural epistemic commitments need to be made. Ontic structural realism, much like its epistemic cousin, has been articulated in several distinct ways. Three variants are explored in what follows.

Radical ontic structural realism, which we may call 'radicalism' here for expedience, makes the highly contentious claim that only structures exist. That is, no objects or object-borne monadic properties should, according to this view, be admitted into our ontology.
But, the reader will surely ask, how can a structure exist without the existence of the very things it is meant to structure? Otherwise put, how can something be a relation without the things it relates, that is, without its relata? Many philosophers find this idea difficult to conceptualise. At the very least, its implementation would require a highly revisionary metaphysical and formal framework. Perhaps that's a moot point. After all, it is not clear whether anyone ever truly supported radicalism, though suggestive comments can certainly be found in James Ladyman (1998) and French (2014).

Eliminativist ontic structural realism, which we may call 'eliminativism' here for expedience, is a more modest view. Defended by French and Ladyman (2011), it holds that what needs to be dropped is the commitment to individuals but not objects. What is an individual? Traditionally,

this notion has been understood via Leibniz's famed principle of the identity of indiscernibles. The principle asserts that two objects that share all properties are in fact one and the same object. Alternatively put, if two objects are distinct, then there is at least one property possessed by one but not the other. This implies that objects get individuated through their properties. The eliminativist simply denies this and other conceptions of individuality. Objects, on this view, have to be reconceptualised as 'thinner' entities. That is precisely what French and Krause (2006) attempt to do when they argue for a minimal notion of object-hood that can be defined through the use of quasi-set theory. This theory in effect generalises set theory by allowing both (traditional) elements that obey the law of identity – that is, that every thing is such that it is equal to itself – and (non-traditional) elements that violate it to form sets. It is, of course, the latter sort of element that is utilised to represent objects as non-individuals. Quasi-set theory thus provides the formal setting where quantifying over non-individual objects becomes possible.

Last but not least, we have moderate ontic structural realism, or 'moderationism' for short, advocated by Esfeld (2004) and Esfeld and Lam (2011). Loyal to its name, this view underwrites traditional ontological categories like objects and individuality, in contrast to radicalism and eliminativism. It is, nevertheless, a form of ontic structural realism by virtue of rejecting intrinsic properties. What exactly is an intrinsic property? A hotly debated topic in its own right, the rough idea in this context is that it is a property that an object possesses independently of other objects or things altogether.6 By rejecting intrinsic properties, moderationists are not only discarding a traditional ontological category but, at the same time and in a more constructive tone, also highlighting the autonomy of relations. An example would perhaps make the point more conspicuous. Take the notion of inertial mass. Roughly speaking, it is a measure of a body's ability to resist acceleration. Inertial mass is standardly interpreted as an intrinsic property. So how do moderationists propose to understand it relationally? Luckily, the notion does make reference to something other than the object in possession of the property, namely acceleration, which itself can be understood in relational terms. Acceleration could not but stem from the presence of other objects or, to be precise, forces, and hence inertial mass seems to be more naturally construed as a relational property.

The foregoing disagreements between structural and other realists, but also among structural realists themselves, are sometimes formulated in terms of the idea of ontological priority. It may be claimed, for example, that structures are ontologically prior to other ontological categories like objects. What this priority amounts to is a much-discussed topic. Some take it to be a form of dependence. For example, 'x ontologically depends on y' means that x would not exist had y not existed. In the case at hand, eliminativists may say that thin objects would not exist had the relations that structure them not existed. Others opt for a weaker conception like supervenience. For example, 'x supervenes on y' means that there could not be a difference in x without a difference in y.
In the case at hand, eliminativists may say that there could not be a difference in thin objects without a corresponding relational difference. Yet still others consider the whole discussion a wild goose chase and claim that neither relations nor relata are ontologically prior to one another. For a more informed deliberation on these matters, the reader may turn to Kerry McKenzie (2014).
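To fix ideas, the dependence and supervenience glosses above are often rendered in modal notation roughly as follows (a standard formulation, offered purely for illustration and not drawn from the structural realist literature itself): x depends on y iff □(x exists → y exists), that is, necessarily, x exists only if y does; and the F-facts supervene on the G-facts iff (∀w)(∀w′)(w ≈G w′ → w ≈F w′), that is, any two possible situations alike in all G-respects are alike in all F-respects. On the first rendering, the eliminativist claim is that, necessarily, thin objects exist only if the relevant relations do; on the second, that no two possibilities differ over thin objects without differing relationally.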

4 Motivations
Up to now we have kept silent on what motivates structural realism. That is the aim of this section. Several motivations have been proposed over the years. Some were intended to serve the needs of specific variants but have been adapted to serve the needs of others. It is worth noting

that not all motivations can be so adapted. We begin with a motivation that was originally proposed to support one of the epistemic variants of structural realism but that has since become a sine qua non for all variants of the view. For reasons that will soon become apparent, let us call it the 'historical motivation'.

The historical motivation originates with Poincaré ([1905] 1952).7 To fully appreciate its pull, consider the following general remarks about scientific revolutions. Suppose it is true that scientific revolutions bring about all sorts of changes at the theoretical level. Suppose, moreover, that not everything changes at that level and indeed that some things remain the same. If it is reasonable to maintain that the things that change are not likely to truthfully represent any real features of the world, then, it may be claimed, it seems to some extent reasonable to maintain that the things that remain the same are likely to truthfully represent such features. This is especially the case when those things that remain the same do so across not just one but all subsequent revolutions. The historical motivation for structural realism identifies the structure of unobservable posits as that which remains (and ought to remain) invariant across revolutions. More precisely, in so far as it is only the structure that is responsible for any genuine contribution made by theoretical parts to the empirical success of a theory, it can be claimed that, with reference to the theoretical parts, it is structure alone that ought to be preserved in successor theories. Hence, it is structure that ought to be deemed worthy of realist commitments. Worrall (1989) deploys this argument to chart a middle way between the realist no miracles argument and the anti-realist pessimistic meta-induction argument. Structural realism, according to this view, does not make the success of science a miracle, because it maintains that successful scientific theories have got some part of the structure of the world right. But it also does not ignore the fact that various theoretical components of theories get discarded in the aftermath of a scientific revolution. Note, crucially, that this motivation for structural realism can be held accountable against the historical record of science. This includes the future course of science, thereby turning structural realism into an empirically falsifiable hypothesis.

Does the history of science vindicate structural realism? A number of case studies have been conducted in support of the view. Worrall (1989), for example, argues that the structure attributed to light by Fresnel's theory survives the dismissal of the ether in the form of the equations that are derivable from Maxwell's mature theory of electromagnetism and are still valid today. Another example is Ladyman (2011), who argues that part of the structure of the phlogiston theory of combustion survives into the modern theory of oxidation – see also Schurz and Votsis (2014). A third example can be found in Votsis and Schurz (2012), who argue that part of the structure of the caloric theory of heat is reflected in the modern kinetic theory of gases. Beyond the natural sciences, a small number of case studies have been carried out in the special sciences – see Ross (2008) for examples from the field of economics and French (2014) for examples from biology.
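The equations at stake in the Fresnel case are worth displaying (in modern notation; the presentation is mine, not the chapter's). For light reflected at an interface, Fresnel derived the amplitude ratios r⊥ = −sin(θi − θt)/sin(θi + θt) and r∥ = tan(θi − θt)/tan(θi + θt), where θi and θt are the angles of incidence and refraction. Fresnel obtained these from a model of light as a vibration in an elastic ether; Maxwell's theory rederives exactly the same equations from a radically different account of what light is. It is this survival of the equations – the structure – across the abandonment of the ether that Worrall's argument trades on.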
These and other case studies are obviously insufficient to demonstrate beyond doubt that some form of structural realism is correct. At best, they are meant to inductively lend credence to such a view. But even this form of support has not escaped the critics. Saatsi (2005), for example, argues that the Fresnel–Maxwell case supports a form of realism that is arguably different from structural realism. More generally, each and every case study presented in favour of structural realism, or any other form of realism for that matter, can be contested on interpretational grounds. That is why the need for interpretational clarity is now greater than ever. To conclude the discussion concerning the historical motivation: whether case studies support one as opposed to another version of structural realism, or indeed one as opposed to another version of realism (or even anti-realism), is an unresolved matter.8


An altogether different motivation, closely tied to Russellian structural realism, has as its source a foundational approach to epistemology. We may thus call it the 'foundational motivation'. Suppose that the foundation of all knowledge is observational. Clearly, this means that any knowledge of the unobservable world – understood in the broader sense explicated earlier – must be based on observational knowledge. The operative assumption here is that by and large the structure of the observables faithfully reflects (part of) the structure of the unobservables. Thus, some information about the latter can be inferred from the former. The reflection is established through some principles, one of which will be discussed in passing here. Keeping in mind the revised meaning of the terms 'observable' and 'unobservable', the principle holds, roughly, that differences in observables track differences in unobservables. This and related principles are defended by pointing out that their violation would make the successful tackling of the world around us, including the vital functions of survival and learning, virtually impossible.9,10

How much knowledge can such inferences license? If the unobservable world is no more than mirrored in our perceptions, then such knowledge is merely structural. To use a well-worn example, consider a map of the London underground. The dots in the map signify stations, but they are obviously not identical to those stations. At best, what such a map preserves are the relations between the target elements. In other words, such a map preserves only structure. In the case at hand, that means, very roughly, some indication about the relative distance between the stations and the locations where one can change lines. Similarly, observable elements are obviously not identical to unobservable elements, and hence, at best, the two sets of elements share structure.

Psillos (2001) has questioned the foundational motivation and in particular the aforesaid principles, deeming them either too strong when taken at face value and in their totality, or too weak when reduced in number and amended so as to be more sensible. In turn, Psillos' critique has been contested by Votsis (2005), where more sophisticated versions of the principles are proposed and where it is argued that these principles were never meant to guarantee isomorphic relations between the observable and unobservable worlds, at least not in all cases. The claim instead is that we gain some information about the structure of the unobservable world through the structure of the observable world. Whether this is the view originally defended by Russell is a further question. Moreover, and more importantly, whether the foundational motivation can deliver the goods is a non-trivial and highly contested topic.

Proponents of ontic structural realism rely more on motivations coming from modern physics; these do not seem capable of supporting epistemic variants of structural realism. Recall that eliminativists argue against the conception of objects as individuals. They do so because individuality does not seem to sit well with quantum mechanics. It has, for example, been argued that quantum particles fail to satisfy the principle of the identity of indiscernibles. In more detail, there are pairs of entangled particles that appear to have all and only the same properties, including spatial ones.
If this holds, perhaps particles cannot be said to possess individuality but are at most bare particulars of some sort – a thin conception of object-hood if there ever was one.11 On the other hand, it has been argued that quantum particles in entangled states can nevertheless be 'weakly discernible' by irreflexive relations between them (e.g. the relation of having opposite spin) and that this suffices for individuality (Saunders 2006). Similar concerns regarding object-hood and individuality arise in relation to spacetime points in the context of the general theory of relativity – see, for example, Bain (2013) and Stachel (2006).12

These motivations are no less controversial than the motivations for epistemic structural realism. In fact, given the uncertainties surrounding the interpretation of quantum mechanics, they are arguably even more controversial. One reaction, for example, has been to point out that if it turns out that a hidden variable interpretation of quantum mechanics is correct, then the principle of identity of indiscernibles may still be satisfied by quantum particles. That's because such

an interpretation envisions a disentangling of particles on the basis of their 'hidden' properties, for example location in Bohmian mechanics. Unless such interpretational disputes are empirically resolved, it would thus appear unwise to put much weight on the individuality motivation. Having said this, the motivation is not for naught, as it helps lay the conceptual groundwork for the philosophical view that best fits some interpretations of quantum mechanics, one of which may turn out true.

A final motivation worth noting is what we may call the 'representational motivation'. It has been employed in support of both the epistemic and the ontic forms of structural realism. Simply put, representation in the natural sciences is accomplished through the use of mathematical objects and the relations between them. Such objects, as structuralists in the philosophy of mathematics have been pressing, are nothing but places in a structure. That is to say, mathematical objects are structurally conceived or specifiable only up to isomorphism. By extension, the domains they represent are also structurally conceived or specifiable only up to isomorphism. Hence, mathematical modelling or representation naturally leads to structural realism. This seems to be the case irrespective of whether there is something over and above the abstract structure physical objects instantiate, that is, irrespective of whether one supports the epistemic or ontic form of structural realism.

As with all the others, the representational motivation has not gone unchallenged. One line of criticism is that mathematical objects should not be understood in structuralist terms. Indeed, although the structuralist view holds great sway in the philosophy of mathematics, it is not a foregone conclusion. Another line of criticism is that the same motivation can be, and has been, utilised for anti-realist views – Bueno (1997), for example, fuses empiricism with structuralism – and hence does not uniquely favour structural realism.
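Before turning to challenges, the slogan 'specifiable only up to isomorphism' can be given precise content. A standard illustration (mine, not the chapter's) is Dedekind's categoricity theorem for arithmetic: any two structures (N, 0, s) and (N′, 0′, s′) satisfying the second-order Peano axioms are isomorphic. The axioms thus pin down the natural numbers only as far as their shared structure; nothing in them settles what the numbers are beyond that structural profile. The representational motivation extends this thought to the physical domains that mathematics is used to represent.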

5 Challenges
Arguably the most famous argument against structural realism is the so-called 'Newman objection', named after the mathematician M.H.A. Newman, who first raised it in a critical review of Russell's The Analysis of Matter (1927). Newman (1928) spotted that, on a certain natural reading of Russell's view, structural realism appears to offer no substantial information about the world. That's because the claim that a collection of objects can be structured in a certain way follows purely as a matter of set theory. In more detail, for a set S with cardinality c, set theory implies that, provided c is large enough, any relation and hence any structure can be defined on S.13 Thus, if structural realists claim that we can identify the abstract structure ΣD (but not the concrete structure SD) of a certain domain D with cardinality cD, that seems tantamount to saying that there exist relations, we know not which, that have the formal features identified by ΣD. But the existence of all relations compatible with cardinality cD is guaranteed by set theory. Hence, if, on the one hand, that is what structural realists claim, then the only non-trivial component of that claim concerns the cardinality of the domain. If, on the other hand, structural realists attempt to say something more about those relations, then the structural restriction of their realism does not hold anymore, and hence the view is strictly speaking false.

Though it went largely unnoticed in the decades that followed, the objection was re-discovered by Demopoulos and Friedman (1985) – see also Ketland (2004) – who recast it in the context of the Ramsey-sentence approach to structural realism. Provided a theory is consistent and its observational consequences are true, the only non-trivial information about a domain of unobservables conveyed by the Ramsey sentence of that theory concerns its cardinality. For, just as in the original objection, so long as the cardinality of the unobservable domain is large enough (and the suppositions about consistency and observational truth hold), any relation and hence any

structure can be defined on it. Thus, saying that there exist relations between unobservables possessing formal features identified by the corresponding Ramsey sentence is not saying anything we don't already know through set theory or second-order logic. To be clear, the triviality charge concerns only the unobservable content of Ramseyfied theories. The observable content is presumably still made true by facts about the world. But, as Demopoulos and Friedman emphasise, curtailing one's epistemic commitment to the latter content is as good as giving up on realism.

As expected, a number of replies have arisen in a bid to save the view. Here we consider three. The first can be found in a letter Russell wrote to Newman – reproduced in his ([1968] 1998). In it he concedes that the claim that we can only know the (abstract) structure of the unobservable world is indeed trivial but points out that in the same book he had already framed the whole discussion around several metaphysical suppositions about the unobservable world. In short, Russell seems to be suggesting that we 'know' things in addition to structure. It may be objected that such a view is no longer a form of structural realism and therefore that this is a capitulation to other forms of realism. We need only note here that it is a non-trivial matter whether the result is a collapse of Russell's impure form of structural realism into an existing form of realism.

The second has been put forth by Worrall. Less conciliatory than Russell, Worrall thinks that the trouble lies with the orthodox interpretation of the Ramsey sentence's content. According to him, another interpretation must be adopted, one in which it turns out that the non-observational part of that content covers much more than claims about cardinality. That's because some claims constructed merely out of observational terms are in fact non-observational in the sense that we are incapable of directly checking their truth value through observation.14 Epistemic commitment to such statements is thus something that no empiricist would sanction. If correct, this move would defuse the charge that Ramsey-sentence structural realism collapses to anti-realist empiricism. Though, once again, one wonders whether the view construed this way does enough to differentiate itself from existing competitors.

The third response is due to Melia and Saatsi (2006). They argue that the best chance of rescuing the Ramsey-sentence approach to structural realism involves its augmentation with intensional operators, that is, operators such as 'it is nomologically necessary that . . .'. That's because, in their view, the Achilles' heel of that approach is the purely extensional treatment of theoretical terms. Such a treatment is incapable of expressing modal relations between properties, like the counterfactual dependence of some properties on others. That's where the aforesaid intensional operators can be of help. Once in place, they bind the theoretical variables of a Ramsey sentence in a way that allows them to convey not just information about the cardinality of the unobservable domain but also information about its modal properties. This is certainly a move that attempts to do justice to modal intuitions, but whether these intuitions are grounded in reality and whether realism needs modality are topics that have long been disputed.

Ontic structural realism is presumably immune to the Newman objection and therefore needs to provide no response to it.
This is not only because it never espoused the Ramsey-sentence formulation but also because it (often) incorporates a commitment to modality. Ladyman and Ross (2007), for example, take physics and, more broadly, science to describe modal structure. In so doing, their epistemic commitments clearly take them beyond the frugal commitments of empiricist anti-realists. That being said, the exact richness of those commitments is unclear, as the structure to be described is sometimes construed as a structure of phenomena. This raises concerns about the extent to which such a view can be legitimately portrayed as a version of realism. Indeed, proponents of the view have openly flirted with empiricism.

Several other worries have been voiced. One concerns the cogency of the structure-versus-non-structure distinction. It has been claimed by Psillos (1999) that the two categories form a continuum, and therefore the distinction cannot be drawn in a crisp way. A related

worry is that structural realists can only identify structure as that which survives scientific revolutions – if this were true, the historical motivation would of course be hollow. Attempts to address these worries have been made. Alas, considerations of space do not permit us to explore these in depth here.15 Suffice it to say that one way to draw the distinction that seems to put the brakes on the continuum worry is in terms of descriptions that are restricted to abstract features of the unobservable domain and those that seek to go beyond them. What is more, and to defuse the second worry, one can argue that abstract features, and in particular those that contribute to the success of the theory, are discernible prior to any scientific revolution by considering what is required to infer the relevant successful predictions – see, for example, Chakravartty (2007) and Votsis (2012).

A final worry concerns the scope of the view. Almost without exception, cases motivating structural realism centre on the natural sciences and in particular on physics. To what extent, then, can it be said that the view applies also to the other sciences? This is a theme pursued by Kincaid (2008), who argues that structural realism can be fruitfully applied across the social sciences, indicating, for example, that it fits nicely with causal modelling and equilibrium explanation cases. Unfortunately, this is not a simple matter to settle. For one thing, many social science theories are not mathematised. For another, many such theories cannot be said to posit unobservable entities. Neither issue is conclusive in its indictment of structural realism – after all, theories can be implicitly mathematisable, and the observable-versus-unobservable distinction can take different forms – but they certainly raise serious doubts. There is, needless to say, an easier way out. If physical reductionism is true, then the final ontology of social science reduces to the final ontology of physics, and scope no longer poses a threat. But we simply cannot assume this to be the case. Instead, we must add it to the long list of issues that need to be resolved before any concrete answers can be given.
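Finally, it may help to see computationally the cardinality point on which the Newman objection of section 5 turns (see also note 13). The sketch below (an illustration only; the names are invented) transports a finite structure onto an arbitrary equinumerous domain via a bijection, showing that the existence on a domain D of some relation with any given formal features is settled by cardinality alone:

```python
# The Newman point, computationally: whatever abstract (isomorphism-invariant)
# features a relation R has, some relation with those features exists on any
# domain of the same cardinality -- its existence is settled by size alone.

def transport(U, R, D):
    """Copy relation R (a set of pairs over list U) onto list D, |D| == |U|."""
    assert len(U) == len(D)
    f = dict(zip(U, D))                      # an arbitrary bijection U -> D
    return {(f[x], f[y]) for (x, y) in R}    # the image relation on D

U = ["a", "b", "c"]
R = {("a", "b"), ("b", "a")}                 # irreflexive and symmetric
D = [1, 2, 3]                                # any equinumerous domain

print(transport(U, R, D))                    # {(1, 2), (2, 1)}: same abstract structure

# By construction (U, R) and (D, transport(U, R, D)) are isomorphic, so the
# bare claim 'there is an irreflexive, symmetric relation on D' is trivially
# true once D has at least as many members as U.
```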

Notes
1 For a review of structural realism's historical trajectory, see Gower (2000).
2 How exactly it is drawn and whether it can be drawn at all is of course a major issue in this debate – see, for example, Putnam (1962). For now suppose that all of this is uncontroversial.
3 The sole exception is Katherine Brading and Elaine Landry (2006). They advocate what they deem to be a methodological version of structural realism, which, among other things, focuses on the role shared structure plays in relating predecessor to successor theories as well as high-level theories to low-level data.
4 Maxwell traces his own view back to Russell, among others. Though some elements of his view certainly derive from Russellian ideas, this attitude cannot be maintained with respect to the Ramsey-sentence formulation, which is demonstrably absent from Russell's own writings.
5 For example, the Russellian view can be given a Ramsey-sentence articulation – see Maxwell (1970).
6 See Lloyd Humberstone (1996) for some of the many possible ways of construing the distinction between intrinsic and extrinsic properties.
7 For a recent analysis of this motivation see Votsis (2011).
8 For more case studies see Chakravartty (2007) and Kitcher (1993).
9 See Frigg and Votsis (2015) and Russell (1927) for a lengthier exposition and critical evaluation of the requisite principles.
10 It is hard to imagine how this motivation can be of use to anything but the Russellian brand of structural realism. This is because the conception of observables and unobservables utilised here is different to those employed in other brands of structural realism.
11 One version of the argument (see French 2014: §2) sees the individuality of quantum objects as being underdetermined by the evidence. When faced with underdetermination, it is argued, the most prudent action is to drop the category for which no determination can be made. In the current case, the offending category is individuality.


12 Comparable arguments have been given for group theory and quantum field theory – see, for example, Cassirer ([1936] 1956) and Ladyman (1998).
13 A modern reconstruction of this set-theoretic reasoning goes like this: Extensionally understood, relations are ordered sets or sets of subsets. Thus, a relation in a domain of objects can be identified with some set of subsets of that domain. We know from the axiom of power set that every subset of a domain of objects exists. That means every relation of that domain exists.
14 As an example he offers the following assertion: "Nothing is older than 6000 years old" (2011, 167).
15 The reader is urged to consult Frigg and Votsis (2011: §3.2.1 & §3.5), where a brief discussion and further references are given.

References
Bain, J. (2013) "Category-Theoretic Structure and Radical Ontic Structural Realism," Synthese 190(9), 1621–1635.
Brading, K. and Landry, E. (2006) "Scientific Structuralism: Presentation and Representation," Philosophy of Science 73, 571–581.
Bueno, O. (1997) "Empirical Adequacy: A Partial Structures Approach," Studies in History and Philosophy of Science 28(4), 585–610.
Cassirer, E. ([1936] 1956) Determinism and Indeterminism in Modern Physics, New Haven: Yale University Press.
Chakravartty, A. (2007) A Metaphysics for Scientific Realism, Cambridge: Cambridge University Press.
Costa, N. da and French, S. (2003) Science and Partial Truth: A Unitary Approach to Models and Scientific Reasoning, Oxford: Oxford University Press.
Demopoulos, W. and Friedman, M. (1985) "Critical Notice: Bertrand Russell's The Analysis of Matter: Its Historical Context and Contemporary Interest," Philosophy of Science 52, 621–639.
Esfeld, M. (2004) "Quantum Entanglement and a Metaphysics of Relations," Studies in History and Philosophy of Modern Physics 35, 601–617.
Esfeld, M. and Lam, V. (2011) "Ontic Structural Realism as a Metaphysics of Objects," in A. Bokulich and P. Bokulich (eds.), Scientific Structuralism (Boston Studies in the Philosophy and History of Science), vol. 281, Dordrecht: Springer, pp. 143–159.
French, S. (2014) The Structure of the World: Metaphysics and Representation, Oxford: Oxford University Press.
French, S. and Krause, D. (2006) Identity in Physics: A Historical, Philosophical, and Formal Analysis, Oxford: Clarendon Press.
French, S. and Ladyman, J. (2011) "In Defence of Ontic Structural Realism," in A. Bokulich and P. Bokulich (eds.), Scientific Structuralism (Boston Studies in the Philosophy of Science), vol. 281, Dordrecht: Springer, pp. 25–42.
Frigg, R. and Votsis, I. (2011) "Everything You Always Wanted to Know about Structural Realism but Were Afraid to Ask," European Journal for Philosophy of Science 1(2), 227–276.
Gower, B. (2000) "Cassirer, Schlick and 'Structural' Realism: The Philosophy of the Exact Sciences in the Background to Early Logical Empiricism," British Journal for the History of Philosophy 8(1), 71–106.
Humberstone, L. (1996) "Intrinsic/Extrinsic," Synthese 108, 205–267.
Ketland, J. (2004) "Empirical Adequacy and Ramsification," British Journal for the Philosophy of Science 55(2), 287–300.
Kincaid, H. (2008) "Structural Realism and the Social Sciences," Philosophy of Science 75(5), 720–731.
Kitcher, P. (1993) The Advancement of Science, Oxford: Oxford University Press.
Ladyman, J. (1998) "What Is Structural Realism?" Studies in History and Philosophy of Science 29, 409–424.
——— (2011) "Structural Realism versus Standard Scientific Realism: The Case of Phlogiston and Dephlogisticated Air," Synthese 180(2), 87–101.
Ladyman, J. and Ross, D. (2007) Every Thing Must Go: Metaphysics Naturalised, Oxford: Oxford University Press.
Lawvere, F. W. (1966) "The Category of Categories as a Foundation for Mathematics," in S. Eilenberg, D. K. Harrison, S. MacLane and H. Röhrl (eds.), Proceedings of the Conference on Categorical Algebra, La Jolla: Springer-Verlag, pp. 1–20.
Lutz, S. (2015) "Partial Model Theory as Model Theory," Ergo 2(22), 563–580.
McKenzie, K. (2014) "Priority and Particle Physics: Ontic Structural Realism as a Fundamentality Thesis," British Journal for the Philosophy of Science 65(2), 353–380.


Maxwell, G. (1968) "Scientific Methodology and the Causal Theory of Perception," in I. Lakatos and A. Musgrave (eds.), Problems in the Philosophy of Science, Amsterdam: North-Holland Publishing Company, pp. 148–177.
——— (1970) "Structural Realism and the Meaning of Theoretical Terms," in S. Winokur and M. Radner (eds.), Analyses of Theories and Methods of Physics and Psychology, Minneapolis: University of Minnesota Press, pp. 181–192.
Melia, J. and Saatsi, J. (2006) "Ramseyfication and Theoretical Content," British Journal for the Philosophy of Science 57(3), 561–585.
Newman, M. (1928) "Mr. Russell's 'Causal Theory of Perception'," Mind 37, 137–148.
Poincaré, H. ([1905] 1952) Science and Hypothesis, New York: Dover.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
——— (2001) "Is Structural Realism Possible?" Philosophy of Science 68(3), S13–S24.
Putnam, H. (1962) "What Theories Are Not," in E. Nagel, P. Suppes and A. Tarski (eds.), Logic, Methodology, and Philosophy of Science, Stanford: Stanford University Press, pp. 240–251.
Redhead, M. (2001) "The Intelligibility of the Universe," in A. O'Hear (ed.), Philosophy at the New Millennium, Cambridge: Cambridge University Press, pp. 73–90.
Ross, D. (2008) "Ontic Structural Realism and Economics," Philosophy of Science 75(5), 731–741.
Russell, B. (1927) The Analysis of Matter, London: George Allen & Unwin.
——— ([1968] 1998) The Autobiography of Bertrand Russell (Vol. 2), London: George Allen and Unwin.
Saatsi, J. (2005) "Reconsidering the Fresnel–Maxwell Theory Shift: How the Realist Can Have Her Cake and Eat It Too," Studies in History and Philosophy of Science 36(3), 509–538.
Saunders, S. (2006) "Are Quantum Particles Objects?" Analysis 66, 52–63.
Schurz, G. and Votsis, I. (2014) "Reconstructing Scientific Theory Change by Means of Frames," in T. Gamerschlag, D. Gerland, R. Osswald, and W. Petersen (eds.), Frames and Concept Types (Studies in Linguistics and Philosophy, vol. 94), New York: Springer, pp. 93–109.
Stachel, J. (2006) "Structure, Individuality and Quantum Gravity," in D. Rickles, S. French and J. Saatsi (eds.), Structural Foundations of Quantum Gravity, Oxford: Oxford University Press, pp. 53–82.
Votsis, I. (2005) "The Upward Path to Structural Realism," Philosophy of Science 72(5), 1361–1372.
——— (2011) "Structural Realism: Continuity and Its Limits," in A. Bokulich and P. Bokulich (eds.), Scientific Structuralism (Boston Studies in the Philosophy and History of Science), Dordrecht: Springer, pp. 105–117.
——— (2012) "The Prospective Stance in Realism," Philosophy of Science 78(5), 1223–1234.
——— (2015) "Perception and Observation Unladened," Philosophical Studies 172(3), 563–585.
Votsis, I. and Schurz, G. (2012) "A Frame-Theoretic Analysis of Two Rival Conceptions of Heat," Studies in History and Philosophy of Science 43(1), 105–114.
Worrall, J. (1989) "Structural Realism: The Best of Both Worlds?" Dialectica 43(1–2), 99–124.
——— (2011) "Underdetermination, Realism and Empirical Equivalence," Synthese 180(2), 157–172.

10 ENTITY REALISM

Matthias Egg

This chapter discusses an approach known as entity realism or experimental realism (I will use the two names interchangeably), which was introduced in the early 1980s by Ian Hacking and Nancy Cartwright and seeks to defend some middle ground between traditional scientific realism and antirealism. In section 1, I will introduce Hacking’s version of entity realism. The centerpiece of his account, the argument from manipulability, will be discussed in sections 2 and 3. I will reply to some of the points raised by Hacking’s critics, postponing some other points to later sections, because they find their answer in Cartwright’s version of entity realism, to which I will turn in sections 4 to 6. I will conclude in section 7 by briefly advertising what I take to be the relevant heritage of entity realism in the contemporary debate on scientific realism.

1 From theories to entities
Scientific realism is usually taken to involve two closely related commitments, one to the (approximate) truth of our best theories, the other one to the existence of the entities postulated by these theories. The classic statement of this assumption is due to Wilfrid Sellars (1963: 91): "As I see it, to have good reason for holding a theory is ipso facto to have good reason for holding that the entities postulated by the theory exist." The starting point for Hacking's entity realism is the observation that these two commitments can come apart: "There are two kinds of scientific realism, one for theories, and one for entities" (1983: 26). Hacking then uses the example of the electron to show how realism of the second kind need not be accompanied by realism of the first kind:

The scientific-entities version of [realism] says we have good reason to suppose that electrons exist, although no full-fledged description of electrons has any likelihood of being true. Our theories are constantly revised; for different purposes we use different and incompatible models of electrons which one does not think are literally true, but there are electrons, nonetheless. (27)

One of Hacking's reasons to abstain from theory realism is the historical variability of our theories (see P. Vickers, "Historical challenges to realism," ch. 4 of this volume), and another one has to do with the disunity of different theoretical models of electrons. But how can he then assert that "there are electrons, nonetheless"? The reason is that he takes the strongest argument for realism about unobservable entities not to depend on realism about theories:

Experimental work provides the strongest evidence for scientific realism. This is not because we test hypotheses about entities. It is because entities that in principle cannot be “observed” are regularly manipulated to produce new phenomena and to investigate other aspects of nature. They are tools, instruments not for thinking but for doing. (262)

The important contrast here is between thinking and doing (or, as the title of Hacking's book expresses it, between representing and intervening). As long as only representation is concerned, even the most useful concepts ("tools for thinking") can be interpreted non-realistically. But when it comes to intervention, Hacking's suggestive idea is that something cannot be used as a tool unless it is real. And since, according to Hacking, successful intervention can occur in the absence of true representation, we can have entity realism without theory realism.1 I will further discuss this argument from manipulability in the next two sections, but let me first tackle a more general issue regarding Hacking's entity realism. The fact that its central claim is about the existence of unobservable entities has prompted some commentators to regard it as primarily or even exclusively metaphysical (Morrison 1990: 1; Elsamahi 1994: 174–175; Nola 2002: 1). This reading receives some initial support from a remark in the introduction to Hacking (1983), where Hacking claims that his book is not about logical or epistemological questions but about metaphysical ones (1–2). But the context of this remark shows that it is only a very specific cluster of epistemological questions which he wants to exclude from his investigation, namely the ones about the rationality of science raised by the writings of Thomas Kuhn and Paul Feyerabend. When it comes to scientific realism, Hacking makes it very clear that he has no intention of separating claims about what we know from claims about reality:

I run knowledge and reality together because the whole issue would be idle if we did not now have some entities that some of us think really do exist. If we were talking about some future scientific utopia I would withdraw from the discussion. (Hacking 1983: 28)

It is therefore not correct to say that Hacking’s realism privileges metaphysics at the expense of epistemology. This distinguishes it from the variants of entity realism defended by Michael Devitt and Brian Ellis, who explicitly emphasize the metaphysical (as opposed to epistemological) character of their doctrines. However, there is also significant overlap between these positions. In particular, both Ellis (1987: 56–58) and Devitt (1997: 111–113) endorse what I take to be a key idea of experimental realism (see section 4), namely that the success of science in some cases, but not in others, warrants belief in the truth of scientific claims and that the difference between the two kinds of cases has to do with whether the claims are causal or not.

2 Is manipulability necessary for reality?

Many critics of entity realism have pointed out that tying the reality of hypothetical entities to our ability to manipulate them may have some counterintuitive consequences. On the one hand, there might be some real entities which we will never be able to manipulate. On the other hand, some entities which are taken to be manipulable may turn out not to exist. If this is so, then manipulability is neither a necessary nor a sufficient criterion of reality. I will argue in the next section that Hacking has an answer to the second of these issues. For now, I will look at the first one and show that it necessitates a modification of the criterion of manipulability.

In the final paragraph of his book, Hacking (1983: 275) confesses "a certain skepticism, about, say, black holes" and other "long lived theoretical entities, which don't end up being manipulated." In a later paper (1989), he adds a case study on gravitational lenses, which leads him to an even more explicit endorsement of antirealism in astrophysics, insofar as it deals with entities which we will never be able to manipulate. Since astrophysics is rather unique in that respect, Hacking thinks that this is a very restricted form of antirealism, but his critics point out that it may quickly spread to many other scientific domains. Alan Gross (1990: 426) mentions "historical sciences like geology, cosmology, and evolutionary biology." In these disciplines, "we must describe a series of events for which there are no witnesses, a series unavailable to experimentation. If Hacking's criterion is applied, the events that evolutionary biology reconstructs will permanently lack reality" (427). This leaves the experimental realist with a dilemma: either he bites the bullet and abandons realism about all these domains, thereby significantly reducing the attractiveness of experimental realism, which initially promised to reflect closely the commitments of scientists themselves, or he accepts the reality of some non-manipulable scientific entities, thereby compromising the central feature that distinguishes experimental realism from traditional scientific realism.

As we saw, Hacking opts for the first horn of the dilemma as far as astrophysics is concerned, and Dudley Shapere is quick to point out that this leads to a conflict between Hacking's antirealism and the commitments implicit in the scientific enterprise:

The fact is that scientists do build on what they have learned to make inferences even in cases where they cannot lay hands on the entities about which the inferences are made; they use what they have already found out – whether by interfering actively or passively observing – to probe further. Surely this is a most important aspect of the scientific enterprise. To disallow it is to truncate that enterprise for the sake of a contention which is unsupported by any real argument, scientific or otherwise – in short, a dogma. (Shapere 1993: 148)

In Shapere’s view, Hacking employs an over-restrictive reading of the term use when he restricts realism to entities which we can actively manipulate in order to interfere with something else. A more liberal (and more appropriate, according to Shapere) reading would extend realism to entities which we can use as tools of inquiry even though we cannot actively manipulate them. All it takes to count as a “tool” in this sense is that we use (what we take to be) our knowledge about such entities to make inferences about other objects of inquiry. For example, Shapere (1993: 146–147) suggests that we might some day use gravitational lenses to find out more about the presence and density of dark matter, cosmic distances, or the precise value of the Hubble constant (see also Dorato 1988: 178–179 for an analogous argument regarding the potential “use” of black holes). Hacking sometimes writes as if he could accept Shapere’s more liberal reading of use, particu- larly when he claims that we do not so much infer the existence of electrons from our manipu- lative success as presuppose their existence when we use them as tools of inquiry (Hacking 1983: 265). This presuppositional status of the entities in question is independent of whether we “use” them in Hacking’s narrow or in Shapere’s wide sense. While such a shift from manipulation to presupposition would save Hacking from the problems associated with the first horn of the dis- cussed dilemma, it would considerably weaken his experimental argument for realism: scientists

122 Entity realism have all kinds of presuppositions, and it is not clear why an inspection of them should be our best guide to reality.2 As I will argue in section 4, there is a more promising strategy to avoid the consequences of the first horn. The shift should not be from manipulation to presupposition but from manipulation to causal explanation.

3 Is manipulability sufficient for reality?

While it is quite natural to question the necessity of Hacking's criterion, it is not easy to see how manipulability could fail to be a sufficient condition for reality. The very notion of "manipulating something" seems to imply that this something must exist. Nevertheless, the sufficiency of the manipulability criterion has been questioned in two ways. The first one is mainly epistemic: if we knew that we are able to manipulate some entity x, we could indeed conclude that x is real, but in fact we can at most believe that we have this ability. And this is not sufficient for the reality of x, for our belief might turn out to be false (Gross 1990). The second criticism is ontological: it claims that for some entities which can reasonably be regarded as manipulable, a rigorous ontological analysis shows that they should not be taken as real (Gelfert 2003). I think that the entity realist has a reply to these two criticisms. Let me address them in turn.

Hacking (1983: 23) famously expresses his realism about electrons by means of the slogan "if you can spray them then they are real." The phrase is catchy but not unassailable. Gross (1990: 425) comments, "when we spray electrons, we say nothing criterial about their existence: we confirm only the causal property we call spin." In other words, manipulation may warrant belief in certain properties but not a commitment to the entities supposed to possess these properties.3 Accordingly, manipulability does not save entity realism from the pessimistic induction:

When an old theory is discarded, its entities may be discarded along with it; if so, their causal properties, those still acknowledged by the new theory, will be redistributed in accordance with that theory. Some similar fate may well await the electron in some future science, its spin, mass, and charge reinterpreted and reassigned within a new ontology. (Gross 1990: 426)

The point that our knowledge of unobservable (causal) properties is more secure than our knowledge of unobservable entities has become important in later developments based on entity realism (see Chakravartty 2007 and the literature mentioned in section 7). Here I limit myself to a brief sketch of how Hacking can answer Gross's objection. First, Hacking (1983: 271) admits that realism about entities does not immediately follow from realism about properties:

Once upon a time it made good sense to doubt that there are electrons. Even after Thomson had measured the mass of his corpuscles, and Millikan their charge, doubt could have made sense. We needed to be sure that Millikan was measuring the same entity as Thomson.

But this does not mean that our realism must forever be confined to nothing but properties, as Hacking goes on to note: "Things are different now. The 'direct' proof of electrons and the like is our ability to manipulate them using well-understood low-level causal properties" (274). Unfortunately, Hacking is not very explicit about how manipulation takes us from property realism to entity realism. Based on the example he uses in this context (the polarizing electron gun PEGGY II), I take his idea to be the following: what is employed in building and operating a polarizing electron gun is not just the property of spin or the property of charge but the systematic interplay of these properties, along with detailed knowledge of how they manifest themselves in various specific circumstances. With this in mind, Gross's scenario of a future theory which may reinterpret and reassign spin, mass, and charge within a new ontology needs to be qualified: it will not be enough that such a new ontology includes the properties of spin, mass, and charge in some arbitrary way. Rather, it will have to respect the systematic coherence of these properties which we now associate with the concept of the electron. But this is to say that this new ontology will – at least in some sense – have to include electrons.4

The second argument against the sufficiency of manipulability as a criterion of reality is the following. According to Axel Gelfert (2003), the problem of the manipulability criterion is not that it yields realism about entities which may turn out to be unreal but about entities which we already know to be unreal, namely the various kinds of quasi-particles in solid-state physics. Examples include unoccupied electron states ("holes") in semiconductors, excitons (bound electron-hole states), and magnons (coherent excitations of electron spins). All these entities can, in some sense, be manipulated and should therefore count as real by the experimental realist's standards. Gelfert takes this to be an unacceptable consequence due to "a built-in mismatch between actual and intended referent" (260) of quasi-particle terms. More precisely, since quasi-particles are "in some way parasitic on sets of electrons" (ibid.), what we are really referring to when we speak of quasi-particles are, according to Gelfert, sets of electrons. And since one and the same set of electrons can form the material basis of different quasi-particles, there is a failure of reference for quasi-particle terms in the sense that terms purporting to refer to different entities actually refer to one and the same entity. This seems to preclude realism about quasi-particles.5

However, the experimental realist should resist the presupposition that we can only successfully refer to entities of which we can identify the "material basis." Why not identify quasi-particles in the same way as electrons, namely via their causal properties? Of course, the identification of particular individuals may not always be possible, but the same is true for electrons, as the long-standing debate on quantum non-individuality shows (French 2015).

In this context, I must concede that Gelfert's critique of entity realism addresses an important point. As we will see in the next section, so-called "home truths" play an important role in Hacking's realism, and they may lead to a mistaken description of the entities with which such a realism is concerned. For example, there is a danger of regarding it as a home truth that electrons are individual objects, which, as just remarked, might be inadequate. However, it is not clear whether entity realism is really committed to the kind of home truths Gelfert attributes to it. This can be seen in the following argument against the reality of quasi-particles:

If one were to grant quasi-particles the same degree of reality as electrons, one would violate the very intuitions that lie at the heart of entity realism; namely, that there is a set of basic substantive entities that have priority over composite or derivative phenomena. A proliferation of entities would evolve into what one might call inflationary realism . . . (Gelfert 2003: 257)

I do not see that it is "at the heart of entity realism" to think of electrons as "basic substantive entities." Such an entity realism would indeed be problematic in the light of all the problems encountered by the attempt to interpret contemporary quantum field theories in terms of fundamental particles (Kuhlmann 2015). But this theoretical particle notion is precisely not what Hacking is after when he describes electrons as tools. Instead, he emphasizes their similarity with other things we can use as tools, be they quasi-particles or other "composite or derivative" objects, such as hammers. The entity realist can therefore happily grant the reality of quasi-particles and thereby maintain that manipulability is a sufficient criterion for reality.

4 From manipulation to explanation

The experimental realist's focus on entities as opposed to theories is sometimes taken to imply that he only believes in the existence of certain entities but not in the truth of any further statement about them, because that would already require a commitment to theory. Such a crude entity realism invites an obvious line of criticism, which is most bitingly expressed by Alan Musgrave (1996: 20):

To believe in an entity, while believing nothing further about that entity, is to believe nothing. I tell you that I believe in hobgoblins. . . . So, you reply, you think there are little people who creep into houses at night to do the housework. Oh no, say I, I do not believe that hobgoblins do that. Actually, I have no beliefs at all about what hobgoblins do or what they are like. I just believe in them.

Phrased in this way, the charge is overstated, since no entity realist has ever urged us “to believe in an entity, while believing nothing further about that entity.”6 We already saw that Hacking unhesitatingly ascribes some causal properties to electrons. These property ascriptions are the “home truths” on which we rely when designing apparatus that use electrons to produce new phenomena (Hacking 1983: 265). What Hacking denies is that the conjunction of these home truths is part of a single theory.

But might there not be a common core of theory, . . . which is the theory of the electron to which all the experimenters are realistically committed? I would say common lore, not common core. There are a lot of theories, models, approximations, pictures, formalisms, methods and so forth involving electrons, but there is no reason to suppose that the intersection of these is a theory at all. (264)

This raises the question of what differentiates the "common lore" of home truths to which the experimental realist is committed from the kinds of theories he wants to exclude from his realism. Many commentators have pointed out that Hacking does not have a satisfactory answer to this question and that, consequently, entity realism without theory realism is an unstable or even incoherent position (Morrison 1990; Elsamahi 1994; Psillos 1999: 256–258; Massimi 2004; Chakravartty 2007: 31–33).

Hacking's antirealism about theories is fueled partly by his distrust in inference to the best explanation (IBE) (Hacking 1983: 52–57). He considers the explanatory power of a hypothesis "a feeble ground" (53) for belief that the hypothesis is true. This is the basis for his distinction between theoretical claims and the home truths of entity realism: the latter, unlike the former, can be warranted without relying on IBE.

Some people might have had to believe in electrons because the postulation of their existence could explain a wide variety of phenomena. Luckily we no longer have to pretend to infer from explanatory success (i.e. from what makes our minds feel good). [The physicists who built PEGGY II] don’t explain phenomena with electrons. They know how to use them. (271–272)


But, as David Resnik (1994) as well as Richard Reiner and Robert Pierson (1995) point out, it is unclear how successful manipulation should warrant belief in electrons (and the corresponding home truths) if not by means of some kind of IBE. At any rate, Hacking does not explain how this is supposed to work. Entity realism therefore needs a more informative account of how warrant accrues to claims about theoretical entities, and such an account can be found in Cartwright's (1983) version of realism, to which we now turn.

According to Cartwright, theory realism and entity realism both rely on IBE, but there is a crucial difference in the kinds of explanations that are invoked. In the case of theoretical explanations, the explanatory power resides in certain laws. In this case, Cartwright agrees with Hacking that IBE fails, because a law can be explanatory without being true. This is what she alludes to in entitling her book How the Laws of Physics Lie. But causal explanations work by postulating certain entities which are assumed to bring about the explanandum. These entities can only perform their explanatory role if they actually exist; hence IBE is valid in this case. Cartwright summarizes the relevant difference between the two kinds of explanation as follows:

What is it about explanation that guarantees truth? I think there is no plausible answer to this question when one law explains another. But when we reason about theoretical entities the situation is different. The reasoning is causal, and to accept the explanation is to admit the cause. (99)

Adopting the terminology of Suárez (2008), we can state the proposal by saying that inference to the most likely cause (IMLC) is a valid inference scheme, while inference to the best theoretical explanation (IBTE) is not. The arguments supporting this distinction will be discussed in the next two sections.

5 Redundancy of explanations

If we are to infer the truth of a hypothesis on the basis of its ability to explain a given phenomenon, we need to make sure that no other hypothesis provides an equally satisfactory explanation of the phenomenon. Cartwright (1983: 76) calls this "the requirement of non-redundancy," and she claims that it applies only to causal, not to theoretical explanations. In one sense, this is a statement about a difference in the attitude of scientists. Discussing an example from quantum optics, Cartwright shows how physicists happily employ different mutually incompatible theoretical approaches to explain a certain phenomenon without asking which one is "the true one." The different approaches "are useful for different purposes; they complement rather than compete" (81). This contrasts with the attitude taken towards causal accounts: "We do not have the same pragmatic tolerance of causal alternatives. We do not use first one causal story in explanation, then another, depending on the ease of calculation, or whatever" (ibid.).

But the difference between causal and theoretical explanation with respect to the non-redundancy requirement does not just lie in our difference of attitude but also in our means of eliminating a redundancy of explanations (assuming that it ought to be eliminated). IBTE relies crucially on evaluating the so-called theoretical virtues (simplicity, explanatory strength, consilience, etc.) of the different accounts, and these evaluations are often unreliable. The situation is different for IMLC, says Cartwright: "unlike theoretical accounts, which can be justified only by an inference to the best explanation, causal accounts have an independent test of their truth: we can perform controlled experiments to find out if our causal stories are right or wrong" (82).


The difference between theoretical and causal explanations in terms of redundancy can be questioned on various levels. Let me first get out of the way a rather trivial objection, which appears in the discussion of the underdetermination of theories by evidence (see also D. Tulodziecki, "Underdetermination," ch. 5 of this volume). One might think that there is always (that is, also in the case of causal explanations) a multitude of mutually incompatible possible explanations between which experimental evidence of the type Cartwright mentions is unable to decide. Consider, for example, a hypothesis H which constitutes a genuine (causal or theoretical) explanation for some phenomenon. Now it is always possible to construct an alternative hypothesis H' asserting that the world behaves exactly as H says if the universe is observed, but if no one is looking, the world behaves in ways which are incompatible with H.7 Obviously, we should not let cases like this cast doubt on our warrant for belief in H, even though there is (by construction) no empirical way to determine whether H or H' is true. The reason is simply that this kind of underdetermination does not just threaten our warrant for scientific hypotheses but for virtually all our knowledge. It is, of course, possible to deny that any claim can be warranted, but to do so is to leave the debate on scientific realism and to go for general skepticism. Thus, there is a sense in which the experimental realist, too, relies on the evaluation of theoretical virtues: if a hypothesis is so ad hoc that taking it into account would result in general skepticism, then it is excluded even if there is no experimental evidence against it.

A slightly more interesting line of criticism has been advanced by Pierson and Reiner (2008), who claim that the difference of attitudes taken by physicists towards causal and theoretical explanation, respectively, does not imply that we should entertain the same difference of attitude. "What physicists presently do, or historically have done, exercises no logical constraint on what they can do, could have done, or should do" (278). In reply, one should first note that Cartwright does not merely report the attitudes of physicists but also gives a reason that justifies these attitudes (namely the different means of eliminating redundancy mentioned earlier). Furthermore, insofar as philosophy of science should be informed by how science is actually practiced, what physicists do or have done actually does exercise some constraint on how IMLC and IBTE should be viewed, although Pierson and Reiner are of course correct that it is not a logical constraint.

This leads to the question whether, in actual scientific practice, the difference between causal and theoretical explanations as regards the non-redundancy requirement is really as unambiguous as Cartwright portrays it. Margaret Morrison (1994) suggests an alternative account of the quantum optics example discussed by Cartwright (1983: 78–81), concluding that "causal explanation exhibits the same kind of redundancy present in theoretical explanation" (127). It seems to me, however, that what is established by Morrison's analysis is not so much a redundancy of causal accounts but a multiplicity: various causal factors contribute to the effect under study, without any of them being redundant. Such multiplicity of causal accounts by itself is no threat to the validity of IMLC.
Instead, it is what we should expect, especially if we insist (as Morrison rightly does) that causes need to be specified more precisely than in Cartwright's generic causal story. It should not surprise us that a phenomenon can in general not be attributed to a single cause but usually results from a combination of several causal factors or partial causes. It is in this sense that we may tell more than one causal story, but there is no redundancy here, because all these stories are needed to get the causal account right. Of course, we need to specify in each case precisely which causal factors are relevant (and to what degree), but this is often possible by the same methods we have already encountered: controlled intervention and manipulation in laboratory conditions.

Nevertheless, Morrison's point that causal explanation and the requirement of non-redundancy are not as intimately linked as Cartwright supposes is well taken. On the one hand, it would be implausible to claim that the elimination of redundant causal explanations by experimental tests is always possible. On the other hand, if we allow the kind of specification described in the previous paragraph, whereby a multiplicity of causal stories makes up a non-redundant causal account, we may also have to allow a similar specification in the case of theoretical explanations (Morrison 1994: 136). In short, there may be cases of IMLC failing to meet the non-redundancy requirement, as well as cases of IBTE which conform to it.

6 The robustness of causal inference

An important intuition concerning the difference between theoretical and causal explanations is that the central explanatory role in the former case is played by the laws from which the explanandum is derived,8 while the emphasis in the latter case is rather on the objects, events, and processes which the explanans describes as causally responsible for the phenomena in question. Correspondingly, IBTE is usually expected to establish the truth of some laws, while IMLC is taken to argue for the reality of some causally efficacious entities. Cartwright (1983: 76) derives from this the following argument for the claim that IMLC is more robust than IBTE:

Causes make their effects happen. We begin with a phenomenon which, relative to our other general beliefs, we think would not occur unless something peculiar brought it about. . . . The peculiar features of the effect depend on the particular nature of the cause, so that – in so far as we think we have got it right – we are entitled to infer the character of the cause from the character of the effect. But equations do not bring about the phenomenological laws we derive from them (even if the phenomenological laws are themselves equations). Nor are they used in physics as if they did.

According to Cartwright, the difference between causal and theoretical explanations lies in the fact that there is a relation of "making happen" or "bringing about" only in the former, not in the latter case. Let us therefore try to understand more precisely what "bringing about" means. As a first approximation, one might think that if an event of type x brings about an event of type y, then the occurrence of x is a necessary condition for the occurrence of y, so that, whenever we observe y, we can be sure that x occurred as well. In actual scientific practice, however, things are rarely that simple, because it is usually possible that y came about in an alternative way which does not involve x. In order to establish that x brings about y, one then needs to show (by rigorous experimental and statistical analysis) that at least some of the occurrences of y cannot be attributed to such alternative factors. It will then in general be false to say that whenever y occurs, x has occurred as well. But it will be true that at least in some cases (though we may not know in which ones), y would not have occurred if x had not occurred. The truth of this counterfactual statement for at least some tokens of the event type y is an essential part of what it means for x to bring about y, and in conjunction with a sufficient number of y-occurrences, it establishes realism with respect to x. Now it is important to note that this result does not depend on any particular account of causation or causal explanation. Since all it presupposes is a humdrum type of causal reasoning that is part of all experimental science, any account purporting to capture the essential features of causality needs to incorporate it in some way.

Is it possible to construct an analogous argument in the case of theoretical explanations, thereby questioning the purported difference in robustness between IMLC and IBTE? I do not think so. The reason is that, since IBTE aspires to establish the truth of explanatory laws, one would need to replace the counterfactual statement concerning events x and y with a counterfactual (more precisely: counterlegal) statement of the following form: "If law L did not hold, phenomenon y would not occur."9 And, in contrast to the case of a causal explanation, giving a theoretical explanation in general does not entail such a statement. To see this, note that neither the historically influential deductive-nomological model of explanation nor its contemporary successor, the unificationist model, connects the fact that L explains y to any speculation about what would happen if L did not hold. All that is involved in these accounts of "L explains y" is that a statement of L, conjoined with statements of antecedent conditions, allows us to derive a statement describing y. There are, of course, other ways of thinking about theoretical explanation, but the fact that the two standard accounts mentioned here do not involve the counterlegal statements that would be needed to support IBTE suffices to show that such statements are not as essential to theoretical explanation as the counterfactual statements supporting IMLC are, as demonstrated, essential to causal explanation. This implies that IMLC is more robust than IBTE.
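To make the contrast explicit, the two inference patterns can be schematized as follows. This is my own regimentation, not Egg's or Cartwright's notation; "□→" stands for the counterfactual conditional, and O(x) abbreviates "an event of type x occurs":

\[
\begin{aligned}
&\text{IMLC (causal): for at least some tokens of } y:\quad \neg O(x) \;\Box\!\!\rightarrow\; \neg O(y)\\
&\text{IBTE (theoretical): would require the counterlegal:}\quad \neg L \;\Box\!\!\rightarrow\; \neg y
\end{aligned}
\]

The first conditional is delivered by ordinary experimental practice; the second is delivered neither by the deductive-nomological nor by the unificationist account, which only require that L together with antecedent conditions entails a description of y.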

7 Entity realism today

In our quick tour of entity realism, we witnessed that the main problem of Hacking's version, namely its unacknowledged dependence on IBE (discussed in section 4), is overcome in Cartwright's version with its explicit defense of IMLC at the expense of IBTE. This latter version, however, comes with its own problems, related to the fact that the distinction between causal and theoretical explanation may not be as clear-cut as Cartwright supposes. Beside the criticism by Pierson and Reiner (2008) and Morrison (1994) discussed in section 5, I should mention Christopher Hitchcock (1992), who gives examples from quantum mechanics purporting to show that, contra Cartwright, scientific practice is actually opposed to treating IMLC differently from IBTE (for discussion, see Egg 2014: 64–66).

Due to this battery of criticism (to which one may add all the negative assessments of Hacking's position referenced in sections 2–4), it is widely believed that entity realism is not a viable position. Nevertheless, a modified version of Cartwright's realism, which I have defended under the label "causal realism" (see Egg 2012, 2014, 2016), is alive and well.10 It is based on earlier defenses of entity realism by Steven Clarke (2001) and Mauricio Suárez (2008), in particular on the latter's idea of transforming Cartwright's distinction between causal and theoretical explanations into a distinction between causal and theoretical warrant. Let me conclude this chapter with a brief outline of causal realism and its merits.

Even those who are suspicious of the dichotomy between causal and theoretical explanations (discussed in sections 5 and 6) need to acknowledge that Cartwright has identified important features of explanations which make a difference concerning how much we should trust the corresponding instances of IBE. By shifting from explanation to warrant, causal realism seeks to retain these crucial insights while getting rid of the problematic dichotomy between causal and theoretical explanation. The centerpiece of causal realism is therefore the distinction between two kinds of warrant generated by IBE (causal vs. theoretical warrant), and the claim is that this distinction can be drawn even in the absence of a clear-cut distinction between causal and theoretical explanations. The way to achieve this is to give a general description of the aforementioned features of explanations and to interpret them as (individually necessary and jointly sufficient) criteria for causal (as opposed to merely theoretical) warrant. The first of these criteria is non-redundancy, as discussed in section 5. The second is called material inference, which takes into account the differences in the relation between explanans and explanandum described in section 6. The third criterion, empirical adequacy, incorporates the idea (not discussed here but alluded to in note 8) that explanations in general are not strict deductions. (For a detailed discussion of these three criteria, see Egg 2014: chapter 4.)


By relying on this explicit distinction between causal and theoretical warrant, causal realism is able to provide responses to the points of criticism that were left open in the previous sections of this chapter, because it eschews Cartwright's problematic dichotomy between causal and theoretical explanations. Furthermore, a case study on the discovery of the neutrino (Egg 2014: chapter 5) indicates that causal realism makes better sense of the practice of particle physics than either full-blown scientific realism or antirealism, insofar as the notion of causal warrant precisely captures what differentiates the "direct detection" of a particle from other (less convincing) means of confirming its existence. Finally, conjoining the conceptual apparatus of causal realism with a strategy sketched by Anjan Chakravartty (2008) leads to a successful response to one of today's most influential challenges to scientific realism, namely Kyle Stanford's (2006; see his "Unconceived alternatives and the strategy of historical ostension," ch. 17 of this volume) argument from unconceived alternatives (Egg 2016). Here again, the distinction between causal and theoretical warrant proves its usefulness by showing that there is no reason to expect causally warranted claims to be replaced in the future by hitherto unconceived alternatives. I therefore contend that, although entity realism as presented by Hacking and Cartwright has some flaws, it holds indispensable lessons for the contemporary (and future) debate on scientific realism.

Notes

1 For a deeper analysis of theory realism as a foil for entity realism, see Seager (2012). For a detailed overview of different possible readings of Hacking's argument, see Miller (2016).
2 In this context, Morrison (1990: section 4) speaks of a "transcendental turn" in the interpretation of Hacking's argument, and she concludes (as I do) that this move is unsuccessful.
3 In the realism debate, "entities" usually refers to concrete particulars, such as objects, events, or processes, as opposed to abstract entities, such as laws or properties. This is in contrast with a more general use of the term entity in contemporary metaphysics, according to which properties are themselves entities.
4 An adherent of a purely descriptivist theory of reference might complain that the term "electron" in the new theory will no longer refer to the same entity as the one we now denote by that term. However, this cannot be Gross's complaint, because such an account of reference would equally undermine the continuity of reference of the terms "mass," "charge," etc., which Gross takes to be unproblematic. Hacking (1983: chapter 6) clearly rejects the descriptivist theory but does not actually endorse Hilary Putnam's causal theory of reference either: "We do not need any theory about names in order to name electrons. . . . We need only be assured that an obviously false theory is not the only possible theory. Putnam has done that" (81–82).
5 Lack of space prevents me from doing justice to all the complexities involved in the issue of quasi-particles. See Falkenburg (2015) for a more detailed response to Gelfert's argument.
6 Devitt (1997: 22) may be an exception. For tactical reasons, he chooses to defend an entity realism which is merely committed to the existence of entities of certain kinds but not to the properties science ascribes to them. But when it comes to scientific explanation, even Devitt has to acknowledge that "phenomena are explained not by the mere existence of, say, electrons, but by electrons being the way our theory says they are" (109).
7 The example is modeled on a similar example from Kukla (1993). Stanford (2006: 11–14) gives a concise review of the different strategies pursued in the literature to construct empirically equivalent rivals of existing theories. Stanford's critique of all these attempts is also the source of my argument against taking them seriously in the context of scientific realism.
8 Another reason (which I cannot discuss here) for Cartwright to oppose theory realism is that such derivations often do not proceed by strict deduction but make use of approximation techniques which sometimes improve on the descriptive accuracy of the laws in question. See Cartwright (1983: 103–127) and Egg (2012: 267–268) for details.
9 By speaking of y as an occurrent phenomenon, I am assuming that the explanandum is a singular fact, but the argument is just the same if y is itself a (phenomenological) law, as Cartwright assumes in the earlier quotation.
10 For another recent version of entity realism, see Eronen (2017).


References

Cartwright, N. (1983) How the Laws of Physics Lie, Oxford: Clarendon Press.
Chakravartty, A. (2007) A Metaphysics for Scientific Realism: Knowing the Unobservable, New York: Cambridge University Press.
——— (2008) "What You Don't Know Can't Hurt You: Realism and the Unconceived," Philosophical Studies 137, 149–158.
Clarke, S. (2001) "Defensible Territory for Entity Realism," British Journal for the Philosophy of Science 52, 701–722.
Devitt, M. (1997) Realism and Truth (2nd ed.), Princeton: Princeton University Press.
Dorato, M. (1988) "The World of the Worms and the Quest for Reality," Dialectica 42, 171–182.
Egg, M. (2012) "Causal Warrant for Realism about Particle Physics," Journal for General Philosophy of Science 43, 259–280.
——— (2014) Scientific Realism in Particle Physics: A Causal Approach, Boston: De Gruyter.
——— (2016) "Expanding Our Grasp: Causal Knowledge and the Problem of Unconceived Alternatives," British Journal for the Philosophy of Science 67, 115–141.
Ellis, B. (1987) "The Ontology of Scientific Realism," in P. Pettit, R. Sylvan and J. Norman (eds.), Metaphysics and Morality: Essays in Honour of J.J.C. Smart, Oxford: Basil Blackwell, pp. 50–70.
Elsamahi, M. (1994) "Could Theoretical Entities Save Realism?" PSA 1994: Proceedings of the Biennial Meetings of the Philosophy of Science Association 1, 173–180.
Eronen, M. (2017) "Robust Realism for the Life Sciences," Synthese, DOI 10.1007/s11229-017-1542-5.
Falkenburg, B. (2015) "How Do Quasi-Particles Exist?" in B. Falkenburg and M. Morrison (eds.), Why More Is Different: Philosophical Issues in Condensed Matter Physics and Complex Systems, Berlin: Springer, pp. 227–250.
French, S. (2015) "Identity and Individuality in Quantum Theory," in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2015 ed.). URL: http://plato.stanford.edu/archives/fall2015/entries/qt-idind/
Gelfert, A. (2003) "Manipulative Success and the Unreal," International Studies in the Philosophy of Science 17, 245–263.
Gross, A. G. (1990) "Reinventing Certainty: The Significance of Ian Hacking's Realism," PSA 1990: Proceedings of the Biennial Meetings of the Philosophy of Science Association 1, 421–431.
Hacking, I. (1983) Representing and Intervening: Introductory Topics in the Philosophy of Natural Science, Cambridge: Cambridge University Press.
——— (1989) "Extragalactic Reality: The Case of Gravitational Lensing," Philosophy of Science 56, 555–581.
Hitchcock, C. (1992) "Causal Explanation and Scientific Realism," Erkenntnis 37, 151–178.
Kuhlmann, M. (2015) "Quantum Field Theory," in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Summer 2015 ed.). URL: http://plato.stanford.edu/archives/sum2015/entries/quantum-field-theory/
Kukla, A. (1993) "Laudan, Leplin, Empirical Equivalence and Underdetermination," Analysis 53, 1–7.
Massimi, M. (2004) "Non-Defensible Middle Ground for Experimental Realism: Why We Are Justified to Believe in Colored Quarks," Philosophy of Science 71, 36–60.
Miller, B. (2016) "What Is Hacking's Argument for Entity Realism?" Synthese 193, 991–1006.
Morrison, M. (1990) "Theory, Intervention and Realism," Synthese 82, 1–22.
——— (1994) "Causes and Contexts: The Foundations of Laser Theory," British Journal for the Philosophy of Science 45, 127–151.
Musgrave, A. (1996) "Realism, Truth and Objectivity," in R. S. Cohen, R. Hilpinen and Q. Renzong (eds.), Realism and Anti-Realism in the Philosophy of Science, Dordrecht: Kluwer Academic Publishers, pp. 19–44.
Nola, R. (2002) "Realism through Manipulation, and by Hypothesis," in S. Clarke and T. D. Lyons (eds.), Recent Themes in the Philosophy of Science: Scientific Realism and Commonsense, Dordrecht: Kluwer Academic Publishers, pp. 1–23.
Pierson, R. and Reiner, R. (2008) "Explanatory Warrant for Scientific Realism," Synthese 161, 271–282.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
Reiner, R. and Pierson, R. (1995) "Hacking's Experimental Realism: An Untenable Middle Ground," Philosophy of Science 62, 60–69.
Resnik, D. B. (1994) "Hacking's Experimental Realism," Canadian Journal of Philosophy 24, 395–412.
Seager, W. (2012) "Beyond Theories: Cartwright and Hacking," in J. R. Brown (ed.), Philosophy of Science: The Key Thinkers, London: Continuum Books, pp. 213–235.


Sellars, W. (1963) Science, Perception and Reality, London: Routledge and Kegan Paul.
Shapere, D. (1993) "Astronomy and Antirealism," Philosophy of Science 60, 134–150.
Stanford, P. K. (2006) Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, New York: Oxford University Press.
Suárez, M. (2008) "Experimental Realism Reconsidered: How Inference to the Most Likely Cause Might Be Sound," in S. Hartmann, C. Hoefer and L. Bovens (eds.), Nancy Cartwright's Philosophy of Science, New York: Routledge, pp. 137–163.

11 TRUTHLIKENESS AND APPROXIMATE TRUTH

Gerhard Schurz

1 Introduction

Most scientific theories contain very general hypotheses applying to different kinds of objects. As a consequence, they involve simplifications and idealizations that neglect (small) deviations from reality. So these theories are, strictly speaking, false. Moreover, there is a wealth of empirical evidence for the deviations. So these theories are, strictly speaking, falsified. Nevertheless, scientific theories are highly successful in their explanations and predictions of the empirical facts discovered by scientists. For scientific realists, the best explanation of the empirical success of scientific theories is the assumption that what they assert is "approximately true" or "close to the truth," in the sense of being in good correspondence with reality. This argument goes back to Putnam (1975: 73) and is called the "no miracles" argument. Moreover, scientific realists assume that in spite of the fact that most scientific theories are falsified, there is objective progress in science in the sense of convergence to the truth: the successive theories in the history of a scientific discipline come closer and closer to the objective truth.1

It is hard to see how the notion of "closeness to truth" could be explicated by probabilistic means: if a theory is falsified by evidence, then its probability given the evidence is zero, whence it should be rejected by standard (e.g., Bayesian) rules of confirmation. This is a shortcoming of the standard accounts of confirmation, as it makes a probabilistic evaluation of the closeness-to-the-truth of falsified theories impossible. For this reason, Popper developed the conception of truthlikeness, or verisimilitude (1962, 1963, 1972). This is an alternative, non-confirmation-theoretic account that allows for an evaluation of the truthlikeness of theories even if they are false.

2 Popper’s concept of truthlikeness (verisimilitude) Popper’s conception of truthlikeness starts from the observation that although most scientific theories are strictly speaking false, some false theories are closer to the truth than other false theories. Thus we can understand objective theoretical progress as progress in truthlikeness. For example, although we know that Ptolemy’s, Kepler’s, as well as Newton’s theories of our solar system are

133 Gerhard Schurz false, Newton’s theory was closer to the truth than Kepler’s theory, which was in turn closer to the truth than Ptolemy’s theory. Based on this idea, Popper characterized truthlikeness as follows:2

(1) Popper-truthlikeness, informal explication:
(1.1) A theory A is closer to the truth than a theory B (in Popper's sense), abbreviated A >P B, iff (i) A has more true, but not more false, (logical) consequences than B, or (ii) A has not less true but less false (logical) consequences than B, where in the comparative account, "more" is understood as set-inclusion (⊇).

We can make this idea logically precise within a formal toy language L. For most purposes we identify L with a finite language of propositional logic with n propositional variables, p1, . . ., pn, and the usual logical operators (¬, ∨, ∧, → (material implication), ↔, ⊤ (verum), ⊥ (falsum)). For some purposes we let L be the language of full first-order logic. Small letters a, b, c (possibly indexed) stand for arbitrary sentences and capital letters A, B, C for theories, that is, arbitrary sets of sentences. In first-order logic we use F, G, R for predicate letters, di for individual constants, xi for individual variables, together with the quantifiers (∀, ∃) and identity (=). "⊨" stands for logical consequence, C(A) = {S: A ⊨ S} denotes the set of A's (classical) logical consequences, T stands for the complete truth in L, that is, the set of all true sentences (according to a given valuation), and F stands for the set of all false sentences in L (note that T and F are complete theories in L). With this formalism Popper's comparative notion of "being at least as truthlike as," abbreviated as ≥P, is defined as follows (with ">P" and "≡P" being defined from "≥P" in the usual way):

(2) Popper-truthlikeness, formal explication:

At =def C(A)∩T is the set of A's true consequences. Af =def C(A)∩F is the set of A's false consequences.
A ≥P B iff (a) At ⊇ Bt and (b) Af ⊆ Bf.
A >P B iff A ≥P B and not B ≥P A.
A ≡P B iff A ≥P B and B ≥P A.
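For orientation, here is a minimal worked example of the definition (my own illustration, not Popper's; assume a language with two variables, p1 and p2, both true):

\[
\begin{aligned}
&\text{Let } A = p_1 \wedge p_2 \text{ and } B = p_1. \text{ Both are true, so } A_f = B_f = \emptyset. \\
&\text{Since } A \vDash B, \text{ we have } B_t \subseteq A_t; \text{ moreover } p_2 \in A_t \setminus B_t. \\
&\text{Hence } A >_P B.
\end{aligned}
\]

This matches the intuition that, among true theories, the logically stronger one is closer to the truth.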

3 Classification of different kinds of truthlikeness

Before we come to the problems of Popper's account, we introduce the following two-times-two classification of different concepts of truthlikeness that helps to sort out the basic intuitions and applications of this concept:

(1.) Concerning the gradation (or scale) type of the truthlikeness concept one can distinguish between comparative and numerical notions of truthlikeness:
(1.1) Comparative notions of truthlikeness have the form "theory A is closer (or at least as close) to the truth than theory B," abbreviated as A >i (≥i) B, where the index "i" specifies the different explications of this notion. Comparative notions in the line of Popper's explication (1) do not establish a total but merely a partial ordering of theories according to their truthlikeness, that is, there may exist theories A, B that are incomparable in truthlikeness.
(1.2) Numerical notions of truthlikeness explicate this notion in the form of a real-valued measure that has the form "the truthlikeness of A is r," in short ti(A) = r. Numerical notions are more fine-grained than comparative notions; on the other hand, their definition involves more arbitrary conventions. Comparative and numerical truthlikeness notions must be ordinally equivalent in the sense that A ≥i B iff ti(A) ≥ ti(B).

Popper’s concept of truthlikeness, (1), is comparative. However, in the passages mentioned (see fn. 1) Popper also proposed a quasi-numerical notion of truthlikeness, which we abbreviate as tP(A). Instead of the sets of true and false consequences of a theory A, this notion considers the information contents of these sets based on a given logical probability measure, which are abbre- viated as ctt(A) and ctf(A), respectively:

(2*) Popper’s comparative-numerical concept of truthlikeness:

Theory A is numerically at least as close to the truth as a theory B (A ≥Pn B) iff A's truth content is at least as great as B's truth content and B's falsity content is at least as great as A's falsity content. Formally: A ≥Pn B iff ctt(A) ≥ ctt(B) and ctf(A) ≤ ctf(B).

A fully numerical measure of Popper-truthlikeness can be defined by setting tP(A) =def ctt(A) − ctf(A), that is, A's truth-content minus A's falsity-content.
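As a small numerical illustration (my own, assuming Popper's content measure ct(X) = 1 − p(X) for a uniform logical probability p over the four valuations of p1, p2, both taken to be true; for a true theory A, the truth content At is logically equivalent to A itself):

\[
\begin{aligned}
&A = p_1 \wedge p_2:\quad ct_t(A) = 1 - \tfrac{1}{4} = \tfrac{3}{4},\quad ct_f(A) = 0,\quad t_P(A) = \tfrac{3}{4};\\
&B = p_1:\quad ct_t(B) = 1 - \tfrac{1}{2} = \tfrac{1}{2},\quad ct_f(B) = 0,\quad t_P(B) = \tfrac{1}{2}.
\end{aligned}
\]

So the logically stronger true theory again comes out as numerically more truthlike.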

(2.) Concerning the content of the hypotheses whose truthlikeness is evaluated one can distinguish between qualitative and quantitative truthlikeness:
(2.1) Qualitative truthlikeness relates to the fact that theories consist of several conjunctive parts, and a theory A is closer to the truth the more parts of A are true and the fewer parts of A are false. If a theory is represented as a disjunction of constituents or a set of models, this consideration applies to the (conjunctive) parts of the constituents or models, respectively. Qualitative truthlikeness can be applied to theories expressed by qualitative (binary) predicates, without any quantitative concepts.
(2.2) Quantitative truthlikeness applies to hypotheses or theories that contain quantitative (metric) concepts, such as length, time or mass in physics. These concepts are expressed as functions X: D → R that map objects in a domain D (and possibly time points t ∈ T) to real numbers (r ∈ R). These functions are also called "mathematical variables"; for example, "X(d) = r" standing for "the length of this tower (d) is 19.523 meters." A singular quantitative statement can express the value of a physical magnitude only with a certain accuracy; it always involves a certain measurement error which expresses the quantitative truthlikeness of the statement. The simplest measure of quantitative truthlikeness is the absolute (linear) difference, |XA(a) − XT(a)|, between the value predicted by a given theory A, XA(a), and the true value, XT(a), of the quantitative property X of the individual a; this measure can be normalized by dividing through the maximal measurable length. A similar consideration applies to elementary quantitative (functional) laws. As an example from classical mechanics consider the law hypothesis L: ∀x∀t(s(x,t) = f(t, m(x), s0(x))), where s is the position function, t the time variable, x a variable ranging over physical objects and f(t, m(x), s0(x)) a mathematical function. (L) says that the position of all objects in an intended domain (e.g., all planets of our solar system) is a unique function f of time, the object's mass m(x) and the object's initial position s0(x). Also in this case we can distinguish between the function fA predicted by theory A and the true function fT. A natural quantitative measure of the truthlikeness of the instance of a law of kind (L) for a singular object d is the integral of the absolute difference between the predicted and the true function over a given interval I of the independent variable t (cf. Niiniluoto 1987: §11.3). For finite domains this measure is generalized to universal quantifications by forming averages:


(3) Quantitative truthlikeness (where "XT(x)" = the true value of X for object x):
• For singular quantitative statements: t(X(di) = r) = |r − XT(di)|.
• For quantitative laws:
t(X(d,t) = f(d,t)) = ∫t∈I |f(d,t) − XT(d,t)| dt
t(∀x(X(x,t) = f(x,t))) = (Σd∈D ∫t∈I |f(d,t) − XT(d,t)| dt)/|D|
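A quick arithmetic illustration (the numbers are invented for the example; note that on these definitions smaller values mean greater closeness to the truth):

\[
\begin{aligned}
&\text{Singular case: if } X_T(d) = 19.523 \text{ and theory } A \text{ predicts } 19.5, \text{ then } t = |19.5 - 19.523| = 0.023.\\
&\text{Law case: if } f_A(d,t) = t^2 \text{ but } f_T(d,t) = t^2 + 0.1 \text{ on } I = [0,1], \text{ then } \int_0^1 \left|t^2 - (t^2 + 0.1)\right| dt = 0.1.
\end{aligned}
\]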

For theories consisting of a conjunction of quantitative hypotheses, qualitative and quantitative truthlikeness can be combined (see def. (9)).

4 Intuitive assumptions, formal explication and the breakdown of Popper's idea

Tichý (1974) and Miller (1974) demonstrated that Popper's comparative definition of truthlikeness fails: it has the unintended consequence that no false theory can be closer to the truth than any other theory. This implies the breakdown of Popper's definition, since the possibility of a false theory being more truthlike than another theory was its primary motivation. The proof of Tichý's and Miller's result goes as follows: let theory A be a false theory, so A contains some false consequence which we call f. Now assume that A were closer to the truth than B in Popper's sense, which means that (a) Bt ⊆ At, (b) Af ⊆ Bf, and one of the two following cases must hold:

Case 1: At contains a true consequence, t, which is not contained in Bt. But then Af would contain the false consequence t∧f which is not contained in Bf. So condition (b) is violated and A ≥P B cannot hold.
Case 2: Bf contains a false consequence f* which is not contained in Af. But then Bt would also contain the true consequence f*∨¬f, which cannot be contained in At (since f ∈ C(A), but f* ∉ C(A)). So condition (a) is violated, and A ≥P B cannot hold either.
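To see the proof at work, here is a concrete instance (my own toy example, with p1 and p2 both true):

\[
\begin{aligned}
&\text{Let } A = p_1 \wedge \neg p_2 \text{ (false, with false consequence } f = \neg p_2\text{) and } B = \neg p_1 \wedge \neg p_2.\\
&\text{Intuitively } A \text{ is closer to the truth (one true conjunct instead of none).}\\
&\text{But } t = p_1 \in A_t \setminus B_t, \text{ so } t \wedge f = p_1 \wedge \neg p_2 \in A_f \setminus B_f \text{ (Case 1), and condition (b) fails.}
\end{aligned}
\]

Thus even this intuitively clear comparison is blocked by Popper's definition.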

In conclusion, it is impossible that A >P B when A is false. In spite of this result, Popper's idea remains intuitively reasonable. Before we come to the question of what has formally gone wrong in Popper's definition (2), we record Popper's four basic intuitions about verisimilitude:

(5) Basic intuitions about comparative truthlikeness, where unnegated propositional variables are assumed as true and negated ones false:

(5.1) For true theories truthlikeness increases with logical strength: p1∧p2 >P p1 >P p1∨p2.
(5.2) True conjuncts are better than false conjuncts: ¬p1∧p2 >P ¬p1∧¬p2.
(5.3) The fewer false conjuncts, the better: p1 >P p1∧¬p2 and ¬p1 >P ¬p1∧¬p2.
(5.4) Contradictions are worst in truthlikeness: A >P ⊥ for non-contradictory A.

These intuitions are supported by various passages in Popper (1963) and (1972), which are collected in Schurz and Weingartner (2010: §2). These intuitions are also shared by almost all accounts of truthlikeness, be they disjunctive or conjunctive accounts (see §5 in what follows). Intuition (1) is accepted by almost all accounts with the exception of the average measure of Tichý (1974) and Oddie (1981); but there is large agreement that this is a defect of these accounts (cf. Niiniluoto 1987: 235f ). Intuitions (2–4) are accepted by all accounts with the exception of Miller (1978a, 1978b), who accepted extremely unintuitive consequences in order to make

truthlikeness language independent (see §6 in what follows). Since intuitions (5.2) through (5.4) are explicated solely in terms of conjunctions, this implies that all accounts of truthlikeness (except that of Miller) agree on the truthlikeness ordering of “purely conjunctive” theories (see §5 in what follows); their disagreement concerns the handling of disjunctions.
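The Tichý–Miller result can also be verified by brute force. The following sketch is my own illustration (not from the cited papers): it represents propositions of a two-variable propositional language extensionally, as sets of valuations, so that consequence classes are finite and Popper's conditions can be checked exhaustively.

```python
from itertools import product, combinations

n = 2
worlds = list(product([True, False], repeat=n))
w0 = (True, True)  # the actual world: all variables true

def powerset(xs):
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

props = powerset(worlds)  # all propositions, modulo logical equivalence

def consequences(models):
    # A entails S iff every model of A is a model of S
    return {S for S in props if models <= S}

def popper_geq(A, B):
    CA, CB = consequences(A), consequences(B)
    At = {S for S in CA if w0 in S}; Af = CA - At
    Bt = {S for S in CB if w0 in S}; Bf = CB - Bt
    return Bt <= At and Af <= Bf      # Popper's conditions (a) and (b)

theories = [m for m in powerset(worlds) if m]        # consistent theories
false_theories = [A for A in theories if w0 not in A]
assert not any(popper_geq(A, B) and not popper_geq(B, A)
               for A in false_theories for B in theories if A != B)
print("No false theory is strictly closer to the truth on Popper's definition.")
```

The assertion passes: on Popper's definition no false theory comes out strictly closer to the truth than any other theory, exactly as the proof in section 4 predicts.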

5 Conjunction-of-parts and disjunction-of-possibilities accounts to truthlikeness

Immediately after the defect of Popper’s notion of truthlikeness was detected, philosophers proposed possible repairs to it. Two major families of approaches have been proposed for this purpose, which I call conjunction-of-parts and disjunction-of-possibilities accounts.
Conjunction-of-parts accounts represent theories (including T and F) as conjunctions of (smallest) content-parts. Since content-parts are (selected) consequences of the theory, conjunctive-parts accounts are also called consequence accounts (Schurz and Weingartner 2010; Oddie 2013). In contrast, disjunction-of-possibilities accounts represent theories as disjunctions of maximally strong alternative situations that are syntactically represented as constituents or semantically as models or possible worlds. Although disjunction-of-possibilities accounts are older (having first been proposed in 1976), we start by explaining conjunction-of-parts accounts (having first been proposed in 1982), since they are similar to Popper’s original account. Deviating from Niiniluoto (1987: xii), it has been emphasized by Schurz and Weingartner (2010) and acknowledged by Oddie (2013: §4) that the aspect of truthlikeness as “likeness with the truth” is present both in disjunction-of-possibilities and in conjunction-of-parts accounts.

5.1 Conjunction-of-parts accounts

Here theories are represented as conjunctions or sets of certain content-parts of the theory. In Popper’s original account these conjunctive parts were understood as arbitrary logical consequences of the theory. The logical defect of this understanding of “conjunctive parts” results from the fact that the notion of classical logical consequence covers various “unnatural” consequences which one wouldn’t intuitively count as new consequences – such as the conjunction t∧f in case 1 or the disjunction f*∨¬f in case 2 of the Tichý-Miller proof in §4. This observation gives us the basic clue to how one may repair the explication of Popper’s idea: by restricting the conjunctive parts to certain “canonical” or “minimal” consequences of the theory. In the truthlikeness literature, conjunctive parts have been understood in four major ways:

(i) As relevant (consequence) elements: This account has been developed by Schurz and Weingartner (1987); a significantly improved version is given in Schurz and Weingartner (2010: §§4–5). The advantage of this account (compared to other conjunctive-part accounts) is that it applies to theories of all logical formats, since the set of relevant elements of a theory is equivalent to its total logical content. So no information gets lost by the relevant element representation, but irrelevant disjunctive weakenings and redundant conjunctions – the culprits of the breakdown of Popper’s definition – are avoided. In propositional logic, the relevant elements of a theory are defined as follows: a is a relevant element of a theory A iff (i) a is a clause, and (ii) a follows relevantly from A in the sense that no proper subdisjunction of a follows from A, too.3 Thereby, a clause is a non-repetitive disjunction of literals (unnegated or negated propositional variables) in an alphabetical ordering. Representation of theories by clauses is a standard technique in computer science. An example of a relevantly implied clause is p∨q, ¬q∨r ⊢ p∨r. Non-clausal consequences (e.g., conjunction-formations


p, q ⊢ p∧q) are responsible for the first “half” of the Tichý-Miller proof (case 1 of §4), and irrelevantly entailed clauses (e.g., disjunctive weakenings p ⊢ p∨q) for the second “half” of this proof (case 2 of §4).
(ii) As content-parts in the sense of Gemes (2007). Gemes’s content-parts are similar to the relevant elements of Schurz and Weingartner but less fine-grained. The representation of a theory by Gemes’s content-parts also does not lose any content of the theory.
(iii) As nomic constituents in the sense of Kuipers (1982, 2000: ch. 7.2.2). In a monadic first-order

language with n predicates F1, . . ., Fn, the nomic constituents are conjunctions of the form ∃x(Q1x) ∧ . . . ∧ ∃x(Qkx) ∧ ∀x(Q1x ∨ . . . ∨ Qkx), where each Qi (1 ≤ i ≤ k) has the form ±F1x ∧ . . . ∧ ±Fnx, with “±” for “unnegated” or “negated”, and k ≤ 2^n. The existential conjunctions express nomic possibilities, and the universal formula asserts that all other Q-predicates are nomically impossible. To emphasize the “nomic” character one may put “◊” in front of “∃” and “□” in front of “∀.” Kuipers represents nomic possibilities as “models” – however, these models are local models in the sense of structuralistic philosophy of science (see Schurz 2014: §3, “complication 2”). Local models are crucially different from standard global models in the sense of constituents or possible worlds: while only one global model can be true or actual, all local models can be true or actual at the same time. Therefore, Kuipers’s account is not a disjunction-of-possibilities but a conjunction-of-parts account.4 Kuipers’s account is restricted to theories of a specific format, consisting of sets of nomic possibilities and impossibilities. Particular individuals are not mentioned in these theories. The account has a complexity problem in rich monadic and in polyadic languages, as the number and size of constituents becomes too large to be feasible (see fn. 5).

(iv) As independent conjuncts, which in propositional logic have the form of literals (±pi). This account was anticipated in Kuipers’s (1982) notion of “actual truthlikeness” and has been further developed by Cevolani and Festa (2009) (see also Cevolani, Crupi, and Festa 2011). In its general form, this account is applicable to so-called conjunctive theories. Conjunctive

first-order theories consist of conjunctions of basic statements of any kind, ±a1, . . ., ±am, that are mutually logically independent in the sense that no basic statement is logically implied by any set of other basic statements.

Since the conjunctive parts of nomic constituents are mutually logically independent, the nomic constituent account is a special kind of conjunctive theory account. The conjunctive format is a strong restriction on theories. For example, conjunctive theories must not contain universal formulas together with singular or existential formulas that imply other singular or existential formulas, as in A = {∀x(Fx→Gx), Fa} or A’ = {∀x(Fx→Gx), ∃xFx}. However, these restrictions have the advantage that the logical entailment relation among basic statements (bi) coincides with the set-membership relation, that is, b1, . . ., bm ⊢ bm+1 iff bm+1 ∈ {b1, . . ., bm}, since the basic statements are mutually logically independent. In all conjunction-of-parts accounts the basic and comparative notion of “being at least as close to the truth” (≥) can be defined in analogy to Popper’s original definition as follows:

(6) A ≥i B iff At-parts ⊢ Bt-parts and Bf-parts ⊢ Af-parts,
where At-parts (resp. Af-parts) stands for the set of A’s true (resp. false) “conjunctive parts” in the sense of account i (with i ∈ {SW, G, K, CF}).

Since the notion of “content part” is not closed under logical consequence, the superset relation “⊇” of Popper’s formal definition (2) has to be replaced by the logical entailment relation “⊢”. In conjunctive theory accounts, however, logical entailment coincides with the superset relation

for the explained reason; therefore Kuipers (2000) and Cevolani and Festa (2009) explicate (6) by means of “⊇” instead of “⊢”. We finally emphasize that although conjunctive-parts accounts utilize the notion of logical consequence, they are by no means restricted to strict (deterministic) theories but may equally be applied to statistical theories. The relevant elements of statistical theories are elementary statistical statements that follow from the statistical theory by means of the basic probability axioms. For illustration, let p be an objective-statistical probability function over the outcomes of a random experiment E whose repetitions are independently and identically distributed (cf. Schurz 2013: §3.13.3). Then the theory T = {p(Fx|Rx) = r, p(Gx|Rx∧Fx) = q} implies the following relevant elements:

• p(Fx∧Gx|Rx) = r·q, by the basic probability axioms,
• p(fn(F|R) = k) = (n choose k)·r^k·(1−r)^(n−k), by the basic probability axioms and statistical independence, where “fn(F|R)” is the number of Fs in a random sample of n Rs.
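As a quick check on the second relevant element, here is a small sketch (my illustration; the parameter values r = 0.6, q = 0.5 and n = 10 are arbitrary):

```python
from math import comb

def sample_frequency_dist(r, n):
    """p(fn(F|R) = k) for k = 0..n: the binomial distribution induced
    by p(Fx|Rx) = r under independent, identically distributed repetitions."""
    return [comb(n, k) * r**k * (1 - r)**(n - k) for k in range(n + 1)]

r, q, n = 0.6, 0.5, 10
print("p(Fx∧Gx|Rx) =", r * q)                    # first relevant element
probs = sample_frequency_dist(r, n)
print("p(f10(F|R) = 6) =", round(probs[6], 3))   # most probable value of k
```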

5.2 Combining comparative truthlikeness with numerical and quantitative measures

In their basic version all conjunction-of-parts accounts are comparative accounts for qualitative theories. They can be extended to numerical accounts that apply also to quantitative theories. We illustrate this (pars pro toto) for the relevant element and the conjunctive theory accounts within the language of propositional logic, L0. Schurz and Weingartner (2010) proposed the following definition of the relevant element-based numerical truthlikeness, tSW(A), of a theory A:

(7) Numerical truthlikeness for L0-theories based on relevant elements:
We set Ar as the set of A’s relevant elements, and Atr = Ar∩T, Afr = Ar∩F as the sets of A’s true and false relevant elements, respectively. Then:

tSW(A) = Σ{tSW(a): a ∈ Ar}, where
for a ∈ Atr: tSW(a) = (va/ka) · (n−ka+1)!/n!,
for a ∈ Afr: tSW(a) = − (n−ka+1)!/n!,

ka =def the number of a’s literals, and va =def the number of a’s true literals.

The measure (7) is based on the following intuitions, where we assume that unnegated propositional variables are true:

• tSW(A) is defined as the sum of the truthlikeness of all relevant elements of A.
• True relevant elements have positive and false relevant elements have negative truthlikeness. We set tSW(p) = 1, tSW(¬p) = −1, and for false clauses, tSW(¬p1∨ . . . ∨¬pk) = −tSW(p1∨ . . . ∨pk) (see also fn. 3).
• The set of true clauses {p1∨pi: 2 ≤ i ≤ n} must have less truthlikeness than p1 because this set is logically weaker than p1. The generalization of this idea leads to the factor (n−ka+1)!/n! in def. (7).
• For true clauses with false literals, the truthlikeness measure is multiplied by the fraction of true literals among all literals in the clause (the factor va/ka in def. (7)).

Schurz and Weingartner (2010) show that numerical truthlikeness is ordinally equivalent with comparative truthlikeness over all pairs of ≤-comparable theories A, B. The advantage of tSW over ≤SW is that tSW allows reasonable truthlikeness comparisons for ≤SW-incomparable statements. For example, p∨q and p∨¬q are both true and ≤SW-incomparable, but tSW(p∨q) = 1/2 > tSW(p∨¬q) = 1/4.
For conjunctive theories numerical SW-truthlikeness reduces to measure (8.1). By dividing (8.1) by the number n of all propositional variables one obtains the unweighted CF-measure of Cevolani and Festa (2009), and by multiplying the falsity-content (|Afr|/n) with a weight factor w ≥ 0 one obtains the weighted CF-measure:

(8) Numerical truthlikeness for conjunctive L0-theories:
(8.1) tSW(A) = Σ{tSW(±p): ±p ∈ Ar} = |Atr| − |Afr|.
(8.2) CF-measure: tCF(A) = |Atr|/n − w·|Afr|/n (where w ≥ 0; unweighted: w = 1).
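Definitions (7) and (8.1) can be implemented directly for small propositional languages. The sketch below is my illustration: it assumes, as in the text, that all unnegated variables are true; entailment is decided by brute force over valuations, and relevance by checking that no proper subdisjunction is entailed.

```python
from itertools import product, combinations
from math import factorial

n = 2  # propositional variables p1..pn; the truth makes every pi true
VALS = list(product([False, True], repeat=n))

def clause_true(clause, v):            # clause = frozenset of (index, sign)
    return any(v[i] == sign for i, sign in clause)

def entails(cnf, clause):
    return all(clause_true(clause, v)
               for v in VALS if all(clause_true(c, v) for c in cnf))

def all_clauses():                     # non-repetitive disjunctions of literals
    lits = [(i, s) for i in range(n) for s in (True, False)]
    for r in range(1, n + 1):
        for c in combinations(lits, r):
            if len({i for i, _ in c}) == r:
                yield frozenset(c)

def relevant_elements(cnf):
    ent = [c for c in all_clauses() if entails(cnf, c)]
    return [c for c in ent
            if not any(s < c and entails(cnf, s) for s in ent)]

def t_SW(cnf):
    total = 0.0
    for a in relevant_elements(cnf):
        ka = len(a)
        content = factorial(n - ka + 1) / factorial(n)
        va = sum(1 for _, sign in a if sign)   # literals true in the truth
        total += (va / ka) * content if va > 0 else -content
    return total

p, q, not_q = (0, True), (1, True), (1, False)
print(t_SW([frozenset({p, q})]))               # p∨q  -> 0.5
print(t_SW([frozenset({p, not_q})]))           # p∨¬q -> 0.25
print(t_SW([frozenset({p}), frozenset({q})]))  # p∧q  -> 2.0, cf. intuition (5.1)
```

For a conjunctive theory such as p∧q the relevant elements are just its literals, so t_SW reduces to |Atr| − |Afr|, as stated in (8.1).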

Definitions (7) and (8) are restricted to qualitative theories. An extension to quantitative theories is possible by assuming that the elementary disjuncts of clauses may be elementary quantitative statements ei (instead of literals), whose quantitative truthlikeness is measured according to definition (3) by a normalized measure qt(ei) that ranges between +1 for “perfect approximation” and −1 for “maximally distant from the true value.” We define the numerical-quantitative truthlikeness of a clause by adding up the truthlikenesses of its elementary disjuncts and dividing by ka for normalization purposes; by multiplying this sum with the content factor (n−ka+1)!/n! of def. (7) we obtain:

(9) Numerical truthlikeness for quantitative theories based on relevant elements:
tSW(A) = Σ{tSW(a): a ∈ Ar} and tSW(a) = ((n−ka+1)!/(n!·ka)) · Σe∈Dis(a) qt(e),
where Dis(a) = the set of elementary disjuncts of clause a and ka = |Dis(a)|.

5.3 Disjunction-of-possibilities accounts

These accounts represent theories either semantically as disjunctions of possible worlds (Hilpinen 1976; Oddie 1981) or syntactically as disjunctions of constituents, which are descriptions of possible worlds (Tichý 1974; Niiniluoto 1977, 1987). In the propositional language, constituents are given as complete conjunctions of the form ci = (¬)p1∧ . . . ∧(¬)pn; there are 2^n such constituents. In first-order languages, constituents are more complicated. In what follows C(A) denotes the set of all constituents that entail theory A; recall that A is logically equivalent to the disjunction of its constituents.
Almost all disjunction-of-possibility accounts are numerical. Almost all of these accounts (with the exception of Miller’s; see §6) are based on a measure of similarity – or, inversely, a measure of the distance – between possible worlds or constituents, respectively. The truthlikeness of a theory is given as a monotonically increasing function of the similarity of the theory’s constituents with the true constituent. All similarity-based disjunction-of-possibility accounts agree that the natural distance function between propositional constituents is the Hamming distance, which is defined as the number of propositional variables about whose truth value the two constituents disagree. The Hamming distance defines a Lewis-type sphere model (Lewis 1973: ch. 5) which has the true constituent as its central sphere S0 = {cT}, the set of constituents c with truth-distance ≤ 1 as the sphere S1 that includes S0, and so on.


A major problem of the disjunction-of-possibilities approach is the question of how one should measure the distance between a single constituent (such as cT) and a non-complete theory, that is, a disjunction of constituents A. Given a normalized distance measure dist(A,c) between arbitrary theories A and a constituent c, truthlikeness is defined in all disjunction-of-possibility accounts as follows:

(10) Truthlikeness in disjunction-of-possibility accounts:
ti(A) =def 1 − disti(A,cT),
where disti(A,cT) is a normalized distance between the set of constituents C(A) and the true constituent cT, and “i” is the respective version (i ∈ {av, min-max, min-sum}; see what follows).

There exists no straightforward answer to the question of how a distance function between constituents should be extended to one between a constituent and a set of constituents (see Oddie 2013: §5). The following normalized distance measures have been proposed (cf. Niiniluoto 1987: ch. 6):

• the average measure (Tichý 1976; Oddie 1981): distav(A,cT) is defined as the average of the distances between cT and the constituents in C(A).
• the min-max measure (Hilpinen 1976): distmin-max(A,cT) is defined as a weighted average of the minimum and the maximum of the distances between cT and the constituents in C(A).
• the min-sum measure (Niiniluoto 1977, 1987: ch. 6): distmin-sum(A,cT) is defined as a doubly weighted average of the minimum distance and the normalized sum of the distances between cT and the constituents in C(A).
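The three proposals are easy to compare computationally. The sketch below is my illustration: constituents are bit-tuples, the Hamming distance is normalized by the number n of variables, and the weights (all set to 1/2) are free parameters of the min-max and min-sum measures rather than values fixed by the accounts.

```python
from itertools import product

n = 3
worlds = list(product([0, 1], repeat=n))
c_T = (1, 1, 1)                          # the true constituent

def hamming(u, v):                       # normalized Hamming distance
    return sum(a != b for a, b in zip(u, v)) / n

def dist_av(A):
    ds = [hamming(c, c_T) for c in A]
    return sum(ds) / len(ds)

def dist_min_max(A, g=0.5):
    ds = [hamming(c, c_T) for c in A]
    return g * min(ds) + (1 - g) * max(ds)

def dist_min_sum(A, g1=0.5, g2=0.5):
    ds = [hamming(c, c_T) for c in A]
    return g1 * min(ds) + g2 * sum(ds) / len(worlds)

A = [w for w in worlds if w[0] == 1]     # the true theory p1 (4 constituents)
B = [(0, 1, 1)]                          # a false constituent
for name, d in [("average", dist_av), ("min-max", dist_min_max),
                ("min-sum", dist_min_sum)]:
    print(f"{name}: t(A) = {1 - d(A):.3f}, t(B) = {1 - d(B):.3f}")
```

Even in this three-variable language the verdicts diverge: on these weights the average and min-max measures rank the true theory A = p1 no higher than the false constituent B, while min-sum ranks A strictly higher.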

Niiniluoto (ibid., 232–234) shows that these three measures yield different truthlikeness results in a variety of important examples. This constitutes one problem of disjunction-of-possibilities accounts. A second worry concerns their complexity problem: already for simple languages, the number and size of constituents (or possible worlds) grows astronomically large.5

6 The problem of language dependence

In section 5 we mentioned Miller’s account (1978a, 1978b) as the great “exception” among disjunction-of-possibilities accounts. It has the counterintuitive property that the truthlikeness of completely false theories increases with their logical strength – whence this account has also been called the “content account” (Oddie 2013). For example, if (as before) unnegated propositional variables are true, it holds that ¬p∧¬q >Miller ¬p. In other words, the more falsities a false theory implies, the closer it is to the truth, which contradicts the basic intuition (5.3) of truthlikeness. The reason behind Miller’s move is his intention to avoid any kind of language dependence. The conceptions of truthlikeness explained so far are preserved under language expansions (see Schurz and Weingartner 2010: §6), but they are not preserved under certain disjunctive language translations. We illustrate this with Miller’s weather example:

(11) The problem of language dependence:
Language 1: p (hot), q (windy)
Language 2: p* (hot), q* (arizonan ↔def hot iff windy)

Intertranslations:
p* ↔def p          p ↔def p*
q* ↔def (p↔q)      q ↔def (p*↔q*)

Theories (where T = the truth):
T: p∧q        T*: p*∧q*
A: ¬p∧q       A*: ¬p*∧¬q*
B: ¬p∧¬q      B*: ¬p*∧q*

Truthlikeness relations according to the basic intuition (5.3):
A >t B        B* >t A*
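A small computation makes the inversion in (11) explicit. The sketch below is my illustration: conjunctive theories are scored by the number of true minus false conjuncts (measure (8.1)), and worlds are translated via p* := p, q* := (p↔q).

```python
from itertools import product

truth = {"p": True, "q": True}                  # hot and windy

def t_conj(theory, world):
    """Measure (8.1): number of true minus number of false conjuncts."""
    return sum(1 if world[v] == s else -1 for v, s in theory)

A = [("p", False), ("q", True)]                 # ¬p ∧ q
B = [("p", False), ("q", False)]                # ¬p ∧ ¬q

def translate(world):
    """Language 2: p* := p, q* := (p ↔ q)."""
    return {"p": world["p"], "q": world["p"] == world["q"]}

def star(theory):
    # A and B are complete conjunctions, so each has exactly one model,
    # which we translate into Language 2 and read off as a theory again.
    worlds = [dict(zip("pq", w)) for w in product([True, False], repeat=2)]
    model = next(w for w in worlds if all(w[v] == s for v, s in theory))
    w2 = translate(model)
    return [("p", w2["p"]), ("q", w2["q"])]

truth2 = translate(truth)
print(t_conj(A, truth), t_conj(B, truth))                 # 0 -2 : A > B
print(t_conj(star(A), truth2), t_conj(star(B), truth2))   # -2 0 : B* > A*
```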

As (11) demonstrates, the translation of these theories (as well as of the truth) from one language into the other inverts every notion of truthlikeness which satisfies the basic intuition (5.3) explained in section 4. An account of truthlikeness can be invariant under such translations only if we assume that also for false theories truthlikeness increases with logical strength (since logical strength is preserved under any kind of language translation). Miller went this way because for him linguistic invariance was more important than fit with truthlikeness intuitions.
On closer look, the non-invariance of truthlikeness under disjunctive translations of primitive predicates is not surprising. Every representation of a theory presupposes a certain set of linguistic primitives which refer to a certain decomposition of the “world” into primitive elements. We think that the primitive elements of cognitive representations are not arbitrary but can be epistemologically well motivated (cf. Schurz 2013: §5.11.3). There is no reason to be surprised that the truthlikeness relation between theories may change if, as in the earlier example, one linguistic framework understands a primitive element (q*) of the other linguistic framework not as ontologically independent but as the result of an interaction between two ontologically independent elements (p, q), and vice versa. In conclusion, we think that this kind of language dependence of truthlikeness should not be conceived as a problem to be solved but rather as a fact to be acknowledged.

7 Epistemic evaluation of truthlikeness

The concepts of truthlikeness introduced so far have all been purely semantic – they were explicated relative to the set of all truths, which is largely unknown. In epistemic practice we evaluate the truthlikeness of a theory in regard to what we know by observation and assumed background knowledge – in other words, by what we regard as accepted evidence. We can distinguish two major concepts of epistemic truthlikeness:

• Evidential truthlikeness: This account makes few assumptions: it merely assumes a set E of statements that are sufficiently confirmed to count as evidence.
• Expected truthlikeness: This account assumes an epistemic probability function over the constituents of the language and identifies epistemic truthlikeness with the probabilistic expectation of truthlikeness.

The notion of evidential truthlikeness stands in close relation to Lakatos’s account of theory progress. It is usually (though not necessarily) explicated in a comparative way within a conjunction-of-parts account. The probabilistic notion of expected truthlikeness is frequently (though not exclusively) explicated as a numerical measure within a disjunction-of-possibilities account (Niiniluoto 1987: §7.2). Let us explain these notions.
The notion of evidential truthlikeness is based on a threefold division of the (relevant) content of a theory into its empirically confirmed content, its disconfirmed content and its empirical excess content in the sense of Lakatos (1970: 33ff.), consisting of the theory’s relevant elements that have so far not sufficiently been put under empirical test.


(12) Comparative evidential truthlikeness, in relation to a scientific background system S:
(12.1) For a given theory A (formulated within the language of S), we have:

• Atr-ev (and Afr-ev) are the sets of A’s relevant elements that are sufficiently confirmed (respectively, disconfirmed) relative to the total evidence in S.

• Ar-ex =def Ar − (Atr-ev ∪ Afr-ev) is A’s relevant excess content, that is, A’s relevant elements that are neither sufficiently confirmed nor disconfirmed in S.

(12.2) A has at least as much evidential truthlikeness as B, abbreviated as A ≥ev B, iff (a) Atr-ev ⊢ Btr-ev and (b) Bfr-ev ⊢ Afr-ev.
(12.3) A has at least as much excess content as B, abbreviated A ≥ex B, iff Ar-ex ⊢ Br-ex.
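For conjunctive theories, where entailment among content parts reduces to the superset relation, the comparisons (12.2) and (12.3) can be written down directly. The sketch below is my illustration; the content elements 'e1', 'h1', etc. are hypothetical placeholders.

```python
def ev_geq(A, B):
    """(12.2) for conjunctive theories, where entailment reduces to
    the superset relation: A has at least as much evidential truthlikeness."""
    return A["tr_ev"] >= B["tr_ev"] and B["fr_ev"] >= A["fr_ev"]

def ex_geq(A, B):
    """(12.3): A has at least as much excess content as B."""
    return A["ex"] >= B["ex"]

# hypothetical toy theories, each given by its classified relevant elements
B = {"tr_ev": {"e1"}, "fr_ev": {"e2"}, "ex": {"h1"}}
A = {"tr_ev": {"e1", "e3"}, "fr_ev": set(), "ex": {"h1", "h2"}}
print(ev_geq(A, B), ex_geq(A, B))   # True True
```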

With the help of definition (12) we can explicate Lakatos’s notions of theoretical and empirical progress nicely as follows: the transition from theory B to theory A is theoretically progressive in case A ≥ev B and A >ex B. Moreover, this transition is empirically progressive iff A >ev B and A >ex B (cf. Lakatos 1970: 33f; Schurz 2013: §5.6.1).
The notion of expected truthlikeness assumes an epistemic probability function over the set of all constituents or possible worlds, C, that are expressible in the given language. Let P(c|E) denote the probability that constituent c is true conditional on the total set of evidence E, and assume that t(A,c) =def 1 − dist(A,c) denotes the truthlikeness of theory A relative to constituent c:

(13) Expected truthlikeness of theory A given epistemic probability function P and total evidence E (for a countable set of constituents C,6 cf. Niiniluoto 1987: 269):

ExpP(t(A|E)) = Σc∈C P(c|E) · t(A,c), where t(A,c) is defined as in (10).
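Definition (13) is a one-line expectation once a posterior over constituents and a distance-based t(A,c) are fixed. The following sketch is my illustration; it uses the minimum Hamming distance for dist(A,c) and made-up posterior probabilities.

```python
from itertools import product

n = 2
worlds = list(product([0, 1], repeat=n))     # propositional constituents

def t(A, c):
    """1 minus the normalized minimum Hamming distance of theory A
    (a set of constituents) from constituent c."""
    return 1 - min(sum(a != b for a, b in zip(w, c)) for w in A) / n

def expected_t(A, posterior):
    # def. (13): sum over constituents of P(c|E) * t(A, c)
    return sum(posterior[c] * t(A, c) for c in worlds)

A = [(1, 1), (1, 0)]                         # the theory p1
posterior = {(1, 1): 0.6, (1, 0): 0.3, (0, 1): 0.1, (0, 0): 0.0}
print(expected_t(A, posterior))              # 0.6*1 + 0.3*1 + 0.1*0.5 = 0.95
```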

8 The inference from empirical success to truthlikeness

The two epistemic notions of truthlikeness introduced in section 7 are measures of empirical success. How are they related to (semantic) truthlikeness?

• Inference question 1: Can we infer higher (semantic) truthlikeness of a theory from its higher empirical success?
• Inference question 2: Or, at least, can we infer higher empirical success from higher truthlikeness?

These two questions have been investigated by Kuipers (2000: §7.3.3). He argues that if inferences of type 2 were possible, then we could inversely infer from a higher empirical success to higher truthlikeness of a theory by means of an inference to the best explanation. The latter inference would be a “truthlikeness version” of Putnam’s no-miracles argument, according to which the best if not the only reasonable explanation of the empirical success of a scientific theory is the assumption that it is close to the truth. In other words, defenders of the inference from empirical success to truthlikeness will typically favour the position of scientific realism, while opponents of this inference will advocate some sort of anti-realism.
Unsurprisingly, both the inference from greater empirical success to greater truthlikeness and the inverse inference are too good to be generally true. However, the second inference, from semantic to evidential truthlikeness, holds at least under certain conditions. The reason why this inference may fail is that strongly confirmed content parts of a theory need not be true and strongly disconfirmed content parts need not be false. We call this the problem of

inductive risk. One may hope that as empirical evidence accumulates, the inductive risk gets smaller and smaller and eventually vanishes. Under this assumption one can indeed prove that greater (semantic) truthlikeness entails greater evidential truthlikeness. In what follows

“≥” and “≥ev” stand for (semantic) and evidential truthlikeness (in the proof of (14), “Atp” and “Afp” stand for the true and false conjunctive parts of theory A, respectively, in the sense of SW, G or CF)7:

(14) Empirical success theorem 1:

If the following restriction (R1) holds, then A ≥ B implies A ≥ev B.
(R1): All strongly confirmed relevant elements of A and B are true, and all strongly disconfirmed ones are false (“no inductive risk”).

Proof: Assume A ≥ev B fails. Then:
• Either case 1 holds: Btp-ev contains some strongly confirmed and hence (by assumption) true statement b that is not contained in Atp-ev. Since b ∉ Atp-ev and b is strongly confirmed, it follows that b ∉ Atp. But b ∈ Btp. So A ≥ B cannot hold.
• Or case 2 holds: Afp-ev contains some strongly disconfirmed and hence (by assumption) false statement a that is not contained in Bfp-ev. Since a ∉ Bfp-ev and a is strongly disconfirmed, it follows that a ∉ Bfp. But a ∈ Afp. So A ≥ B cannot hold. Q.E.D.

Restriction (R1) of success theorem (14) is unrealistic in the short run. Can we hope that (R1) holds at least in the long run? The answer to this question and the significance of the result (14) depend crucially on the distinction between the empirical (non-theoretical) and the theoretical part of a theory. That scientific theories contain theoretical concepts, such as “force” in physics, that go beyond what is directly observable or definable through observation concepts was one of the major findings of contemporary (post-positivistic) philosophy of science.8 It is difficult to draw a sharp distinction between empirical and theoretical concepts, but the following considerations do not depend on exactly how one draws the distinction between the two. The only important assumption is that the values of some concepts are directly measured, that is, contained in the evidence, while the values of other (theoretical) concepts can only be indirectly inferred, in a theory-dependent way, from the values of the directly measurable concepts. In statistical science the observational concepts are also called manifest and the theoretical ones are called latent variables.
Let Le be the empirical (non-theoretical) sublanguage of the language L of theory A, that is, Le is the set of L-formulas that are solely built up from empirical (non-theoretical) and logico-mathematical concepts. While the full (empirical and theoretical) content of A is simply given as C(A), that is, the set of all A-consequences, the empirical content E(A) is defined as the set of empirical A-consequences: E(A) = C(A)∩Le. A purely empirical hypothesis or theory A is a set of sentences without any theoretical concepts; for such a theory E(A) = C(A) holds. In many respects, purely empirical hypotheses or theories behave differently from genuine theories, by which we mean theories containing theoretical concepts. In Bayesian confirmation theory, Gaifman and Snir’s (1982) convergence theorem holds for purely empirical hypotheses but not for genuine theories (cf. Schurz 2013, prop. 4.7–6). Let W(L) be the space of first-order models (possible worlds) over a fixed countably infinite domain D with standard names in L. The evidence stream (e:w) = (l1, l2, . . .) of a world w ∈ W(L) is the sequence of all empirical literals that are true in w, ordered according to a fixed enumeration of the atomic L-sentences. E(e:w) = ∪i∈ω {li} is the limiting evidence set of (e:w). The evidence stream (e:w) is called w-complete iff E(e:w) equals the set of all literals true in w.


(15) Probabilistic convergence for purely empirical theories (Gaifman and Snir 1982):
For every given hypothesis H, the set of possible worlds w in which the probability of H given E(e:w) converges to H’s truth value has probability 1, provided that the evidence stream (e:w) of every world w is w-complete.

The proviso in (15) means nothing but that the considered hypotheses H are purely empirical. Theorem (15) breaks down for genuine theories, since the extension of their theoretical concepts is not determined by the limiting evidence sets: the same limiting evidence set (or empirical submodel) can be expanded into different and even incompatible theoretical structures. Since Quine (1951), this fact has been called the empirical underdetermination of theories. Similar results are obtainable for the relation between the evidential and semantic truthlikeness of theories. We write A ≥w B for the semantic truthlikeness relation in world w and A ≥w-ev B for the evidential truthlikeness relation in world w. From theorem (15) we obtain the following two success theorems for the epistemic notions of truthlikeness:

(16) Success theorems for purely empirical theories:
(16.1) With probability 1, the evidential truthlikeness relation between purely empirical theories, conditional on the limiting evidence set of a w-complete evidence stream, coincides with their semantic truthlikeness relation in w. Formally: P({w ∈ W(L): A ≥w-ev B ↔ A ≥w B}) = 1.
(16.2) With probability 1, the P-expected truthlikeness of theories formulated in a finite empirical language, conditional on the limiting evidence set of a w-complete evidence stream, coincides with their semantic truthlikeness in w. Formally: P({w ∈ W(L): ExpP(tw(A|E(e:w))) = tw(A)}) = 1.

Proof: For (16.1): E(e:w) contains all L-literals that are true in w. So evidential truthlikeness conditional on E(e:w) coincides with semantic truthlikeness in w.
For (16.2): L contains finitely many empirical constituents cu, each corresponding to one possible world u ∈ W(L). By theorem (15), with probability 1, P(cu|E(e:w)) = 1 if u = w, else P(cu|E(e:w)) = 0. This implies that ExpP(tw(A|E(e:w))) = Σu∈W(L) P(cu|E(e:w))·tu(A) = tw(A). Q.E.D.

For genuine theories these success theorems break down. Even an empirically ideal theory may have a systematically false theoretical superstructure, because it postulates theoretical entities that do not exist in reality. A case in point is the transition from the phlogiston theory to the oxygen theory of combustion (cf. Laudan 1981; Schurz 2009). According to the phlogiston theory (developed in the late 17th and early 18th centuries), every material which is capable of being burned or calcinated contains phlogiston. When combustion or calcination takes place, the burned or calcinated substance delivers its phlogiston in the form of a hot flame, and a dephlogisticated, substance-specific residual remains. In the 1780s, Lavoisier introduced his alternative oxygen theory, according to which combustion and calcination consists in the oxidation of the substance being burned or calcinated, that is, in the formation of a chemical bond of its molecules with oxygen. The “hot flame” was identified as light-emitting carbon dioxide gas, and the assumption of phlogiston as the bearer of combustibility became superfluous. Although the theoretical superstructure of phlogiston theory is false, it was remarkably successful in qualitative explanations of chemical reactions such as combustion, calcination and salt-formation through the solution of metals in acids.
As the example shows, the assumption of success theorem (16), that strongly confirmed content parts are in the long run overwhelmingly true, is tenable only for the empirical but

not necessarily for the theoretical content parts of theories. Does this mean that for genuine theories an inference from evidential to semantic truthlikeness is impossible? In this situation, the notion of inter-theoretic correspondence relations may come to help. Building on Boyd (1984), Schurz (2009) argues that if an outdated theory A and a presently accepted theory B share their strong (use-novel) empirical success in a given domain of application but have different theoretical superstructures (or ontologies), there must exist certain relations of correspondence between them. In application to the phlogiston-oxygen example Schurz (2009) arrives at the following result:

(17) Correspondence relation between phlogiston and oxidation theory: Dephlogistication of a substance corresponds (in modern chemistry) to the donation of electrons of the substance’s atoms to their (oxidizing) bonding partner.

The semantic interpretation of “dephlogistication” as donation of electrons to the bonding partner is called a reference shift. Based on this result Schurz (2009: §7.3) argues that by assuming that reality is described by an ideally true theory T, the following weak inference from empirical success to theoretical truthlikeness is possible: those theoretical concepts of a theory A that are responsible for A’s empirical success correspond to certain (possibly complex) concepts of the ideally true theory T. By means of semantic reference shifts, the concepts of theory A acquire reference, which makes at least some (though not necessarily all) theoretical consequences of theory A true. This means that A’s theoretical superstructure, too, must have a certain amount of truthlikeness.

9 Conclusion and outlook

Let us summarize the contents of this article. Scientific realists argue that with increasing empirical success, scientific theories get closer to the truth. To explicate this notion Popper (1963) introduced the notion of verisimilitude or truthlikeness (sections 1–3). Beginning in the mid-1970s, improved accounts of truthlikeness were developed that solve certain flaws inherent in Popper’s original definition; they have been presented in sections 3 through 5. In section 6 we discussed the language dependence of truthlikeness under disjunctive linguistic transformations.
Standard definitions of truthlikeness are semantic, that is, they explicate the truthlikeness of a theory in relation to the complete truth T, which is usually unknown. In scientific practice the truthlikeness of a theory can only be assessed relative to the body of empirical evidence. This leads to the notion of epistemic truthlikeness. In section 7 we explained two notions of epistemic truthlikeness: the notion of evidential truthlikeness, which is motivated by the account of Lakatos (1970), and the probabilistic notion of expected truthlikeness. In section 8 we asked how one can infer from evidential truthlikeness to the semantic truthlikeness of theories. This is possible only in the long run, and only for evidence streams that cover all basic facts expressible in the language of the theory. For theories that contain theoretical concepts (expressing facts not contained in the evidence), this inference breaks down. However, something weaker can be established: under certain conditions, the theoretical superstructure of an evidentially truthlike theory T stands in a structural relation of correspondence to the complete truth, which confers on T a certain amount of truthlikeness even at the theoretical level. Summing up, the notion of truthlikeness proves to be an indispensable means to explicate the central tenet of scientific realism: the connection between empirical progress and convergence to the truth.


Further reading

T.A.F. Kuipers (ed.), What Is Closer-to-the-Truth? (Amsterdam: Rodopi, 1987) is a volume on the spectrum of truthlikeness accounts up to the 1980s. I. Niiniluoto, Truthlikeness (Dordrecht: Reidel, 1987) is the most comprehensive work on disjunction-of-possibilities accounts to truthlikeness. S. D. Zwart and M. Franssen, “An Impossibility Theorem for Verisimilitude,” Synthese 158 (2007): 75–92, states an impossibility result for disjunction-of-possibilities accounts; Schurz and Weingartner (2010) is in part a reply to this paper. G. Oddie, “Truthlikeness” (The Stanford Encyclopedia of Philosophy, summer 2014 edition) gives an overview of contemporary accounts. T. Kuipers and G. Schurz (eds.), Belief Revision Aiming at Truth Approximation (Erkenntnis 75, 2011, guest-edited volume) discusses recent results on the relation between truthlikeness and belief revision.

Acknowledgement

For valuable help I am indebted to G. Cevolani, T.A.F. Kuipers, I. Niiniluoto, G. Oddie and J. Saatsi.

Notes

1 The “no miracles” argument is discussed further in K. Brad Wray, “Success of science as a motivation for realism,” ch. 3 of this volume. See also I. Niiniluoto, “Scientific progress,” ch. 15 of this volume.
2 See Popper (1963: 233, 1972: 52); Tichý (1974: 156); and Miller (1974: 167).
3 The definition in (i) is a simplified but equivalent version of the more general definition given in Schurz and Weingartner (2010: sec. 4, def. 4). By convention, the only relevant element of a logically true theory is the logical constant ⊤, and that of an inconsistent theory is ⊥. In the numerical definition (def. (7) in what follows) it is assumed that t(⊤) = 0 and t(⊥) = −(n+1).
4 Schurz and Weingartner (2010: sec. 2) misclassified Kuipers’s account as a disjunction-of-possibilities account, because they misunderstood Kuipers’s models as disjunctive global models.
5 In a monadic first-order language with n predicates and m individual constants the number of possible (singular) state descriptions is 2^(n·m), and the number of (quantified) monadic constituents is 2^(2^n) − 1. The number of (quantified) depth-3 constituents in a first-order language with just one binary relation is already 10^(10^11) (cf. Niiniluoto 1987: 71).
6 An extension to uncountably many constituents is possible if constituents can be represented by real numbers: then the sum has to be replaced by an integral: t(A|E) = ∫c∈C t(A,c) dP(c|E).
7 Kuipers’s account (2000: §9.1.1) needs the following stronger assumption for his success theorem: for every true empirical submodel me of the theory A there exists a true theoretical model m of A that expands me. This is necessary since Kuipers’s theories are conjunctions of nomic possibilities, but an empirical submodel is not a full nomic possibility. In the other conjunctive-part accounts, empirical submodels are directly cashed out in terms of conjunctive parts; so this assumption is not needed.
8 See, e.g., Carnap (1956); Hempel (1958); and Sneed (1971).

References

Boyd, R. (1984) “The Current Status of Scientific Realism,” in J. Leplin (ed.), Scientific Realism, Berkeley: University of California Press, pp. 41–82.
Carnap, R. (1956) “The Methodological Character of Theoretical Concepts,” in H. Feigl and M. Scriven (eds.), The Foundations of Science, Minneapolis: University of Minnesota Press, pp. 38–76.
Cevolani, G., Crupi, V. and Festa, R. (2011) “Verisimilitude and Belief Change for Conjunctive Theories,” Erkenntnis 75, 183–202.
Cevolani, G. and Festa, R. (2009) “Scientific Change, Belief Dynamics and Truth Approximation,” La Nuova Critica 51(52), 27–59.


Gaifman, H. and Snir, M. (1982) “Probabilities Over Rich Languages,” Journal of Symbolic Logic 47, 495–548.
Gemes, K. (2007) “Verisimilitude and Content,” Synthese 154, 293–306.
Hempel, C. G. (1958) “The Theoretician’s Dilemma,” reprinted in C. G. Hempel, Aspects of Scientific Explanation, New York: The Free Press, 1965, pp. 173–228.
Hilpinen, R. (1976) “Approximate Truth and Truthlikeness,” in M. Przelecki, K. Szaniawski, and R. Wójcicki (eds.), Formal Methods in the Methodology of Empirical Sciences, Dordrecht: Reidel, pp. 19–42.
Kuipers, T.A.F. (1982) “Approaches to Descriptive and Theoretical Truth,” Erkenntnis 18, 343–378.
——— (2000) From Instrumentalism to Constructive Realism, Dordrecht: Kluwer.
Lakatos, I. (1970) “Falsification and the Methodology of Scientific Research Programmes,” reprinted in I. Lakatos, Philosophical Papers, vol. 1, Cambridge: Cambridge University Press, 1978, pp. 8–101.
Laudan, L. (1981) “A Confutation of Convergent Realism,” reprinted in D. Papineau (ed.), The Philosophy of Science, Oxford: Oxford University Press, 1996, pp. 107–138.
Lewis, D. (1973) Counterfactuals, Oxford: Blackwell.
Miller, D. (1974) “Popper’s Qualitative Theory of Verisimilitude,” British Journal for the Philosophy of Science 25, 166–177.
——— (1978a) “On the Distance from the Truth as a True Distance,” in J. Hintikka et al. (eds.), Essays on Mathematical and Philosophical Logic, Dordrecht: Reidel, pp. 166–177.
——— (1978b) “The Distance between Constituents,” Synthese 38, 197–212.
Niiniluoto, I. (1977) “On the Truthlikeness of Generalizations,” in R. Butts et al. (eds.), Basic Problems in Methodology and Linguistics, Dordrecht: Reidel, pp. 121–147.
——— (1987) Truthlikeness, Dordrecht: Reidel.
Oddie, G. (1981) “Verisimilitude Reviewed,” British Journal for the Philosophy of Science 32, 237–265.
——— (2013) “The Content, Consequence and Likeness Approaches to Verisimilitude: Compatibility, Trivialization, and Underdetermination,” Synthese 190, 1647–1687.
Popper, K. (1962) “Some Comments on Truth and the Growth of Knowledge,” in E. Nagel, P. Suppes and A. Tarski (eds.), Logic, Methodology and Philosophy of Science, Stanford: Stanford University Press, pp. 285–292.
——— (1963) Conjectures and Refutations, London: Routledge.
——— (1972) Objective Knowledge, Oxford: Clarendon Press.
Putnam, H. (1975) “What Is Mathematical Truth?” in H. Putnam, Mathematics, Matter and Method, Cambridge: Cambridge University Press, pp. 60–78.
Quine, W.V.O. (1951) “Two Dogmas of Empiricism,” Philosophical Review 60, 20–43.
Schurz, G. (2009) “When Empirical Success Implies Theoretical Reference: A Structural Correspondence Theorem,” British Journal for the Philosophy of Science 60(1), 101–133.
——— (2013) Philosophy of Science: A Unified Approach, New York: Routledge.
——— (2014) “Criteria of Theoreticity: Bridging Statement and Non Statement View,” Erkenntnis 79, 1521–1545.
Schurz, G. and Weingartner, P. (1987) “Verisimilitude Defined by Relevant Consequence-Elements,” in T.A.F. Kuipers (ed.), What Is Closer-to-the-Truth? Amsterdam: Rodopi, pp. 47–78.
——— (2010) “Zwart and Franssen’s Impossibility Theorem Holds for Possible-World-Accounts but Not for Consequence-Accounts to Verisimilitude,” Synthese 172, 415–436.
Sneed, J. D. (1971) The Logical Structure of Mathematical Physics, Dordrecht: Reidel.
Tichý, P. (1974) “On Popper’s Definition of Verisimilitude,” The British Journal for the Philosophy of Science 25, 155–160.
——— (1976) “Verisimilitude Redefined,” British Journal for the Philosophy of Science 27, 25–42.

PART III

Perspectives on contemporary debates

12 GLOBAL VERSUS LOCAL ARGUMENTS FOR REALISM

Leah Henderson

1 Introduction

There has been considerable discussion in recent years over the right level of generality at which to conduct the scientific realism debate. In the 1960s, the debate took a turn towards a more naturalistic approach, which took scientific realism to be a high-level empirical hypothesis that could be ‘tested’ against the history of science. Much of the ensuing debate came to revolve around two opposing arguments: the No Miracles Argument (NMA) in favour of realism and the Pessimistic Induction (PI) against realism. However, this whole debate has met with a persistent strain of criticism. The NMA (and PI) are said to be too ‘global’, too ‘sweeping’, too ‘general’, and insufficiently attentive to the details of science on the ground. It has been urged that a more effective and productive approach for the scientific realist is to ‘go local’. Localists urge that realists should make the case for realism about particular elements of scientific theories on a case-by-case basis, based primarily on the first-order scientific evidence.
In this chapter, I will examine a particular case which is often invoked as a prime example by localists. This is the case of Perrin’s experimental arguments for the existence of molecules in the early 20th century. In this case, as Peter Achinstein points out, local arguments can give some traction against certain kinds of anti-realist challenge, in particular challenges from constructive empiricism. This is because the realist may use local arguments to undermine the significance of the observable-unobservable distinction. However, I will argue that the local approach is less successful in evading anti-realist challenges based on the history of science. In order to evade these challenges, the local realist has to give up valuable resources for making her own case. Thus, in fact the global arguments play a key role in supporting any arguments localists could make.
The plan for the chapter is the following. In section 2, I will outline the global approach to the scientific realism debate. Section 3 introduces the local approach. Section 4 gives a short description of the Perrin case, and section 5 outlines the local arguments for realism about molecules that can be built on it. Discussion of the relative merits of the local and global approaches follows in section 6.


2 The global approach

Hilary Putnam has given the classic formulation of what has become known as the No Miracles Argument (NMA). Scientific realism, Putnam said, is the only way to account for the striking predictive and explanatory success of science as a whole:

The positive argument for scientific realism is that it is the only philosophy that does not make the success of science a miracle. That terms in a mature science typically refer (this formulation is due to Richard Boyd), that the theories accepted in a mature science are typically approximately true, that the same terms can refer to the same thing even when they occur in different theories – these statements are viewed not as necessary truths but as part of the only scientific explanation of the success of science, and hence as part of any adequate description of science and its relations to its objects. (Putnam 1975)

The NMA was developed, primarily by Richard Boyd and later by Stathis Psillos, into the ‘explanationist defence of realism’ (Boyd 1980; Boyd 1983; Psillos 1999). The key idea here is that scientific realism is the best explanation of scientific success. Realism is treated as a high-level empirical hypothesis, which is supported, just as ordinary scientific hypotheses are, by an inference to the best explanation. The realist defends a general inference from success to approximate truth of something like the following form:

R: If theory T is from a mature science and successful, then T is [probably] approximately true.

Since the argument is presented as an empirical one, ‘to be tested in the court of experience’, it becomes possible, as Laudan points out, that it is refuted rather than confirmed (Laudan 1981: 20). Laudan points to cases in the history of science where a theory T1 at time t1 is replaced by theory T2 at a later time t2, both of which are mature and successful, yet T2 is sufficiently different from T1 that it is not possible that both are approximately true. Thus, such cases are counterexamples to the realist inference from success to approximate truth. For example, Laudan argues that theories such as the caloric theory of heat, the theory of the electromagnetic aether, and theories of spontaneous generation all experienced considerable success yet could not have been approximately true by the lights of subsequent theories (Laudan 1981: 33). With enough counterexamples, one can even argue that since theories from the past with much success were so often replaced or abandoned, probably current theories will be too, even if they have been very successful. Therefore, we should not have too much confidence that our best theories are approximately true. This argument has become known as the ‘Pessimistic Induction’, or PI for short (see P. Vickers, “Historical challenges to realism”, ch. 4 of this volume).
The explanationist defence of scientific realism is a ‘global’ argument in the sense that it argues for a hypothesis which says that any theory that is mature and successful is likely to be approximately true. There is thus a criterion for realist commitment which can be applied generally to theories in science, regardless of their field or specific subject matter. In responding to PI, realists have typically tried to refine the criterion in the realist hypothesis in a principled way to exclude the counterexamples while still maintaining the idea of a global criterion which

applies across science. Thus, there have been a number of new proposals by realists, which all take the following general form:

R’: If Crit(T, E) holds, then T is [probably] approximately true.

Here Crit(T, E) is some criterion for realist commitment which concerns how a theory T relates to the evidence E and which is accessible to us. Since the criterion can be applied to theories across science, the realist hypothesis is still testable against the history of science. One of the first attempts to refine the realist criterion was to impose a strong requirement that T’s predictive success should be ‘novel’ (Leplin 1997). However, since this did not appear to rule out all the counterexamples, another prominent strategy has been to argue for realism at the sub-theory level, an approach called ‘selective realism’. Rather than looking for criteria for taking theories as a whole to be approximately true, selective realists specify criteria for taking some subset of theoretical statements in a theory to be approximately true. For example, some claim that the elements of a theory which should be picked out are structural in nature (Worrall 1989; Ladyman 1998; French 2006) or that realist commitment should be reserved for entities (Hacking 1982; Cartwright 1983; Chakravartty 1998). Others argue that the criterion is to pick out those parts of the theory which are critically involved in explaining the predictive or other success which the theory has enjoyed; this is called the divide et impera strategy (e.g. Kitcher 1993; Psillos 1999).

3 Going local

The global approach to the scientific realism debate has faced criticism on a number of fronts. The NMA and the explanationist defence of realism have been criticised as ineffective arguments (e.g. Laudan 1981; Fine 1986; Matheson 1998; Frost-Arnold 2010; see also J. Saatsi, “Realism and the limits of explanatory reasoning”, ch. 16 of this volume). In addition to the charge that these arguments are circular, there have also been recent accusations that the NMA commits the base rate fallacy (Magnus and Callender 2004; Howson 2000, 2013). There has been a sense in some quarters that the dialectic between the NMA and the PI has led to a degenerating debate merely fueled by diverging and unjustifiable intuitions about base rates (Magnus and Callender 2004). Some have argued that the proposals for refined realist criteria are not sufficiently principled, and in particular not prospectively applicable, to serve the realist’s needs (Stanford 2003). There has been a sense that the whole discussion has become rather baroque, with no obvious and clear candidate for a workable realist criterion emerging.1 Some have suggested that the underlying reason for the lack of consensus is that the selective realist criteria are still too broad, insofar as they are supposed to be applicable to science generally. Science, they urge, is not a unified kind of thing but is very inhomogeneous (Saatsi 2009; Saatsi 2017; Fitzpatrick 2013; Asay 2017). There are huge differences throughout different parts of science in terms of methods, explanations, and reasoning. Saatsi asks,

Why think that we are apt to latch onto reality in the same way throughout the sciences, or even in a single discipline? Quite plausibly, some areas of science are more likely to exhibit underdetermination than others. Some subject matters are very far removed from everyday reality (e.g. quantum fields), while others are relatively close to it despite being about thoroughly unobservable entities and processes (e.g. causal-mechanistic systems in microbiology). In the face of all the diversity, why think that one (or even a handful) of recipes uniformly and fairly captures – across the board – the way in which theories’ empirical success is correlated with the way they latch onto reality? (Saatsi 2017: 6)

Magnus and Callender are motivated by a similar thought:

Reflecting on the vast complexities of various historical episodes in science, there is no reason to think that the general assumptions one finds will be at all simple, natural, or even non-disjunctive; in short, there is no guarantee that the criterion one finds will be either interesting or useful. (Magnus and Callender 2004: 335)

All this has provided the spur for an influential movement to adopt a different kind of approach to realism – a local approach. The key idea is that the realist will be on stronger ground if she focuses on the ‘first-order’ scientific evidence. After all, the scientific community appears, often after extended periods of disagreement, to be able to decide on whether to take a realist attitude towards a theory. And the scientific community appears to be moved not by philosophical meta-arguments but by force of particular evidence. A realist then can make a case for particular entities or properties on a case-by-case basis.
Localists frame the realism debate as a debate over particular cases. However, they do not necessarily suggest that there is no legitimate question to be asked about the epistemic status of science more broadly. Rather they recommend a ‘local strategy’ for answering it. Insofar as there is a general position to be had, it will be had by conjoining the results of all the particular investigations. So we may end up realist about electrons but anti-realist about quarks, and so forth. The type of position that the localist recommends is supposed to be of a more modest kind, then. The view is usefully summarised by Simon Fitzpatrick (2013) as follows:

[according to the local approach] the defence of realism is best constructed on a case-by-case basis. The idea is that the best foundation for a realist attitude towards a particular theoretical claim of modern science (e.g. that there are atoms, that past and present organisms on earth are the product of evolution by natural selection, that the continents move laterally on tectonic plates, etc.) is the weight of the particular first-order evidence that led scientists to accept the claim in the first place. Realism is thus to be defended through close consideration of the specific theoretical claims that realists want to be realists about, the particular empirical evidence for such claims, and questions about what epistemic attitude towards these claims is licensed by this evidence, with anti-realist challenges to be rebutted as they arise. (143)

4 The case of Perrin

The local approach is best understood by considering an example. Almost without exception, those advocating the local approach have suggested that realists could make a strong argument for the existence of unobservable molecules based on Perrin’s experiments in the early 20th century. Therefore, it is worth looking at this example more closely.
Perrin conducted his ground-breaking experiments in 1908–1909. By that time the atomic theory had been developing over about a century. It had been used to account for the masses of

substances produced in chemical reactions in terms of the putting together or breaking up of molecules composed of basic building blocks (atoms) of definite types. This required a number of more specific hypotheses about the nature of atoms and molecules. Prominent among these was Avogadro’s hypothesis, which stated that

Equal volumes of different gases under the same conditions of temperature and pressure, contain equal numbers of molecules. (Perrin 1916: 18)

Scientists then defined a ‘gram molecule’ of a substance to be the mass of the substance in the gaseous state which occupies the same volume as 32 grams of oxygen at the same temperature and pressure. Then, according to Avogadro’s hypothesis, every gram molecule contains the same number of molecules. This number, N, is called ‘Avogadro’s number’.
Atomic theory also had the potential to explain a variety of other phenomena by assuming that they arose due to the motion of molecules composing the substances in question. Thus, for example, the movement of molecules amongst one another explains why two gases in contact diffuse into one another, even if they start off in two layers with the denser gas below. The pressure exerted by a gas or fluid on the walls of its container can be explained by impacts of the moving molecules on the walls. A substantial body of ‘kinetic theory’ had been built up, based on applying the laws of mechanics to molecules, together with some assumptions which allowed the consequences for the bulk properties of the substance to be determined.
Motion of molecules also offered an explanation for the phenomenon of Brownian motion. This is the seemingly haphazard jiggling of small particles suspended in a liquid, visible under a microscope. It appears not to depend on the nature of the particles, apart from being more pronounced for smaller particles. The atomic theory explains the jiggling of the relatively large visible particles by taking it to be a consequence of the jostling exerted upon them by the molecules of the liquid in which they are suspended. Perrin realised that this qualitative explanation could be extended to allow a more quantitative investigation of the phenomenon. He suggested that if the movement of molecules is really the cause of Brownian motion, it could provide a way to measure the dimensions of the molecules, even though the molecules themselves are unobservable. In fact, it was possible to derive an expression for Avogadro’s number in terms of quantities which could be measured on an emulsion of Brownian particles. In his famous ‘law-of-atmosphere’ experiments, Perrin carried out the ingenious and painstaking experimental procedures required to determine the measurable quantities, thus arriving at an estimate for Avogadro’s number. He also repeated the procedure under a wide range of experimental conditions, varying the volumes of the particles, the liquid used for the emulsion, and the density of the particles. His collaborator conducted experiments to vary the temperature. The result was:

In spite of all these variations, the value found for Avogadro's number N remains approximately constant, varying irregularly between 65 × 10²² and 72 × 10²². (Perrin 1916)

These results were also in good agreement with a measurement of Avogadro's number, made in a completely different experiment, based on a kinetic theory treatment of the viscosity of gases, which had yielded 62 × 10²². Perrin's own interpretation is that 'such constant results [found with different versions of the emulsion experiment] would justify the very suggestive hypotheses that have guided us', and thus 'the objective reality of the molecules therefore becomes hard to deny' (Perrin 1916: 105).
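For readers who want the quantitative skeleton of the law-of-atmosphere experiments, the following is a standard textbook reconstruction (not a quotation from Perrin). At equilibrium, the number density n(h) of granules of volume φ and density ρ, suspended at height h in a liquid of density ρ′ at temperature T, falls off exponentially,

\[ n(h) = n_0 \, \exp\!\left( - \frac{N \varphi (\rho - \rho')\, g\, h}{R T} \right), \]

so that counting granules at two heights a distance h apart yields

\[ N = \frac{R T \, \ln (n_0 / n_h)}{\varphi (\rho - \rho')\, g\, h} . \]

Every quantity on the right-hand side can be measured on the emulsion, which is why the experiments deliver an estimate of N. The reported concordance is tight: the spread 65–72 × 10²² lies within about 5% of its midpoint, and the viscosity-based value 62 × 10²² falls only just below that range.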


But Perrin did not stop there. A theoretical treatment of the diffusion of Brownian particles had been given by Einstein and Smoluchowski in 1905–1906. This theory also provided another expression for Avogadro's number in terms of measurable quantities, including the viscosity, temperature, and rate of diffusion. In deriving this expression, Einstein had made the supposition that the Brownian movement is completely irregular. This was a supposition that Perrin checked experimentally, by verifying that the displacements of the Brownian particles followed a Gaussian distribution. He was then able to confirm Einstein's prediction that the displacements would be proportional to the square root of the time elapsed. Perrin then determined Avogadro's number according to the expression provided by Einstein and Smoluchowski's theory. Again, there was striking agreement with the value of Avogadro's number obtained by other methods.
In the 19th century, atoms and molecules had been widely regarded in the scientific community as hypothetical entities, but by early in the 20th century all but the most diehard empiricists were convinced of their existence. Perrin's work is widely thought to have played a key role in this change in attitude towards atoms and molecules.2 In the presentation speech for Perrin's Nobel Prize in 1926, Professor Oseen said that his work had "put a definite end to the long struggle regarding the real existence of molecules". Thus, Perrin's work indeed seems to be a good place to look for local arguments in favour of realism about molecules.
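The quantitative content of the Einstein–Smoluchowski route can likewise be sketched (again a standard reconstruction, not a quotation): for spherical granules of radius a in a liquid of viscosity η at temperature T, the mean square displacement after time t is

\[ \langle x^2 \rangle = \frac{R T}{N} \cdot \frac{t}{3 \pi \eta a} , \]

so the measured displacements, together with T, η, and a, yield yet another estimate of N, and the root mean square displacement grows as √t, exactly the prediction Perrin confirmed. Both the √t scaling and the Gaussian shape of the displacements are easy to see in a toy simulation of a completely irregular walk – a minimal sketch of mine, for illustration only, presupposing nothing about Perrin's actual data:

import random

def mean_square_displacement(n_particles=5000, n_steps=400):
    """Simulate independent 1-D random walks with unit steps and
    return the average of x^2 over all particles at each time step."""
    positions = [0] * n_particles
    msd = []
    for _ in range(n_steps):
        positions = [x + random.choice((-1, 1)) for x in positions]
        msd.append(sum(x * x for x in positions) / n_particles)
    return msd

msd = mean_square_displacement()
# For a completely irregular walk, <x^2> grows linearly with t, so the
# root mean square displacement grows as sqrt(t):
print(msd[99] / 100, msd[399] / 400)   # both ratios come out close to 1
# By the central limit theorem, the displacement distribution at any fixed
# time is approximately Gaussian -- the shape Perrin verified empirically.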

5 The local realist approach to Perrin

Localists suggest that the realist can build an argument for the existence of molecules by focusing on first-order scientific evidence and reasoning such as Perrin's. But what does this involve exactly? The realist may well be inclined to endorse Perrin's reasoning, but she cannot simply repeat Perrin's arguments. She must also make the case that these arguments are convincing. After all, Perrin was not the only scientist involved, and different scientists held different opinions. Some, like Maxwell, were already convinced of the reality of atoms even before Perrin's experiments. Others, like Ostwald and Poincaré, were indeed persuaded that atoms were real by the early-20th-century evidence. But some, like Duhem, resisted even after Perrin's work.3 Making a case for realism then requires giving a philosophical reconstruction of the reasoning to show why it is compelling. It also involves defending the reasoning against possible anti-realist challenges, which can be raised in the specific context.
It will be helpful to distinguish between two different kinds of anti-realist challenges. One is the challenge from the constructive empiricist. The constructive empiricist urges that it is less risky and therefore preferable to take a well-supported theory such as the atomic theory to be empirically adequate rather than approximately true. This is, for the constructive empiricist, a matter of epistemic modesty, which stems from the general thought that conclusions about unobservables really go beyond what can be properly justified by observations and experiments.
Van Fraassen claims that what Perrin really accomplished was to complete the task of 'empirical grounding' for the atomic theory. Empirical grounding has two main requirements:

Determinability: any theoretically significant parameter must be such that there are conditions under which its value can be determined on the basis of measurement.
Concordance, which has two aspects:
• Theory-Relativity: this determination can, may, and generally must be made on the basis of the theoretically posited connections.
• Uniqueness: the quantities must be 'uniquely coordinated'; there needs to be concordance in the values thus determined by different means.
(van Fraassen 2009: 11)


According to van Fraassen, Perrin's experiments critically pinned down key theoretical parameters, in particular Avogadro's number, and also produced concordance in its value. However, he sees no need to adopt an interpretation which reads Perrin's results as providing evidence for the reality of molecules.
The second kind of anti-realist challenge differs from the constructive empiricist challenge, in that it does not necessarily depend on the observable-unobservable distinction. This is the challenge presented by the history of science. The original argument here is the Pessimistic Induction (PI). Although the local realist is not attempting to defend a general connection between success and approximate truth, the PI might still be raised against an instance of the local strategy. For example, the anti-realist might argue that the successful prediction of Avogadro's number and the striking concordance of its measured values cannot be a good reason for thinking that molecules exist, since in the past similar successes have been experienced by theories which were later substantially replaced.
Another historically based challenge comes from the problem of 'unconceived alternatives'. This problem arises from an induction over the history of science similar to the Pessimistic Induction but focusing on the capacity of scientists for eliminative reasoning. Kyle Stanford (2006) has produced a number of cases in the history of science in which he claims that scientists thought they had eliminated all the alternatives, and this later turned out to be untrue. Therefore, Stanford thinks, we should not be confident that theory T is approximately true, since it may in fact be an unconceived alternative theory which gives the correct account of how things are.
We will now consider how the local realist can respond to the constructive empiricist and to the historical anti-realist challenge.

5.1 The local response to the constructive empiricist challenge

The observable-unobservable distinction is crucial for constructive empiricism, and realists have long attempted to show that it lacks the epistemic significance constructive empiricists attribute to it (Maxwell 1962; Churchland 1985; Hacking 1985). Achinstein has suggested that the localist can offer a distinctive kind of attack on the observable-unobservable distinction in the context of Perrin's arguments (Achinstein 2002). Achinstein argues that there can be local reasons for going beyond the observable with respect to a specific property. These are based on what Kitcher has called the 'Galilean strategy' (Kitcher 2001). When Galileo encountered skepticism about whether the newly invented telescope could provide reliable information about celestial bodies, as well as terrestrial bodies, he argued in the following way. What is seen through a telescope when a terrestrial object is so far away that it cannot be observed with the naked eye can be confirmed by moving closer to the object. There is no reason to think that it would be any different for the stars and other heavenly bodies, which are even farther away. In general, if varying the conditions or properties that make an object change from observable to unobservable makes no difference to the property whilst the object is still observable, we can expect that it will not make a difference when the object is unobservable either. Achinstein argues that the local realist can use this type of argument to show that the observable-unobservable distinction does not mark a significant epistemic difference for particular properties – for example, the property of having mass.4
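Put schematically – this is a gloss of mine rather than Kitcher's or Achinstein's own formulation – the Galilean strategy licenses inferences of the following form, in the style of the causal-eliminative scheme given in section 5.2:

(G1) For all tested values of a parameter d (distance, size, etc.) within the observable range, varying d makes no difference to property P.
(G2) There is no reason to expect d to begin making a difference to P beyond the observable range.
So probably
(G3) Objects whose value of d lies outside the observable range also have property P.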

5.2 The local response to historical anti-realist arguments

We now consider how the localist may deal with anti-realist arguments based on the history of science.


In response to the pessimistic meta-induction

Simon Fitzpatrick (2013) has suggested that realists pursuing the local strategy will have an easier time dealing with the PI. Based on the PI, the anti-realist recommends a more skeptical epistemic attitude towards molecules, since in the past success of the sort exhibited by the atomic theory has not always resulted in retention of the theory. The localist response to this is to argue that the cases of failure of successful theories in the history of science are not relevantly similar to the case in hand, because realist arguments are highly dependent on context-sensitive detail. The general criterion of 'success' in the PI is too blunt and misses out on important further reasons for realism which may apply in some cases but not in others.
In the specific case of Perrin, according to Fitzpatrick, the epistemic force of Perrin's argument from his multiple convergent estimates of N "can only be appreciated within the context of a rich network of background beliefs" (2013: 146). Achinstein's reconstruction of Perrin's 'legitimate mode of reasoning' brings out what is missing. Achinstein identifies 'two important components' of the reasoning. One is "an argument to the conclusion that the particular experimental results obtained are very probable given the existence of the postulated entity and its properties" (Achinstein 2002: 492). The fact that in various experimental circumstances the determinations of Avogadro's number coincide on a value approximately equal to 6 × 10²³ can be argued to be very probable, given the hypothesis that molecules exist and that the number of molecules in a gram molecular weight is approximately 6 × 10²³. This coincidence or concordance of measured values is recognisable as the kind of 'success' that the No Miracles argument appeals to.
However, Achinstein also identifies another important component of Perrin's reasoning, on which this argument depends. This is an appeal to causal-eliminative reasoning to the existence of the postulated entity, and to certain claims about its properties, from other experimental results. For example, Perrin appealed to the experiments of Gouy, which had aimed at eliminating the possibility that Brownian motion was caused by external causes such as vibrations or convection currents. Achinstein argues that such arguments fit the following causal-eliminative scheme:

(1) Given what is known, the possible causes of effect E (for example, Brownian motion) are C, C1, . . ., Cn (for example, the motion of molecules, external vibrations, heat convection currents).
(2) C1, . . ., Cn do not cause E (since E continues when these factors are absent or altered).
So probably
(3) C causes E.
(Achinstein 2002: 474)

Achinstein points out that these causal-eliminative arguments are deployed by Perrin to establish that the probability for the atomic hypothesis is not insubstantial, even before the concordance of values for Avogadro’s number is taken to boost the probability of the hypothesis even further (see Achinstein 2002: 475–476). Overall then, Achinstein’s work clearly shows that the concordance is only one component of Perrin’s reasoning. Even if the PI demonstrates that such concordance is not a reliable indicator of approximate truth, Perrin’s full case for the existence of molecules relied on much more than just that. Most importantly, Perrin also had considerable evidence for the elimination of other possible causes of Brownian motion than the motion of molecules.
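The division of labour between these two components can be given a simple Bayesian gloss (a schematic reconstruction of mine, not Achinstein's own formalism). Writing H for the atomic hypothesis and E for the eliminative evidence, Bayes' theorem gives

\[ P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E \mid H)\, P(H) + P(E \mid \neg H)\, P(\neg H)} . \]

The causal-eliminative arguments work by driving down the likelihood of the evidence on each rival cause, so that P(H | E) becomes 'not insubstantial'; the concordance of the measured values of N then enters as further evidence whose likelihood is high on H and low on ¬H, boosting the posterior again. Put this way, Stanford's worry (discussed next) is that ¬H may harbour unconceived alternatives whose likelihoods were never assessed.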


In response to the problem of unconceived alternatives

However, this still leaves the problem of unconceived alternatives to deal with. Stanford has suggested that eliminative reasoning (such as that used by Perrin) can be shown to be historically unreliable. One strategy that the localist may adopt here is to deal with the problem of unconceived alternatives in a similar way to the PI. The idea here would be to argue that although at some level of generality scientists' reasoning may be characterised as elimination of alternatives, in reality the force of the reasoning will be highly dependent on the background context. In the case of atomic theory, by giving more details one may be able to show that the eliminative argument is compelling.
An example of how this might be done is provided by Sherrilyn Roush's (2006) discussion of Perrin. According to Roush, even before Perrin demonstrated the striking concordance between measured values for Avogadro's number, he had already provided very compelling evidence for a 'modest hypothesis' about atoms. This hypothesis states that

there are atoms and molecules, understood merely as spatially discrete submicroscopic entities moving independently of each other, i.e. at random. (Roush 2006: 219)

Roush argues that Perrin had done more than just eliminating possible causes of Brownian motion (vibrations, light, temperature gradients, etc.) one by one. Perrin was also able to directly measure the motions of the Brownian particles and show that they did indeed move in a random walk. Random motion is exactly what would be expected if the motion was caused by the jostling of molecules also moving at random. Roush points out that establishing that the Brownian motion was random eliminated a whole swathe of possible alternative causes – namely all those causes which might be expected to produce a systematic effect, "dependencies or correlations between the motions of one particle and another or tendencies in the motion of a single particle" (Roush 2006: 219).5 Thus, Roush argues that the only remaining causes to consider are those which would not produce systematic effects. The atomic hypothesis is one such, but there might be others that remained unconceived. However, the atomic hypothesis was able to account for more than just this result. It also could explain "the constant ratios of chemical combination, the independently suspected inexactness of the Carnot principle, and the perpetuity of the Brownian motion" (Roush 2006: 221). Thus there were already substantial constraints on what could possibly serve as an alternative to the atomic hypothesis.
The local realist could then argue that the 'bad' cases in Stanford's New Induction are not relevantly similar to the Perrin example, because the reasons involved in each case are specific to that case. For example, it is true that Maxwell also had arguments for eliminating any alternative theories that did not postulate the existence of an ether. He remained convinced that the behaviour of light could only be caused by waves propagating in an ether, arguing that the interference of waves could only be produced by 'a process going on in a substance':

That light is not itself a substance may be proved from the phenomenon of interference. Now, we cannot suppose that two bodies when put together can annihilate each other; therefore light cannot be a substance. Among physical quantities we find some which are capable of having their signs reversed, and others which are not. Thus a displacement in one direction is the exact opposite of an equal displacement in the opposite direction. Such quantities are the measures, not of substances, but always of processes


taking place in a substance. We therefore conclude that light is not a substance but a process going on in a substance, the process going on in the first portion of light being always the exact opposite of the process going on in the other at the same instant, so that when the two portions are combined no process goes on at all. (Maxwell 1878)

Whereas Perrin's causal-eliminative arguments were experimentally based, Maxwell's were rather theoretical. Thus, the local realist might argue that Stanford's New Induction is missing essential fine-grained detail by lumping all eliminative reasoning together. Each case should simply be addressed on its own merits.

6 Discussion

Localists regard the whole dialectic that has followed from the naturalistic turn in the scientific realism debate as misguided. Part of the reason for this is rejection of the NMA. The status of this argument has been much debated, but recent attempts to show that it commits the base-rate fallacy miss their mark (Henderson 2017). Overall, it has become clear by close examination of the Perrin example that the case for realism about a particular entity can rest on more than just the success criterion that figured in the original global NMA. In any given case, there are further arguments available to the realist. Galilean arguments may help the realist to undermine the observable-unobservable distinction. There may also be other arguments, such as causal-eliminative arguments, which form an important part of the case for realism.
The outstanding question that separates localists and globalists is whether it is possible to regiment the further considerations into general criteria for realist commitment. Despite their exhortations to go local, localists have tended to argue against the viability of general criteria at a very general level, by making broad claims about the diversity of science (Magnus and Callender 2004; Saatsi 2009; Saatsi 2017; Asay 2017) or by appealing to analogies with particularist positions on confirmation, such as Norton's material theory of induction (Saatsi 2009; Asay 2017). However, there are also general reasons to think that some considerable commonality in reasons for realist commitment is not precluded. Arguably there is quite a lot of common ground in scientific methods, despite the subject-specific differences. And particularist accounts of confirmation fly in the face of a relatively long tradition of confirmation theory. It is not clear that the question of whether useful realist criteria exist can be effectively resolved at this level of generality.
More pivotal is the question of the prospects for the globalist research programme, particularly the selective realist programme. In this programme, realists attempt to identify a more refined criterion for what counts as a 'good' argument for realist commitment, as opposed to a 'bad' argument, without giving up on the idea that this criterion may be applicable to cases across science. The project requires both careful study of specific cases and a comparison of cases to test whether particular conjectured criteria might work.6 The global realist needs to show that the proposed refined criterion is not subject to counterexamples from the history of science. Cases continue to be gathered which challenge various proposed realist criteria (e.g. Saatsi and Vickers 2011; Vickers 2013; Lyons 2006). A full assessment of how successful the various proposals from selective realists are cannot be undertaken here. (See I. Votsis, "Structural realism and its variants," and M. Egg, "Entity realism," chs. 9 and 10 of this volume.)
Going local is often presented as an exciting new approach to the realism debate. For instance, Magnus and Callender (2004) suggest that it will make the realism debate more 'profitable'.


However, in my view this is not the case. Granted, the local realist appears to have some promising resources for dealing with the constructive empiricist, since she can appeal to the type of Galilean manoeuvre suggested by Achinstein to undermine the observable-unobservable distinction in some cases. However, the localist has fewer resources than the globalist for dealing with the historical anti-realist.
The globalist hopes to identify the features which reliably indicate that a theory or component of a theory is likely to be approximately true. The absence of counterexamples in the history of science then provides a kind of objective support for the realist argument. It also provides a resource for dealing with disagreements over a particular case, since the realist can argue that the features they are identifying in a given case have a good track record historically. It is plausible that this kind of 'second-order evidence' is always, as Psillos suggests, an integral part of the evaluation of first-order evidence (Psillos 2011).
By contrast, the localist response adopts a particularist position, insisting that the reasons for realist commitment are highly case specific. The localist then faces something of a dilemma. As we have seen, it is important that the local realist provides some argument that the scientific reasoning is legitimate or philosophically respectable. This can be done by showing that the arguments in question conform to a recognised general form of good scientific reasoning. For example, Achinstein legitimates Perrin's preliminary arguments by claiming that they are instances of a causal-eliminative reasoning scheme (see section 5.2.1). In a similar vein, Salmon argues that Perrin's concordance arguments are 'philosophically impeccable' because they are instances of a legitimate use of the principle of common cause (Salmon 1984). However, when the reasoning is legitimated in this way, it becomes reasonable to consider whether other cases in which the reasoning took the same general form were equally successful. Thus, the threat of pessimistic historical arguments needs to be faced. There is in general a tension between providing a compelling justification for the scientific reasoning and having to face historical counterexamples.
By going completely local, the realist avoids having to deal with anti-realist historical arguments, but at the price of losing resources for the legitimation of the scientific reasoning. There is then not much more to say than that the scientific arguments 'look reasonable'. Such an approach may well seem compelling enough in a case like atomic theory, where we have the full weight of a further century of experience with atoms backing up our intuitions, and where IBM scientists are now able to manipulate atoms one by one to spell their company name. Although the specific arguments a local realist can present may indeed sound convincing in a historical case like Perrin's, it is not clear how far this strategy will get us in more contemporary or controversial cases.
There is a significant cost to localism as a response to anti-realist challenges based on the history of science. By going local, one can no longer appeal to historical evaluations of the reliability of particular features in order to support a realist argument. The prospects for the success of the global realist project are still unclear, but localism should be seen as a fall-back position rather than as something desirable in itself.

Notes
1 Such questions have been raised even by those working on the projects. For example, Peter Vickers says, "In all this, we find ourselves even 30+ years after Laudan (1981) unsure of the extent to which the divide et impera strategy can succeed" (Vickers 2013: 209).
2 Though see van Fraassen (2009) for a different interpretation.
3 See Achinstein (2007) and Psillos (2011) for discussions of the changing attitudes of various scientists.
4 Saatsi argues that this result for mass underpins the claim that the law of conservation of momentum also extends into the unobservable realm (Saatsi 2009: 15–16).


5 The hypothesis that Kyle Stanford (2009) suggests as an alternative overlooked by Roush – namely that Brownian motion is caused by "the interplay of electrostatic forces among the particles themselves, in conjunction with exchange forces with the medium" (260) – would in fact be eliminated by the demonstration that Brownian motion is random, since it would introduce correlations between the motions of the particles. See also Egg (2016: 18).
6 Examples are Psillos (1999); Kitcher (1993); and Egg (2016).

References

Achinstein, P. (2002) "Is There a Valid Experimental Argument for Scientific Realism?" Journal of Philosophy 99, 470–495.
——— (2007) "Atom's Empirical Eve: Methodological Disputes and How to Evaluate Them," Perspectives on Science 15(3), 359–390.
Asay, J. (2017) "Going Local: A Defense of Methodological Localism about Scientific Realism," Synthese. doi:10.1007/s11229-016-1072-6
Boyd, R. (1980) "Scientific Realism and Naturalistic Epistemology," in P. D. Asquith and R. N. Giere (eds.), PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1980, Chicago: University of Chicago Press, pp. 613–662.
——— (1983) "On the Current Status of the Issue of Scientific Realism," Erkenntnis 19(1/3), 45–90.
Cartwright, N. (1983) How the Laws of Physics Lie, Oxford: Oxford University Press.
Chakravartty, A. (1998) "Semirealism," Studies in History and Philosophy of Science 29, 391–408.
Churchland, P. (1985) "The Ontological Status of Observables: In Praise of the Superempirical Virtues," in P. M. Churchland and C. A. Hooker (eds.), Images of Science: Essays on Realism and Empiricism, with a Reply from Bas C. van Fraassen, Chicago: The University of Chicago Press, pp. 35–47.
Egg, M. (2016) "Expanding Our Grasp: Causal Knowledge and the Problem of Unconceived Alternatives," The British Journal for the Philosophy of Science 67(1), 115–141.
Fine, A. (1986) "Unnatural Attitudes: Realist and Instrumentalist Attachments to Science," Mind 95(378), 149–179.
Fitzpatrick, S. (2013) "Doing Away with the No Miracle Argument," in V. Karakostas and D. Dieks (eds.), EPSA11 Perspectives and Foundational Problems in Philosophy of Science (The European Philosophy of Science Association Proceedings), vol. 2, New York: Springer, pp. 141–151.
van Fraassen, B. C. (2009) "The Perils of Perrin, in the Hands of Philosophers," Philosophical Studies 143, 5–24.
French, S. (2006) "Structure as a Weapon of the Realist," Proceedings of the Aristotelian Society 106(2), 1–19.
Frost-Arnold, G. (2010) "The No-Miracles Argument for Realism: Inference to an Unacceptable Explanation," Philosophy of Science 77, 35–58.
Hacking, I. (1982) "Experimentation and Scientific Realism," Philosophical Topics 13, 71–87.
——— (1985) "Do We See through a Microscope?" in P. M. Churchland and C. A. Hooker (eds.), Images of Science: Essays on Realism and Empiricism, with a Reply from Bas C. van Fraassen, Chicago: The University of Chicago Press, pp. 132–152.
Henderson, L. (2017) "The No Miracles Argument and the Base Rate Fallacy," Synthese 194(4), 1295–1302.
Howson, C. (2000) Hume's Problem: Induction and the Justification of Belief, Oxford: Oxford University Press.
——— (2013) "Exhuming the No-Miracles Argument," Analysis 73(2), 205–211.
Kitcher, P. (1993) The Advancement of Science, Oxford: Oxford University Press.
——— (2001) "Real Realism: The Galilean Strategy," The Philosophical Review 110(2), 151–197.
Ladyman, J. (1998) "What Is Structural Realism?" Studies in History and Philosophy of Science 29, 409–424.
Laudan, L. (1981) "A Confutation of Convergent Realism," Philosophy of Science 48(1), 19–49.
Leplin, J. (1997) A Novel Defense of Scientific Realism, Oxford: Oxford University Press.
Lyons, T. D. (2006) "Scientific Realism and the Stratagema de Divide et Impera," The British Journal for the Philosophy of Science 57(3), 537–560.
Magnus, P. D. and Callender, C. (2004) "Realist Ennui and the Base Rate Fallacy," Philosophy of Science 71(3), 320–338.
Matheson, C. (1998) "Why the No-Miracles Argument Fails," International Studies in the Philosophy of Science 12(3), 263–279.
Maxwell, G. (1962) "On the Ontological Status of Theoretical Entities," in H. Feigl and G. Maxwell (eds.), Scientific Explanation, Space, and Time, Minnesota Studies in the Philosophy of Science, vol. 3, Minneapolis: University of Minnesota Press, pp. 3–27.
Maxwell, J. C. (1878) "Ether," Encyclopedia Britannica 8, 568–572.
Perrin, J. (1916) Atoms (D. L. Hammick, trans.), New York: D. van Nostrand Co.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
——— (2011) "Moving Molecules above the Scientific Horizon: On Perrin's Case for Realism," Journal for General Philosophy of Science 42(2), 339–363.
Putnam, H. (1975) Philosophical Papers: Volume 1, Mathematics, Matter and Method, Cambridge: Cambridge University Press.
Roush, S. (2006) Tracking Truth, Oxford: Oxford University Press.
Saatsi, J. (2009) "Form vs. Content-Driven Arguments for Realism," in P. D. Magnus and J. Busch (eds.), New Waves in Philosophy of Science, London: Palgrave-Macmillan, pp. 8–28.
——— (2017) "Replacing Recipe Realism," Synthese 194(9), 3233–3244.
Saatsi, J. and Vickers, P. (2011) "Miraculous Success? Inconsistency and Untruth in Kirchhoff's Diffraction Theory," The British Journal for the Philosophy of Science 62(1), 29–46.
Salmon, W. C. (1984) Scientific Explanation and the Causal Structure of the World, Princeton: Princeton University Press.
Stanford, P. K. (2003) "No Refuge for Realism: Selective Confirmation and the History of Science," Philosophy of Science 70(5), 913–925.
——— (2006) Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, Oxford: Oxford University Press.
——— (2009) "Scientific Realism, the Atomic Theory, and the Catch-All Hypothesis: Can We Test Fundamental Theories against All Serious Alternatives?" The British Journal for the Philosophy of Science 60(2), 253–269.
Vickers, P. (2013) "A Confrontation of Convergent Realism," Philosophy of Science 80(2), 189–211.
Worrall, J. (1989) "Structural Realism: The Best of Both Worlds?" Dialectica 43, 99–124.

13 PERSPECTIVISM

Michela Massimi

1 Locating perspectivism in the landscape of realism

Among the many varieties of realism in contemporary philosophy of science, perspectivism – or better, perspectival realism – is one of the latest attempts at a middle ground between scientific realism and antirealism.1 What kind of middle ground can perspectival realism possibly deliver which has not already been explored by structural realism, semi-realism, entity realism, and selective realism, among others? In this chapter, I clarify (i) what perspectivism is, (ii) whether it can be made compatible with realism, and (iii) what it has to offer in terms of novel middle ground.
Some definitions first. The term perspectivism is used to denote a family of positions that in different ways place emphasis on our scientific knowledge being situated. Thus, I take perspectivism to be first and foremost an epistemic view about the nature of our scientific knowledge. It is not intended to be a metaphysical view about scientific facts being perspectival or natural kinds being relative to scientific perspectives. Scientific knowledge is here broadly understood to include scientific representations, modeling practices, data gathering, data analysis, and, more generally, scientific theories involved in the production of scientific knowledge. Being situated is understood in two main ways:

(1) Our scientific knowledge is historically situated, that is, it is the inevitable product of the historical period to which those scientific representations, modeling practices, data gathering, and scientific theories belong.

And/Or

(2) Our scientific knowledge is culturally situated, that is, it is the inevitable product of the prevailing cultural tradition in which those scientific representations, modeling practices, data gathering, and scientific theories were formulated.

The "and/or" is important. Some authors (notably Ron Giere, the main advocate of the position) understand (1) and (2) as part of a unified picture about how scientific knowledge is situated. But several other authors would privilege and focus on either (1) or (2). Most of the recent discussions of perspectivism have indeed focused on (2) rather than on (1), for reasons that – as I explain in the next two sections – have to do with two different and seemingly independent rationales for perspectivism.
Thus, the first distinction to draw is between a diachronic version of perspectivism, along the lines of (1), which places emphasis on the historical component, and a synchronic version of perspectivism, along the lines of (2), which explores how different research programmes or alternative modeling practices (within the same historical period) may give rise to perspectival knowledge. The arguments and rationales for endorsing the diachronic or synchronic version of perspectivism are sufficiently distinct and independent of each other (see sections 2 and 3). Some of the challenges that perspectivism (in either version) faces are similar, and similar are the answers that – in my view – perspectivism ought to give to these challenges, as I argue in section 4.
That our scientific knowledge is situated (either historically or culturally or both) may not strike one as breaking news. It is at least as old as Kant's Copernican Revolution and Nietzsche's perspectivism. No wonder, then, that both are often indicated as forefathers to the position. Contemporary perspectivism indeed shares its Kantian roots with Putnam's internal realism. Like Putnam's internal realism, perspectivism too is reacting against metaphysical realism and the so-called God's eye view that claims, "the world consists of some fixed totality of mind-independent objects. There is exactly one true and complete description of 'the way the world is'" (Putnam 1982: 49). There cannot be an objective, unique, true description of the way the world is as soon as we acknowledge that our scientific knowledge is always from a specific vantage point – either in the sense of (1) or (2) or both. Even more so, if we further acknowledge that these specific vantage points are often in conflict with each other (e.g. Ptolemaic astronomy vs. Copernican astronomy; electrons as corpuscles vs. electrons as elements of an ether in nineteenth-century physics, and so on).
Kant's rediscovery of the human vantage point – the only vantage point from which knowledge of nature is possible for us – chimes with perspectivism's insistence on the situated nature of our scientific knowledge. In this sense, contemporary perspectivism and Putnam's internal realism are two branches of the same Kantian tree. Yet what is most interesting is not their (common) Kantian roots but their 'branching', so to speak. The two positions part ways in their underpinning rationales and readings of Kant and his legacy. Internal realism is primarily prompted by semantic considerations, namely by Putnam's famous permutation argument (Putnam 1982: ch. 2). The argument is designed to show that – even granted the truth-values of sentences can be held fixed in every possible world – the reference of the main terms would nonetheless be indeterminate. For it is possible to interpret the language so that 'cat*' refers to cherries and 'mat*' refers to trees. Thus, the sentence "A cat* is on a mat*" is still true in every possible world where "A cat is on a mat" is true, although the reference of cat* is permuted to pick out cherries and the reference of mat* is permuted to pick out trees (for a recent critical discussion, see Button 2013). No similar semantic concern prompts perspectivism in philosophy of science. Instead, epistemic and methodological concerns primarily motivate perspectivism (as we discuss in sections 2 and 3). But there is more.
Putnam read Kant as the first philosopher to advocate an ‘internal realist’ view of truth, whereby truth is “a statement that a rational being would accept on sufficient experience of the kind that it is actually possible for beings with our nature to have” (Putnam 1982: 64). Perspectivism – I take it – sees in Kant the first philosopher to defend the view that

Reason, in order to be taught by nature, must approach nature with its principles in one hand, . . . and, in the other hand, the experiments thought out in accordance with these principles – yet in order to be instructed by nature not like a pupil, who has recited to him whatever the teacher wants to say, but like an appointed judge who compels witnesses to answer the questions he puts to them. (Kant 1781/87/1997: Bxiii–xiv)

The relevance of Kant's Copernican Revolution for perspectivism lies in the emphasis placed on the human vantage point from which questions about nature can be asked. That Kant defended truth as idealized rational assertibility (à la Putnam) is not clear from textual evidence. That Kant advocated objective and necessary knowledge of nature – incompatible with perspectivism's pluralism – is, however, uncontroversial. Thus, the Kantian legacy for perspectivism lies, in my view, neither in the notion of truth nor in the notion of objectivity. It lies instead in the acknowledgement of the human vantage point (as opposed to the God's eye view) from which alone knowledge of nature becomes possible for us. Perspectival knowledge is knowledge from a human vantage point (although not necessarily along the Kantian lines of postulating a priori conditions of possibility of experience in the faculty of sensibility and the faculty of understanding, of course). In the next two sections, I unpack the epistemic grounds (section 2) and the methodological grounds (section 3), which are respectively at work in motivating the two (diachronic and synchronic) versions of perspectivism.

2 Epistemic grounds for diachronic perspectivism: Giere's perspectivism

That scientific knowledge is perspectival is evident from historical records. Knowledge produced by different scientific communities across different historical periods is perspectival: it is the product of specific choices of instruments, theoretical apparatuses, and measurement techniques idiosyncratic to any given scientific community at any given historical time. Thomas Kuhn first brought this perspectival feature of scientific knowledge to general attention. He did not quite call it such, opting instead for the idiom of 'scientific paradigms' or 'scientific lexicons' (rather than 'scientific perspectives'). Yet Kuhn's rationale is pretty much the same rationale that motivates the most prominent advocate of scientific perspectivism: Ron Giere (2006).
But it is not the late Kuhn of semantic incommensurability that provides the inspiration for Giere's perspectivism (2006: 83): "It is clear, I think, that there are no problems of linguistic incommensurability for perspectives". Instead, it is the early Kuhn, who defended epistemic incommensurability as rooted in perspectival theories, data, methodologies, and values of different historical communities.
Key to Giere's perspectivism is the extension of the relational and perspectival metaphor of color vision to the whole body of scientific knowledge. Our ability to discern colors is due to our human vantage point and the ensuing relation among the refractive index of any given surface, light rays, and our retinas. Similarly, measurements in science are the outcomes of relations between the target system and our technical apparatus (i.e. with its settings, relevant parameters, etc.).
Giere argues that scientific knowledge is perspectival all the way up, not just at the level of observation and measurement. Models and scientific theories are perspectival too: "Newton's laws characterize the classical mechanical perspective; Maxwell's laws characterize the classical electromagnetic perspective; the Schrödinger Equation characterizes a quantum mechanical perspective" (Giere 2006: 14). According to Giere's model-based view of scientific theories, perspectivism affects each level of the hierarchy of models (from data models to representational models, all the way up to scientific principles). Experimental data are perspectival because they reflect the nature of the chosen measuring instrument (e.g. what counts as a relevant or irrelevant parameter; how statistical errors and background noise are controlled, etc.). The representational models are themselves perspectival in idealizing some factors and abstracting from others (e.g. in the model of the harmonic oscillator, mass is idealized as a point-mass, and displacement from equilibrium abstracts from disturbing factors). Finally, the general principles (e.g. Newton's laws) that inform the choice of the representational models are themselves perspectival.
Giere's perspectivism shares with Kuhn's scientific paradigms the idea that "Claims about the truth of scientific statements or the fit of models to the world are made within paradigms or perspectives" (Giere 2006: 82). Or "truth claims are always relative to a perspective" (Giere 2006: 81). Giere's perspectivism takes on board the Kuhnian insight that there is no cross-paradigmatic (or cross-perspectival) notion of truth at the end of scientific inquiry. What counts as true (or false) is simply a function of how particular data models fit particular theoretical models. And since both kinds of models are perspectival, as a result, any model-based knowledge claim is either true or false only within the boundaries of the chosen scientific perspective. Under this influential epistemological reading of perspectivism, truth is relativized to scientific perspectives. Giere's epistemological argument for perspectivism can be summed up as follows:

(A) Our scientific knowledge is perspectival because scientific knowledge claims are only possible within a (historically) well-defined family of models (e.g. the Newtonian perspective, the Maxwellian perspective, etc.), which constrain both the data available (via data models) and the interpretation of those data (via theoretical models and principles of the scientific perspective adopted). No knowledge of nature is possible outside the boundaries of historically well-defined scientific perspectives.

Despite clear similarities with the Kuhnian picture, it should be clear that a "scientific perspective", in Giere's own use of the term, differs from a scientific paradigm in some relevant respects. A scientific paradigm – or, better, what Kuhn called a "disciplinary matrix" – is broader than a scientific perspective (Giere 2006: 82) in including (in addition to data models and representational models) also the systems of values and metaphysical beliefs of a given community at any given time.
Beyond Kuhn, the second great influence on Giere's perspectivism comes from science studies, with their post-Vietnam disillusion about the liberating power of science, and from the contingency thesis defended by the sociology of scientific knowledge (SSK). From this second main source, Giere's perspectivism inherits the rejection of 'absolute objectivism', that is, the idea that scientific knowledge is the body of objective, non-contingent, socially neutral, and value-free truths. Diachronic perspectivism takes on board the lesson from the history of science and from scientific practice in acknowledging that no such objective, God's eye scientific knowledge is ever available to us. Hence, perspectival realism is defined by Giere as the view that says,

"According to this highly confirmed theory (or reliable instrument), the world seems to be roughly such and such". There is no way legitimately to take the further objectivist step and declare unconditionally: "This theory (or instrument) provides us with a complete and literally correct picture of the world itself". The main thrust of the arguments to be presented in this book is to show that the practice of science itself supports a perspectival rather than an objectivist understanding of scientific realism. (Giere 2006: 6)

To conclude, diachronic perspectivism is the view that takes the lessons from the history of science and scientific practice to support the conclusion that scientific knowledge is perspectival. The arguments behind this view are epistemic: they concern the elaboration and justification of scientific knowledge claims within scientific perspectives – broadly understood as hierarchies of models typical of any given historical period (e.g. the Newtonian perspective, the Maxwellian perspective, and so on). The realist's truth and objectivity are both called into question. Truth is relativized to perspectives and objectivity abandoned altogether.
Is Giere's perspectival realism realist enough to qualify for the title 'realism'? Doubts arise as to whether relativizing truth to perspectives makes the position closer to relativism than to realism (Massimi 2015a raises and discusses this critique). We will go back to both truth and objectivity in section 4, where we discuss whether perspectivism can in fact be made compatible with realism. Before doing so, we must turn our attention to other methodological (more than epistemic) considerations at work behind perspectivism.

3 Methodological grounds for synchronic perspectivism: incompatible models

Engagement with the history of science is not the only possible avenue to perspectivism. After all, perspectivism in its synchronic version is a thesis about how – within the same historical period – different research programmes or alternative modeling practices may give rise to perspectival knowledge. Unsurprisingly, perspectivism has recently been advocated in the context of the vast literature on scientific models.
In a seminal paper, Alexander Rueger (2005) addresses the problem of how scientific realism can handle the problem of incompatible models for the same target system. Consider for example atomic models. They come in at least four main families (quark models, cluster models, shell models, and liquid drop models). Each of these families provides a different (incompatible) description of the atomic structure, its properties, and dynamics. Scientific realism holds that there are mind-independent facts about atoms. Thus, one would expect our best models to deliver a description of these facts. Instead, we are left with a plethora of models providing incompatible representations of the atom and its intrinsic properties and dynamical processes.
To solve the problem, Rueger makes a perspectival move. For different models seem to offer different perspectives on the same target system (e.g. the atom), whereby intrinsic properties of the target system turn out to be in fact relational properties. In this way, models do not deliver incompatible images of the same target system. Rather, they deliver only partial and perspectival images that can still be unified into a final coherent image, as realism would have it.
Along similar lines, Paul Teller (2011) has argued that since models are idealizations (hence, inevitably imprecise), it is possible to have more than one model for the same target system (e.g. hydrodynamics and statistical mechanics for the description of water's properties) without having to forgo realism. Neither of these models can legitimately be regarded as delivering the exact truth about the properties of water, because truth remains a non-attainable goal of scientific inquiry. (For a similar defense of the perspectival nature of scientific representation, see also van Fraassen 2008.) In its place, science offers a series of partial, idealized, imprecise perspectival images, which nonetheless succeed in advancing our knowledge of nature.
The Rueger–Teller argument for perspectivism – as I am going to call it – is then a methodological argument (rather than an epistemological one) that goes as follows:

(B) Our scientific knowledge is perspectival because scientific knowledge claims are only possible within (culturally) well-defined families of models of any given scientific perspective at any given time (e.g. hydrodynamics and statistical mechanics in the contemporary treatment of water; the quark model, liquid drop model, shell model, and cluster model for the contemporary atomic theory). No knowledge of nature is possible outside the boundaries of culturally well-defined scientific perspectives with their pluralism about models.


There are analogies and disanalogies between (A) and (B). The epistemological and the methodological arguments share the Kantian insight about the central relevance of our conditions of possibility of knowledge. Our human (historical and cultural) vantage point shapes and makes possible our scientific knowledge claims about nature. Hence, the Kantian nature of perspectivism. However, (A) and (B) differ in the way they explicate the perspectival nature of our scientific knowledge. While (A) and (B) share a common rejection of objectivity, they diverge about scientific truth. (A) argues that the truth of our scientific knowledge claims is relative to historically defined scientific perspectives (e.g. what was true for Ptolemy proved false for Copernicus). By contrast, (B) takes pluralism about models as an indication either (Bi) that truth is a non-attainable (and maybe not-so-desirable) goal of scientific inquiry (Teller), or (Bii) that truth can be preserved if we understand property assignment to a target system as perspectival (Rueger).
I have briefly mentioned at the end of section 2 the somewhat relativist flavor that seems to affect Giere's (A). Here I want to mention two responses to (B) that drive a wedge in the choice between (Bi) and (Bii), with the result of making perspectivism fall back onto either a form of instrumentalism or a form of scientific realism.
Margaret Morrison (2011) has challenged the claim behind (Bi) that partial, idealized, imprecise models can nonetheless expand our overall knowledge of the target system, even if truth remains a non-achievable goal. Consider nuclear physics. It currently features thirty alternative and inconsistent (not just incompatible) models that neither individually nor jointly are able to answer basic questions about the nature of nucleons and of the nuclear force. Morrison (2011: 351) concludes,

So, we are left in an epistemic quandary when trying to evaluate the realistic status of these nuclear models and the information they provide. (. . .) What is perhaps significant for philosophical purposes is that this is not a situation that is resolvable using strategies like partial structures, paraconsistent logic or perspectivism. No amount of philosophical wizardry can solve what is essentially a scientific problem of consistency and coherence in theoretical knowledge.

On Morrison's reading, then, the methodological argument for perspectivism (Bi), with its pluralism about models, slides into a dangerous form of 'view from everywhere', which – as in Pirandello's play One, No One and One Hundred Thousand – makes perspectivism akin to a sophisticated form of instrumentalism about science. Maybe none of the available models is the true model of the nucleus; and maybe that is not a problem insofar as practitioners can avail themselves of several models as they see fit for their purposes every time. Yet if this is indeed the situation, it is bad news for the prospect of cashing out perspectivism as a kind of realism.
The opposite risk with (Bii) is that by understanding all properties of scientific objects as perspectival/relational qualities, the door is open to the scientific realist's rejoinder made by Anjan Chakravartty (2010: 410) that "putatively perspectival facts may be straightforwardly understood as non-perspectival facts regarding how behavioural dispositions are manifested under different stimulus conditions. Let us label these 'dispositional facts'". The prima facie inconsistency among properties assigned by different models need not be understood in terms of perspectival images of the same target system, ascribing different relational properties to the same entity. It can more easily be understood in the idiom of scientific realism – in terms of dispositional properties of objects manifesting themselves in various stimulus conditions. Different contexts of investigation, different measurement systems and modeling techniques can simply elicit different non-perspectival, dispositional facts about the scientific entities, their properties, and manifestations. Prima facie relational qualities are nothing over and above different manifestation conditions for the same non-perspectival, dispositional properties of scientific entities. Perspectivism along the lines of (Bii) falls back onto dispositional realism.
Trapped between the charge of instrumentalism, on the one hand, and the risk of falling back onto dispositional realism, on the other, the chances of cashing out a promising version of perspectivism seem to be dwindling. Can perspectivism fulfill the Kantian promise of knowledge from a human point of view and, at the same time, be made compatible with realism? Can it deliver a middle ground in between realism and anti-realism in science? The next section explains why defenders of perspectivism might still have reasons for optimism.

4 Can perspectivism be made compatible with realism?

The epistemological argument for perspectivism left us with doubts about a lingering form of relativism (e.g. is truth really relative to perspectives, as Giere maintains?). The methodological argument landed perspectivism in the quandary of being either a form of instrumentalism about science (Bi) or collapsing onto dispositional realism (Bii). Where does the Kantian insight behind perspectivism go astray? And can it be made compatible with realism? In what follows I offer my own diagnosis of the problem and suggest a possible way forward (programmatic as it is at this stage).
Philosophers on either side of the debate on perspectivism have at best failed to draw a clear-cut distinction between objectivity and truth; at worst, they have conflated the two. For often enough perspectivism is presented as a view about facts being perspectival; or about properties being relational; or about truth being relative. Couched in this language, it is no surprise that perspectivism verges on either fact-constructivism or alethic relativism – and, needless to say, either option is a non-starter for cashing out a viable form of perspectival realism (if the title realism is still meaningful). If perspectivism is to be made compatible with realism, something ought to be said about facts not being shaped by scientific perspectives or truth not being relativized to them. The culprit behind the muddy waters surrounding contemporary discussions of perspectivism is, in my view, the tendency to understand the rejection of scientific objectivity (qua God's eye view on nature) as tantamount to a much stronger (and non sequitur) claim about worldly states of affairs being relative to scientific perspectives.
Perspectivism is often cast in the Kuhnian mould, whereby scientific communities are taken as producers and validators of their own knowledge claims, with no mind-independent states of affairs or norms for truth outside the boundaries of historically defined perspectives. In my view, this is an erroneous take on the perspectivist insight that no God's eye view of nature is ever available to us. For one can accept and fully endorse that scientific inquiry is indeed pluralistic and that there is no unique, objective, and privileged epistemic vantage point without necessarily having to conclude that perspectives shape scientific facts or relativize truth. In fact, I have argued elsewhere (Massimi 2015b) that not even Kuhn endorsed these bold and questionable views. And certainly, no contemporary perspectivalist should endorse them either. As both the epistemological and the methodological arguments jointly show, all there is to the Kantian/Kuhnian perspectivalist stance is that no knowledge of nature is possible outside the boundaries of historically and culturally well-defined scientific perspectives. Epistemic pluralism speaks against 'objectivist' realism as the view that there is a unique, objective, privileged standpoint for scientific investigation. But epistemic pluralism – per se – does not also speak against truth or against perspective-independent facts.
Thus, I suggest that we understand perspectival realism (PR) as a form of realism about science along the following threefold lines:

(I) PR endorses the realist metaphysical tenet about a mind-independent (and perspective-independent) world;2


(II) PR endorses the realist semantic tenet about a literal construal of the language of science;3
(III) Finally, PR endorses the realist epistemic tenet in thinking that acceptance of a theory implies the belief that the theory is true (and even shares the realist intuition that truth has to be cashed out in terms of correspondence, rather than coherence, warranted assertibility, and so forth).4

PR – as I understand it – can share with scientific realism the view that worldly states of affairs, the language of science, and truth as correspondence are all perspective-independent. What makes PR perspectival, then? How does it differ from scientific realism? In my view, the rejection of the God's eye view in PR leads to a genuinely novel and fruitful notion of perspectival truth and of scientific progress across perspectives (among several other relevant aspects). Let us take a quick look at both of them (more details can be found in Massimi 2016a and 2016b, respectively).

What is it like to be true within a perspective? Perspectival truth can be regarded as a form of perspective sensitivity, whereby scientific perspectives provide the circumstances or context of use defining the truth-conditions for knowledge claims in science. For example, we can interpret scientific models (with their inaccurate idealizations of the target system) as filling in contextual truth-conditions (understood as rules for determining truth-values based on features of the context). Perspectival truth is then truth (qua correspondence with mind-independent states of affairs) but contextualized within the limits afforded by rival scientific models or rival historical perspectives. To use a toy example taken from Teller (2011), when asking what the properties of water are (for example, whether water is indeed a fluid with viscosity), one might get different answers depending on whether the context of use for this scientific knowledge claim is defined by hydrodynamics or by statistical mechanics (which treats water as a statistical ensemble of molecules). However, this is not a case of relativized truth, where the same propositional content can be assigned different truth-values (true or false) by different perspectives. It is in fact possible to retrieve true knowledge claims about what appears as a primitive property of water (viscosity) in hydrodynamics from the statistical properties of the molecules' mean flow in the statistical-mechanical perspective, without having to conclude that the very same knowledge claim is true in one perspective and false in the other. There are facts about water and its properties that are independent of scientific perspectives. And, more to the point, a perspectival realist can get these facts right. Perspectives – under the reading I am suggesting – provide contextual truth-conditions, which often enough can be 'translated', so to speak, from one scientific perspective to another.5

As a result, we need to rethink the semantic requirement (II). It is still the case that – under this reading of PR – the language of science is interpreted literally (pace Putnam's permutation argument), so that 'viscosity' refers to viscosity (and cannot be concocted to refer to electric charge, for example). However, contextual truth-conditions imply that the exact reference of the term 'viscosity' is somehow undetermined until the contextual truth-conditions are put in place.6

Thus, I take PR to share important aspects with contextualism. Truth-conditions for scientific knowledge claims vary in interesting ways depending on the context in which they are uttered and used. At the same time, PR differs from a linguistically geared view such as contextualism in acknowledging with (I) that there are perspective-independent worldly states of affairs that ultimately make our scientific knowledge claims true or false.
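To see more concretely how the two contexts of use in Teller's example differ, it may help to display the standard textbook formulas involved (a minimal sketch of my own, not Teller's or Massimi's; body forces are omitted for simplicity). In the hydrodynamic perspective, viscosity enters as a primitive coefficient μ in the Navier-Stokes equations for an incompressible fluid:

\[
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u},
\]

where \(\mathbf{u}\) is the flow velocity, \(\rho\) the density, and \(p\) the pressure. In the statistical-mechanical perspective, elementary kinetic theory recovers (approximately) the same coefficient from the transport of momentum by molecules across laminae:

\[
\mu \approx \tfrac{1}{3}\,\rho\,\bar{v}\,\lambda,
\]

with \(\bar{v}\) the mean molecular speed and \(\lambda\) the mean free path. The two perspectives fix the reference of 'viscosity' differently – a primitive dynamic property in one, a statistical property of molecular motion in the other – and yet the second expression 'translates' the first in just the sense of notes 5 and 6 below.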
There are worldly states of affairs about water's viscosity that ultimately make (X) either true or false. Yet our ability to know these states of affairs (and hence to ascribe a truth-value to the relevant knowledge claim) depends inevitably on the perspectival circumstances or context of use. For me now to know that (X), for example, some perspectival truth-conditions have to be met: I may want to check that samples of water satisfy the Navier-Stokes equations; I can look at rheology for predicting the mechanical behaviour of water under the action of particular forces and stresses; I can run tests on water samples (relying on different viscoelastic models); and so forth.

Contextual truth-conditions can then be understood in terms of perspectival standards of performance adequacy that a scientific knowledge claim has to satisfy (for details on this notion, please see Massimi 2016a, 2016b). While perspective-independent states of affairs are ultimately the tribunal that decides whether any knowledge claim is true or false, for us to know that (X), for example, is true, it has to be the case not only that (X) matches some worldly state of affairs but also that it meets the relevant perspectival standards of performance adequacy in its context of use.

This is where the perspectival story becomes salient. While a scientific realist would consider 'correspondence with the world' enough for the purpose of realism, a perspectival realist (who takes the situated nature of our scientific knowledge to heart) would not. After all, scientific knowledge claims are not truths sub specie aeternitatis. They are instead the expression of particular communities at particular historical times, working within well-defined intellectual traditions. Hence the importance of perspectival standards of performance adequacy in the original context of use in defining whether any scientific knowledge claim is (perspectivally) true. Conversely, perspectival standards of performance adequacy (in and of themselves) are not enough to define whether a scientific knowledge claim is true or false. PR has to be able to assess falsehood and to make sure that our perspectival knowledge claims do indeed latch onto perspective-independent states of affairs. Thus, the perspectival standards adopted by any given community at any given time are necessary but not sufficient to establish whether any given scientific claim is indeed true. Scientific perspectives cannot sanction their own scientific truths. For this reason, we need to bring in another element: the notion of scientific progress across perspectives.

Scientific progress can be characterized in perspectivalist terms if we take scientific perspectives not just as contexts of use – laying down specific standards of performance adequacy for knowledge claims – but also, and most importantly, as contexts of assessment, offering standpoints from which claims of other (past) scientific perspectives can be evaluated (in terms of their ongoing performance adequacy as set out by their original standards). Thus, scientific claims of our historical predecessors can be retained or withdrawn, depending on whether they continue to satisfy their original standards of performance adequacy when assessed from another (subsequent) perspective. Fresnel's equations are still (to some extent) part of our current scientific perspective because of their ongoing performance adequacy when assessed from our current vantage point. Ancient Greek crystalline spheres are no longer part of our current scientific perspective because they have long lost their performance adequacy with respect to their own original standards (e.g. stability of circular orbits, agreement with astronomical data, neat division between celestial and terrestrial phenomena, etc.).
By the time of Kepler, Galileo, and Newton, crystalline spheres had proved incapable of delivering on their own original standards: orbits turned out to be elliptical; the Prutenic tables replaced the Alfonsine tables, which could not deliver the right date for Easter; orbital stability was now explained by the parallelogram law; and the rationale and evidence for the divide between terrestrial and celestial physics had been removed once and for all. Hence, crystalline spheres failed by their own standards (following Kuhn on anomalies and periods of crisis) and were eventually withdrawn (rather than retained) from the repertoire of the new Newtonian perspective. Were crystalline spheres ever (perspectivally) true in their own original scientific perspective? No, insofar as there are (as far as we can tell) no crystalline spheres in nature (the ancient Greek perspective cannot sanction its own truths).

Yet crystalline spheres did perform an important function in the original perspective: they provided a first hypothetical mechanism for planetary motion, for which a more adequate mechanism was later provided by Newton's gravity (as a force acting at a distance), and an even more adequate mechanism was subsequently provided by Einstein's general relativity (in terms of the mass-energy tensor curving spacetime). In what sense were these mechanisms 'more adequate' in subsequent perspectives? Because they took a primitive, hypothetical, not-further-explainable mechanism for orbital motion (crystalline spheres) and replaced it with new mechanisms that satisfied standards of performance adequacy such as experimental accuracy (e.g. via Brahe's observational data for Kepler's elliptical orbits, or Eddington's solar eclipse expedition for general relativity), projectibility (e.g. Newton's law of gravity; Einstein's principle of equivalence), and so forth.

Thus, scientific progress tracks (to the best of our fallible and revisable knowledge in any given historical period) worldly states of affairs across scientific perspectives. Scientific knowledge claims of our predecessors are not discarded out of hand as simply false. Instead, their important role in the experimental, theoretical, and conceptual evolution of our current scientific toolkit is acknowledged and embraced. Most importantly, PR does not take our current scientific toolkit as the privileged, unique available one. It recognizes instead that our current vantage point is just one among many others that have preceded us and that will follow us. Hence, a defender of PR would not say, "our current scientific truths are the best ones, full stop. We got it right!" Instead, she ought to say, "our current scientific truths are the best ones we are entitled to by our own lights as of today". PR amounts to a form of epistemic humility when it comes to truth and progress in science.

5 Pluralism and pragmatism

Epistemic humility is not, however, a feature unique to PR. It is common to varieties of realism that share with PR a commitment to pluralism more generally. John Dupré's (1981) promiscuous realism, Sandra Mitchell's pragmatic realism (in Mitchell and Gronenborn 2017), Chakravartty's (2011) sociability-based pluralism, and Kitcher's (2001) real realism, among many others, are all equally committed to a form of pluralism and hence to epistemic humility. What kind of pluralism is distinctive of PR? And how does it differ from the pluralism at stake in these other positions?

As the analysis in the preceding sections indicates, the pluralism at stake in PR originates from the perspective-sensitivity of our scientific knowledge claims. This is pluralism neither at the level of (perspective-independent) facts nor at the level of taxonomic classifications. It is instead a distinctively epistemic form of pluralism, in that it brings in the very many contextual/perspectival conditions of knowing. How does this form of epistemic pluralism square with others when it comes to metaphysical import? Consider the following three cases:

(i) Chakravartty's sociability-based pluralism can be regarded as a minimal pluralist requirement that can be made compatible with realism. Groupings of properties can be carved and re-carved in many ways (although some of these properties have a tendency to 'hang out' together more than others). Thus, pluralism about taxonomic classifications (qua sociable groups of properties) implies pluralism about mind-independent kinds in nature.
(ii) Dupré's promiscuous realism embraces a form of taxonomic pluralism too, whereby the same object may feature in cross-cutting classifications (e.g. the gardener's and the botanist's classifications of the lily; hemlock as a vegetable and a poison). Yet Dupré's taxonomic pluralism is non-committal as to the mind-independence of kinds. Indeed, it is best understood as realism about individuals (e.g. this particular lily) but not about natural kinds qua higher taxa (e.g. Liliaceae).
(iii) Pragmatic realism also embraces a form of taxonomic pluralism, which, on the face of it, seems very close to the contextualist considerations behind PR. For the pragmatic realist commits herself to different descriptions of the same target system, depending on the particular perspectival practice relevant to any given situation. For example, in the case of protein folding, there are three different models (in vivo, in vitro, and in silico) that can interchangeably be used to describe the same phenomenon for different purposes. The prima facie incompatibility of these different modeling practices does not undermine realism, because of the purely pragmatic stance towards the metaphysics of science.

PR differs from (i)–(iii) as follows. By contrast with the ontological pluralism seemingly implied by Chakravartty's sociable properties, PR keeps epistemic pluralism safely detached from ontological pluralism. While agreeing with sociability-based pluralism about patterns of mind-independent properties in nature, PR's pluralism does not buy into its ontological implications. Our multiple ways of knowing do not track multiple ontologies in nature. At best, they track the same historically and culturally evolving kinds as per (A) and (B). While promiscuous realism is ultimately nominalist about natural kinds, PR is committed to a realist view about kinds. Natural kinds are real because they map onto mind-independent clusters of properties in nature. It is our conditions of possibility of knowing these clusters of properties that depend (in relevant ways) on our scientific perspective. By contrast with pragmatic realism, PR holds that a commitment to pluralism is not purely pragmatic – that is, it is not just a matter of utility or of serving particular functions in particular contexts. The epistemic pluralism at the heart of PR serves instead the purpose of tracking truth across perspectives. As such, it answers to the realist's plea for truth rather than to the pragmatist's quest for fulfilling the epistemic needs of agents in scientific research.

Acknowledgements

I thank Juha Saatsi for giving me the opportunity to contribute to this volume and for careful editorial comments. This article is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC Consolidator Grant, agreement H2020-ERC-2014-CoG 647272, Perspectival Realism. Science, Knowledge, and Truth from a Human Vantage Point).

Notes

1 Other prominent middle grounds in this debate are structural realism (Worrall 1989) and semirealism (Chakravartty 1998), just to mention two.
2 This requirement is essential to avoid conflating PR with a form of constructivism or idealism. Scientific perspectives do not act as cookie cutters in the worldly dough; they do not shape facts or condition states of affairs. That electrons have negative charge or that carcinogenesis involves mutant k-ras genes has nothing to do with any currently accepted scientific perspective – these are perspective-independent facts that exist and would exist even if J. J. Thomson had not existed or we had not developed the medical theory about carcinogenesis that we have.
3 This second requirement captures the aforementioned semantic difference with Putnam's internal realism, which by contrast is motivated by the permutation argument (among others). A perspectival realist takes the language of science at face value, pretty much as a scientific realist would do (e.g. 'electron' has to refer to electron, and it cannot refer to cherries under some suitable construal of the language in some other possible world).

4 This third requirement safeguards PR from the threat of alethic relativism. See, for example, Rorty's (1993) famous objection to Putnam's internal realism, whereby the adoption of a theory of truth as idealized rational assertibility is vulnerable to the rejoinder that truth is nothing but assertibility 'for us at our best', as 'tolerant wet liberals' able to feel solidarity with a community that views p as warranted.
5 It is worth stressing that the mechanisms for 'translating' scientific knowledge claims between perspectives range from having inter-reduction rules (as in the example of water) to becoming a limiting case of a more general scientific knowledge claim in a new perspective, among many other possible mechanisms.
6 For example, in hydrodynamics, 'viscosity' refers to a primitive dynamic property in the distribution of the liquid flow. But in statistical mechanics, 'viscosity' refers to the momentum transport across laminae of molecules. Reference being undetermined is very different from reference being indeterminate (as with Quine and Putnam): 'viscosity' cannot be construed to refer to cherries in some other possible world, according to PR.

References

Button, T. (2013) The Limits of Realism, Oxford: Oxford University Press.
Chakravartty, A. (1998) "Semirealism," Studies in History and Philosophy of Science 29, 391–408.
——— (2010) "Perspectivism, Inconsistent Models, and Contrastive Explanation," Studies in History and Philosophy of Science 41, 405–412.
——— (2011) "Scientific Realism and Ontological Relativity," The Monist 94, 157–180.
Dupré, J. (1981) "Natural Kinds and Biological Taxa," Philosophical Review 90, 66–90.
Fraassen, B. van (2008) Scientific Representation: Paradoxes of Perspective, Oxford: Oxford University Press.
Giere, R. (2006) Scientific Perspectivism, Chicago: University of Chicago Press.
Kant, I. ([1781–87] 1997) Critique of Pure Reason, Cambridge: Cambridge University Press.
Kitcher, P. (2001) "Real Realism: The Galilean Strategy," Philosophical Review 110, 151–197.
Massimi, M. (2015a) "Walking the Line: Kuhn between Realism and Relativism," in A. Bokulich and W. Devlin (eds.), Kuhn's Structure of Scientific Revolutions: 50 Years On, Boston Studies in the Philosophy of Science, New York: Springer, 135–152.
——— (2015b) "Working in a New World: Kuhn, Constructivism and Mind-Dependence," Studies in History and Philosophy of Science 50, 83–89.
——— (2016a) "Four Kinds of Perspectival Truth," Philosophy and Phenomenological Research. doi: 10.1111/phpr.12300
——— (2016b) "Three Tales of Scientific Success," Philosophy of Science 83, 757–767.
Mitchell, S. D. and Gronenborn, A. M. (2017) "After Fifty Years, Why Are Protein X-Ray Crystallographers Still in Business?" British Journal for the Philosophy of Science 68, 703–724.
Morrison, M. (2011) "One Phenomenon, Many Models: Inconsistency and Complementarity," Studies in History and Philosophy of Science 42, 342–351.
Putnam, H. (1982) Reason, Truth, and History, Cambridge: Cambridge University Press.
Rorty, R. (1993) "Putnam and the Relativist Menace," Journal of Philosophy 90, 443–446.
Rueger, A. (2005) "Perspectival Models and Theory Unification," British Journal for the Philosophy of Science 56, 579–594.
Teller, P. (2011) "Two Models of Truth," Analysis 71, 465–472.
Worrall, J. (1989) "Structural Realism: The Best of Both Worlds?" Dialectica 43, 99–124.

14 IS PLURALISM COMPATIBLE WITH SCIENTIFIC REALISM?

Hasok Chang

1 Introduction

Scientific realism is usually associated with monism: when we say that science is a search for truth, it is typically assumed that there is only one truth about nature. When put in such a colloquial way, monism is difficult to dispute within a realist framework. However, a more careful look reveals that there are interesting ways in which realism is quite compatible with pluralism. I will even argue that realism for realistic people, or "realistic realism" to borrow Peter Kosso's phrase (1998: 177–178), is inherently a pluralist doctrine, echoing Israel Scheffler's (1999) plea for a pluralist realism (or "plurealism").

This chapter will deliver an argument for pluralist realism, with the secondary aim of providing a critical commentary on various views in the existing literature on relevant issues. My initial focus will be on showing that arguments for realism from the success of science are fully compatible with pluralism and, indeed, conducive to it. Success-based arguments, broadly conceived, still constitute the most popular and most persuasive arguments for scientific realism under the broad umbrella of empiricism,1 despite their well-known difficulties (see Psillos 1999: ch. 4, for an overview). Most people have come to accept the success-based arguments for realism in a fallibilist way, and the important next step is to understand them in a pluralist way. Pluralism has been interpreted in various ways by its advocates in recent decades (e.g. Kellert, Longino, and Waters 2006; Mitchell 2003; Ruphy 2016). What exactly I mean by "pluralism" will be spelled out further later, but the following definition will serve well enough for now: "the doctrine advocating the cultivation of multiple systems of practice in any given field of science" (Chang 2012: 260). A "system of practice" is formed by a coherent set of epistemic activities performed with a view to achieving certain aims.2

I will build up to pluralist scientific realism in two steps. Section 2 will show that the success of science is multi-dimensional and best pursued through a pluralist methodology. Section 3 will argue that the recognition of the plurality of success leads to a metaphysical pluralism if we want to maintain some sort of realist argument based on success. Section 4 will then make a more careful statement of pluralist realism on the basis of a strongly revisionist account of the very notions of truth and reality. Finally, section 5 will provide a perspective on pluralist realism as a practical doctrine for science.

2 The many ways of success

Let us begin by dissecting the notion of success, which is a very vague and general term. Success, taken in its broadest sense, is the satisfaction of any aim held by the relevant agents. It is astonishing that "the success of science" and other similar phrases are used without precise definitions in important arguments by various philosophers, ranging from Larry Laudan (1981) to Hilary Putnam (1975: 73).3

If we adopt any version of the argument for realism from the success of science, the most basic source of pluralism is the plurality of the aims of science. It is broadly agreed that description and understanding can be different types of aims. And the production of new phenomena is also a most important aim of science, quite apart from how well we describe and understand them. The plurality of aims within science gives plurality to the meaning of "success", for different scientists and even for an individual scientist.

There are also various aims of science definable according to the various epistemic values we hold. While Bas van Fraassen (1980: 87) considers empirical adequacy the main aim of science, he does value other, "pragmatic" virtues of a theory, such as mathematical elegance, simplicity, scope, completeness, unifying power, and explanatory power. Van Fraassen, of course, was saying all this in the course of framing his anti-realist position, but most realist scientists and philosophers also consider such aspects of theories important measures of success alongside empirical adequacy per se.

If we should accept the primacy of empirical adequacy and see other virtues as subordinate conditions, that may seem to indicate a basic monism concerning success. However, a closer look immediately reveals that empirical success itself has multiple dimensions. This may be difficult to recognize in a fantasy-world version of perfect empirical adequacy, but in real life it is plain that degrees of empirical success can and should be assessed along various dimensions. In fact, what Thomas Kuhn (1977: 321–322) identified as basic epistemic values shared by all scientists can mostly be understood as dimensions of empirical success. First of all, accuracy expresses the basic idea of empirical fit. What he calls the consistency of a theory is "not only internally or with itself, but also with other currently accepted theories", and the latter are presumably themselves only accepted on the basis of empirical success. Scope indicates the number and variety of observations that a theory can account for.

The picture of empirical success becomes even more complicated if we consider the diachronic dimension, as it brings in what Kuhn calls "fruitfulness". As van Fraassen defines it, empirical adequacy means being correct about all observable phenomena, including what hasn't yet been observed: "all the phenomena . . . not exhausted by those actually observed, nor even by those observed at some time, whether past, present, or future" (1980: 12). If we were to try making actual assessments of a theory's empirical adequacy, that would involve making guesses about its future performance: not only about what the verdict of observations will be on the novel predictions that it has already made but also about what new predictions it might be making in the future and whether those predictions would be confirmed. And does the past track record of a theory in making successful novel predictions matter?
And shouldn't the iterative resilience shown by a theory in improving its scope and precision be seen as an important aspect of its empirical success, possibly even more important than how well it happens to perform at a given moment, which could be due to a simple kind of luck? Such diachronic considerations of empirical success go clearly beyond the snapshot of success taken at any given moment.

Having dissected the notion of success, let us return to the realist attempt to infer truth from success. Realists should be asking: how should we cultivate success so that we may reach truth? An indisputable and anodyne piece of advice would be that we should seek to reach the highest degree of empirical success. But as we have seen, a simple "degree" of success is an illusory concept; successfulness, in science as in life, is not something for which we can have a coherent one-dimensional ordinal measure. Successfulness is something that comes in various shapes as well as degrees. Some unambiguous quantification of successfulness would be required if we were to say "it would be a miracle for the theory to be so successful without being true" – just how successful is "so successful"? What is the threshold level of success beyond which the "no miracles" intuition should kick in? If the answer is "nobody knows", then that ought to be the demise of any simple-minded "no-miracles" argument.

But if the no-miracles-type attempts to infer truth from the past and present empirical success of science must be abandoned, is there any course of argument still open to the realist starting from empirical success? One clear consequence of the multi-dimensional nature of success is a clear warrant for a certain degree of liberality in scientific methodology. If the success of science has many dimensions, it is not likely that various competing scientific systems of practice can be ranked in a single order of successfulness. In that situation it will be very difficult to argue that any particular system of practice is surely the royal road to truth. So it will be difficult to avoid epistemic pluralism, and there will be a methodological dimension to epistemic pluralism, since different systems of practice will typically involve different methods.

Could epistemic pluralism be avoided? There are several possibilities, but none is palatable. It is in principle possible to have a uniquely successful system of practice, more successful than all competing systems in all respects, but this is too unrealistic a hope to entertain in practice. In the absence of an all-round winner, could we have an assurance that any given dimension of empirical success will by itself allow us to infer from it the truth of the whole theory? That would be possible if the various dimensions co-varied reliably, but it is well known that there is often a significant trade-off between different dimensions of empirical success. For example, the trade-off between the scope of a theory's applicability and the precision of its agreement with observations was at the basis of Nancy Cartwright's early view on the relationship between fundamental and phenomenological laws of physics (Cartwright 1983: esp. chs. 3, 6). Or is there a particular dimension of empirical success that is especially truth-conducive, so that realists can focus their pursuit on it in preference to others? The burden of argument is on those who think there is such a privileged dimension. It seems that the prudent thing for realists to do, then, is to engage in a pluralist pursuit of empirical success in all of its dimensions.4

In an ill-defined, shot-in-the-dark quest for truth via success, some basic level of tolerance will be necessary. But can't we do better than that? The short answer is "no", but it is worth our while to explore the shape of that negative answer, because that will actually help us discern the shape that our positive pluralism needs to take. In order to do better here, we would need to know what the world is like at a deep metaphysical level. In a "dappled" or disordered world (Cartwright 1999; Dupré 1993), we should anticipate that scope will conflict with accuracy and accuracy with simplicity.
In contrast, in the kind of world that Einstein envisaged, pursuing the right kind of simplicity would lead us to every other aspect of success. For Imre Lakatos and others who believe that the ability to make successful novel predictions is the sine qua non of a good scientific theory or research program, perhaps there is a deep-seated assumption of an underlying regularity of nature, in which general reliability is indicated by the ability to foresee what will be found in the immediate next bit of the universe that we encounter. But it is very difficult to have confidence about any of these conflicting metaphysical viewpoints, each deeply held by some formidable intellects. The only prudent thing seems to be to allow various metaphysical possibilities and let different scientists pursue different methodological strategies to suit their different metaphysical convictions. For those who lack any metaphysical convictions, the only thing to do is to pursue each dimension of success while recognizing that it might not be compatible with other dimensions of success. For example, "lumpers" and "splitters" have to find some way to co-exist, granting some division of labor into specialist disciplines and sub-disciplines, while at the same time accommodating the most promising projects for theories of broader scope and unificationist and reductionist attempts to link up the disciplines.
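The claim that success admits no one-dimensional ordinal measure can be made vivid with a small toy sketch (my own illustration, not anything from the scientific-realism literature; all names and numbers are invented for the purpose). Two systems of practice with different success profiles may each fail to dominate the other:

    # Toy illustration: success measured along several dimensions need not
    # admit a single ordinal ranking. Names and numbers are invented.
    from dataclasses import dataclass

    @dataclass
    class SuccessProfile:
        accuracy: float      # empirical fit
        scope: float         # range of phenomena accounted for
        fruitfulness: float  # diachronic promise, novel predictions

    def dominates(a: "SuccessProfile", b: "SuccessProfile") -> bool:
        """a dominates b if a scores at least as well on every dimension
        and strictly better on at least one (Pareto dominance)."""
        dims = ("accuracy", "scope", "fruitfulness")
        at_least = all(getattr(a, d) >= getattr(b, d) for d in dims)
        better = any(getattr(a, d) > getattr(b, d) for d in dims)
        return at_least and better

    # A phenomenological system: very accurate within a narrow domain.
    phenomenological = SuccessProfile(accuracy=0.9, scope=0.3, fruitfulness=0.5)
    # A fundamental system: broad scope, less precise agreement.
    fundamental = SuccessProfile(accuracy=0.6, scope=0.9, fruitfulness=0.7)

    print(dominates(phenomenological, fundamental))  # False
    print(dominates(fundamental, phenomenological))  # False

Neither profile dominates the other; any overall ranking would require an extra-empirical weighting of the dimensions, which is precisely what is at issue in the argument above.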

3 From pluralistic success to metaphysical pluralism

So far I have argued: empirical success is multidimensional, and epistemic or methodological pluralism is the correct attitude for the pursuit of success as a proximate aim in the pursuit of truth. It is now time to consider more carefully the nature of the ultimate realist aim. So just what can we infer from empirical success, if the success is substantial enough? The monist–realist answer will be that even though empirical success is multifarious, it should still point us to the unique truth about the world. That way, we may retain a basic metaphysical monism while accepting methodological pluralism, and this metaphysical monism can maintain its usual link with realism. I will argue, on the contrary, that scientific realism is best served by a certain kind of metaphysical pluralism as well as methodological pluralism. Even though the yearning for metaphysical monism (or at least an anxious attachment to it) is deep and widespread, I believe that realists can productively overcome it.

If we accept that success is multidimensional, then there is a distinct possibility that there might be quite successful theories that contradict each other. If we apply a crude and straightforward version of the inference from success to truth, then we must conclude that the mutually contradictory theories are each true. Does this mean that we must give up a monist notion of truth? I do ultimately want to argue for a pluralist reconceptualization of "truth", but there is no need to jump to such a conclusion just yet. The less drastic step, of course, is to reject a simplistic inference from success to truth. There is nothing problematic about such a rejection – after all, the crude inference from success to truth is at best a colossal case of the logical fallacy of affirming the consequent. And as Timothy Lyons (2003: sec. 3) shows, it is not even as good as that, since truth does not guarantee success, either, unless conditions (such as auxiliary hypotheses) are right. And the pessimistic induction, best read as an irony verging on a reductio,5 remains an embarrassing reminder of the general invalidity of this inference. What I want to advance here, on top of the pessimistic induction, is another piece of ironic caution: if the success-to-truth inference works too well, the monist realist should be worried, as long as success is pluralistic. One could say, of course, that inferring truth from pluralistic success does not lead to contradictions because what is inferred in each case is only approximate truth. But approximate truth is toothless if two "approximate truths" can freely contradict each other; it is not really worth having from a monist–realist point of view. Again, I will revisit the notion of truth later, but I think we must start by re-training our sights onto other realist end points that are more realistic than truth.

So, is there something other than truth that we might be able to infer more plausibly from empirical success? Here Ian Hacking's (1983) "experimental realism" offers an attractive prospect. According to Hacking, the most impressive kind of scientific success can be found in specific practical operations that we perform in the laboratory, and the assumptions that enable such interventions are particularly deserving of realist credence. I want to re-conceptualize Hacking's own presentation slightly, as an argument from success.
The success of the positron-spraying operation, for example, gives credence to the assumption that positrons exist, because without the reality of positrons it would be very difficult to explain how the positron-spraying operation can be successful.6

Confidence about this solution then gives Hacking license to be relaxed about the problem of the underdetermination of theory by evidence. Don't worry, he tells us, about which of the various theories is really true, because we won't know. But that doesn't really matter, since there is something we can know about for sure, namely what kinds of things exist. So "experimental realism" ends up in "entity realism" as opposed to "theory realism". Now, even though Hacking's position has some clear difficulties (see Resnick 1994), at least it seems to generate confidence about the most essential kind of thing we want in realism, namely the reality of some basic theoretical entities (many of which are unobservable according to van Fraassen's definition).

What seems to be ignored by Hacking and his critics alike, however, is the fact that entity realism is subject to the pessimistic induction just as much as theory realism is. The history of science is full of very successful practical interventions using, in the minds of the experimenters themselves, entities that we now do not regard as real. The history of phlogiston (i.e. the substantive embodiment of inflammability and metallicity) provides an excellent illustration of this point (Chang 2012: ch. 1, esp. 53–54). The phlogiston theory made sense of a key aspect of age-old smelting techniques, in which calx (metal oxide, in modern terms) was transformed into metal by taking up phlogiston from a combustible (i.e. phlogiston-rich) substance such as charcoal. Even Kant greatly admired Georg Ernst Stahl's laboratory operations that transformed one substance into another and back to the original one by giving phlogiston to it and taking it back out. Commenting on Joseph Priestley's experimental achievements in phlogiston chemistry, Matthew Boulton (James Watt's business partner) wrote excitedly to Josiah Wedgwood (the famous porcelain maker) in 1782: "We have long talked of phlogiston without knowing what we talked about, but now Dr Priestley hath brought ye matter to light. We can pour that Element out of one Vessell into another, can tell how much of it by accurate measure is necessary to reduce a Calx to a Metal . . . ." The most impressive success was Priestley's successful prediction that a calx can be transformed into metal when it is heated in an enclosed space containing "inflammable air" (hydrogen), which he at the time thought was pure phlogiston. Now, isn't this just the kind of experimental intervention that Hacking has in mind as the basis of his experimental realism? Doesn't Priestley's experiment count as a successful "spraying" of phlogiston (and a novel prediction to boot)?

The lesson here is not that phlogiston is a weird and exceptional case. Other similar examples abound, in fact involving many of the entities featuring in Laudan's pessimistic-induction argument. When William Herschel discovered infrared radiation coming from the sun in 1800, he thought he was using the prism to separate rays of caloric (heat) from rays of light in the sunbeam; the caloric rays in the dark part of the solar spectrum beyond the red were literally sprayed onto a thermometer, which duly rose (Chang and Leonelli 2005). Count Rumford, pioneer of the kinetic theory of heat, believed in the existence of "frigorific" radiation, which he reflected and focused using metallic mirrors to cool things down effectively (Chang 2002).
For a modern example, take electron orbitals: not only a great deal of theoretical reasoning but also numerous experimental interventions rely on chemists' detailed knowledge of the number and shapes of various types of orbitals and of the overlaps between them, while orbitals have no reality if we take quantum mechanics literally (Ogilvie 1990).

All of these examples may appear to show that experimental success is no guarantee of the reality of the entities that the experimenters in question presume to be manipulating when they carry out their experiments. You may be able to spray something without knowing much at all about what it is that you're spraying. So are we back to square one, with Hacking's attempt to save realism all in vain? My take on this is just the opposite. Rather than despairing about Hacking's experimental realism because it would rule phlogiston in, I suggest that we should continue to appreciate the cogency of Hacking's position but learn to accept that phlogiston is real, as are caloric, frigorific rays, and orbitals – at least in the same kind of way in which any other unobservable theoretical entity can be regarded as real.

It is useful to recall another part of Hacking's argument here. Facing those who would doubt that success in practical intervention can form a secure enough basis for our knowledge of unobservable reality, Hacking hits the ball right back into their court by asking: why do you think anything is real? Forget positrons and phlogiston for the moment, and think about what philosophers fondly call "medium-sized dry goods": tables and chairs or cats and dogs and the like. Why are we so sure that those things are real? In that context Hacking (1983: 189) refers to an unexpectedly instructive source: George Berkeley's An Essay Towards a New Theory of Vision, which points out that our 3-D vision only works by a conjunction of the senses of vision and muscular tension (involved in moving around, in handling objects, and also in focusing our eyes at various distances). Hacking takes Berkeley to have given a prescient emphasis on the role of hands-on intervention in perception. I also want to take it with a slightly different emphasis, on the convergence of different modalities of sensation. So I would like to conclude that phlogiston is as real as tables and chairs and cats and dogs; the concept of phlogiston was able to support a convergent kind of manipulability, as capable as the concept of "cat" of facilitating a coherent set of epistemic activities (including, though not exclusively, those of the interventionist kind). And if phlogiston is real, so is caloric, even though the phlogiston theory and the caloric theory say mutually contradictory things, for instance about what happens in combustion. These are strong conclusions, which require better articulation and a more careful defence, which I will attempt to give in the next section.

4 Pragmatist coherence as the basis of reality and truth

If we grant the reality of phlogiston and positrons in the same sense as the reality of "medium-sized dry goods", we also need to ask: what exactly do we mean when we say that tables and chairs are real? It is easy enough to agree that "Reality" is that mind-independent something which has the capacity to resist our attempts to deal with it in any way we want. But this sense of Reality (which I will write with a capital R) is not useful for scientific realism, because it is only as specifiable as the Kantian thing-in-itself, about which we can and should say nothing. Elsewhere I have advanced a doctrine of "active realism", a commitment to learn as much as we can about Reality (Chang 2012: ch. 4). But, like Nelson Goodman (1978: 4), I have come to recognize that it hardly makes sense to say that we learn anything "about" Reality if Reality is something we cannot say anything about. At best, we could say that we learn from Reality, or rather, from our experience of being in contact with it; even that is a metaphorical way of trying to express what is inexpressible.

When Hacking says positrons are real or when I say phlogiston is real, the sense of it is that a specific manifestation of that unspecifiable overall Reality is somehow being captured in our conceptions. "Manifestation" is a clumsy metaphor, but it is better than the metaphor of "representation", which only makes sense when we have access to both the represented and the representing sides. What I am trying to express by the image of "manifestation" is, in somewhat Kantian terms, the ineffable causation of phenomena by noumena via the cognitive agent, which is an entirely different sort of thing from the causation of one phenomenon by another. And what we do when we parse out Reality for our cognitive purposes is to look for little parcels of reality that seem to share the same character of mind-independence as the whole Reality. As Charles Sanders Peirce put it, the "fundamental hypothesis" of the method of science is that "there are real things, whose characters are entirely independent of our own opinions about them" (quoted in Scheffler 1999: 425). Those parcels constitute the identifiable components of our phenomenal world. Those "realities" (which I will write with a lowercase r) are what we sensibly and expressibly regard as "real" and attribute "reality" to.

This parsing out of Reality into operational realities is crucial for any kind of cognitive activity. If we cannot identify sensible parts (or aspects) of nature, we cannot say anything intelligible, make any kind of analysis, or engage with nature in any specific and directed way. In that sense, entity realism (or at least a sort of property realism) is prior to any truth realism (i.e. realism concerning the truth of the statements that we make about the entities in question). So we have no choice but to worry about whether we are able to do the parsing well. Treating some things as real and others as not is an important part of how we live in the material world; it is not something we can do without, any more than we can afford to live on the basis of global skepticism. We have to regard some things as real while admitting that our judgment in that regard is fallible.

I would like to propose what I will call the "coherence theory of reality", which has some parallels to the coherence theory of truth as it is usually understood but also some significant differences. First of all, what I mean by "coherence" here is not mere consistency. Rather, it is a pragmatist notion indicating a fitting together of epistemic activities that enables the successful achievement of certain aims. (I will sometimes speak as if coherence and success were synonymous; to be more precise, coherence is not success itself but what enables success.) As it is a relationship between epistemic activities, coherence is not reducible to the logical consistency of the propositions involved in the activities, though it would often be helped by consistency. Coherence is an attribute of a set of epistemic activities, which, together, can be said to form a system of practice if there is sufficient coherence among them (see Chang 2014 and Chang 2017 for further discussion).

With this notion of coherence, I propose: we should, and usually do, consider as real the presumed referents of concepts that play a significant role in a coherent system of practice. This notion of reality is context-dependent, or rather, system-dependent.7 For example, constellations are real within traditional systems of positional astronomy but not in modern cosmology; the unique atomic weight of a chemical element is real in most chemical contexts but not in nuclear physics. Reality comes in degrees, it is defeasible, and it is continuous with everyday usage, as in "Ghosts aren't real". The demand for coherence rules out many things but also rules in many things. In the absence of anything else we might operationally mean by "real", and with the recognition that such a concept of reality is not something we can do without, we should have the courage to admit that a lot of different things are real – or the audacity to embrace a notion of reality (or real-ness, or existence) according to which many different kinds of things are real, even if the concepts pointing to them belong to mutually incommensurable systems of practice. This is a similar attitude to John Dupré's (1993) "promiscuous realism".

Add the coherence theory of reality to epistemic pluralism, and we get a modest sort of metaphysical pluralism. In the effort to discern the theoretical elements responsible for success, we should be open-minded and generous to the investigators involved.
That means granting reality, provisionally and defeasibly, to whatever theoretical conceptions have actually been needed for our success. This is what we ought to do if we really take success as our only reliable guide in deciding what to be realist about. We should regard various different sorts of things as real because there are various coherent systems of practice featuring these things. What I am advocating here is a modest and piecemeal practice of naturalist metaphysics: not giving a grand view of "how the world is" arising from a priori reflections, but allowing metaphysical beliefs to emerge from well-established practices.

If we accept that pragmatist coherence is a good criterion of reality, then it will be very difficult to avoid metaphysical pluralism. That is a matter of contingent fact rather than a matter of a priori necessity. It just turns out that we can achieve empirical success by employing many different kinds of presumed entities. Moreover, it often happens that even within a given domain there are many success-bearing entities that cannot easily be conceived in terms of each other, such as phlogiston versus oxygen, or caloric fluid versus frigorific radiation versus molecular kinetic energy, or electrons neatly pigeonholed into orbitals versus a "gas" of mutually indistinguishable electrons. I believe that a similar insight is shown in Niels Bohr's interpretation of wave–particle duality in quantum physics and in his doctrine of complementarity more generally. If we want to stay true to the spirit of Hacking-style realism, then we should admit the reality of all of these entities that are apparently mutually incompatible. That is the metaphysical pluralism we arrive at by following both the best scientific practice and the most secure philosophical reasoning we have available.

But do we not get into contradictions with such pluralism? My short answer is "no", but it needs to be spelled out carefully, at several levels, as I will illustrate through the case of oxygen and phlogiston. (1) No plausible definition of "phlogiston" and "oxygen" can give us a logical deduction that "phlogiston is real" implies "oxygen is not real," or vice versa. So there is no direct logical contradiction in affirming that both entities are real. (2) There were in fact chemists who, in the midst of the Chemical Revolution, made coherent hybrid systems of chemistry which affirmed the reality of both, using oxygen in the tracking of weights in chemical reactions and phlogiston in the explanation of what we now recognize as energy relations (see Chang 2012: 32, and references therein). (3) The phlogistonist system of chemistry and the oxygenist system of chemistry in their common versions did contain some mutually contradictory statements (such as "Water is an element" in one and "Water is a compound" in the other). However, if we sufficiently dissect the meanings of "element" and "compound" in those sentences, we find that semantic incommensurability prevents any direct contradiction (see Chang 2012: 208–212; cf. Goodman 1978: 110). And even if there are some genuinely contradictory statements between the systems, the contradiction does not necessarily carry over to the statements directly implicated in generating our confidence about the reality of phlogiston and oxygen. Recall the basic lesson from Hacking: our judgment about the reality of an entity is not tied to all the theoretical statements that we may make about it.

Having addressed some immediate worries about plurality, let us now turn to a more positive appreciation of it. All theories facilitating successful epistemic activities provide ways of learning from engagement with Reality and should be maintained and developed as much as possible, even if they conflict with each other in some ways. What we regard as responsible for success should be preserved so that it may continue giving us that success. From the active-realist point of view mentioned earlier (Chang 2012: ch. 4), what success gives is merely a credible promise of more success, a promise accepted with our eyes wide open to the problem of induction. When a theory has, time and again, supported successful empirical activities, it makes sense to keep that theory for future use. That is a modest and reasonable argument for preserving a successful theory.
Such preservation, because it is firmly rooted in practical experience and the kind of basic induction that Hume taught us we cannot do without, should be robust in the face of another theory that does something else well (or even the same thing well in a different way) and thereby deserves its own credence. All these considerations lead to a policy that I have labeled "conservationist pluralism" (Chang 2012: 224): retain previously successful systems of practice for what they were (and are) still good at, and add new systems that will help us identify and create new realities, making new and fresh contacts with Reality.

With conservationist pluralism we can, once again, understand the progress of science as cumulative: not an accumulation of simple unalterable facts from which more and more general theories would be formed, but an accumulation of various locally effective systems of practice which somehow continue to be successful. Contrary to what one might expect, physics shows this pattern of development more starkly than any other science. If we examine what physicists (and others who use physics) actually do, it is easy to recognize that they have retained various successful systems that are good in particular domains: geocentrism (for navigation), Newtonian mechanics (for other terrestrial activities and for space travel within the solar system), ordinary quantum mechanics (for much microphysics and almost all quantum chemistry), as well as special and general relativity, quantum field theory, and more recent theories. For those who would feel upset by the suggestion that the applicability of, say, general relativity is "local", I can only point to all the situations in which we would not dream of using general relativity (including rocket science, though not GPS), as well as the fact that there have been very few specific empirical tests of general relativity.

Realism as I am advocating it here is not the search for an ultimate kind of truth, "Truth with a capital T", concerning the Kantian ding an sich, which we will never be able to infer from success (or from anything else). But a more modest kind of truth is within our reach, and its pursuit has some crucial usefulness. If our use of a theory has led to successful outcomes, and if this is not the result of some strange coincidence as far as we are able to ascertain, then we can and should say, modestly and provisionally, that the relevant statements made in this theory are "true" in the same sense as we say that it is true that rabbits have whiskers and live in underground burrows. This "truth" is of the operational, verifiable kind, and it is the same thing as empirical confirmation taken in a broad sense.8

Such intuitions about what we might call "truth" point to a coherence theory of truth, but, again, taking "coherence" not as mere logical consistency but as an effective fitting together of epistemic activities (see Chang 2017). We may say that a statement is true if it plays a significant role in a coherent system of practice. It is crucial to note that such pragmatist coherence carries within it the constraint imposed by Reality. This gives to my notion of coherence-truth the mark of mind-independence that realists value most in the correspondence notion of truth. At the same time, mine is an operable notion of truth that can be a worthy and realistic object of pursuit. The pragmatist move I am making on the notion of truth is not the same as watering it down into "approximate truth". Coherence-truth can be exact, too. Though it does come in degrees, the degree of it does not indicate an "approximation" to a fixed point. And it is an "internal" notion, meaningful within a system of practice, not without it. Such a conception of truth easily allows plurality while not allowing arbitrariness.

5 Pluralism in the practice of realism

What difference does realism make? As Arthur Fine (1984: 97) has articulated, an important source of disappointment with the brand of scientific realism usually promoted by philosophers is that it basically amounts to a "desk-thumping, foot-stamping shout of 'really!'" about what scientists have already affirmed. That sort of realism does not make any difference to what scientists do. I aspire to construe realism as an ideology that one can actually put into practice. Realism in this sense is a living doctrine that can guide the evolution of scientific knowledge.

"Active realism" is a commitment to learn as much as possible from Reality. If we want to honor that commitment, what should we do? Is the work done by scientists who take a realist attitude any different from the work done by scientists who do not? Realists and anti-realists have certainly clashed within science – recall Mach versus Planck on the existence of atoms (Blackmore 1992). But not all such disagreements have been very meaningful for scientific practice. A scientists' shouting match about realism is no more productive than a philosophers' shouting match. And in some other cases, such as much of 19th-century organic chemistry, realists and anti-realists did exactly the same work, with an appropriate suspension of (dis)belief.


It is useful to take note of actual scientific practices that are contrary to active realism, as a reminder that active realism is not an empty doctrine in practice. At the rationalist end of non-realism, we may observe a clear lack of concern to make empirical contact with Reality, which can happen even while the practitioners are shouting about truth. Would string theory be an example of this? At least it has been accused of being one (for a nuanced discussion, see Dawid 2013). At the positivist end of non-realism, there may be a refusal to theorize about unobservables. Plenty of good and creative scientific research has actually been done in such an anti-realist manner, for example by Fourier on heat conduction or Carnot on heat engines or Mach on acoustics, but this does constitute a restraint on research.

The active-realist attitude is different. It is to make hypotheses about unobservable nature, to take the trouble of deriving new predictions about their observable consequences, to create new observational capacities, to ask new questions, to create new concepts, to enhance theoretical connections (by unification or reduction), and so on – and to take all these activities as far as possible. Here we can see realism and “constructivism” dovetailing in an unexpected way, which helps us make sense of statements of realism such as the following by Ludwig Boltzmann from 1890: “According to my feeling, the task of theory lies in the construction of an image of the exterior world, which exists only in our mind and should serve to guide us in all our thoughts and all our experiments” (quoted in Nye 1972: 20).

Realism, as I see it, is not an ampliative doctrine in the sense of somehow philosophically inferring from empirical success something that is more than empirical success. Rather, it is a truly “ampliative” doctrine in the sense of encouraging the creation of more knowledge about more reality. In this drive toward the maximal creation of knowledge, plurality is an unproblematic consequence. Various successes of science will easily lead us to the knowledge of various realities (or various versions or aspects of Reality). Such abundance of knowledge and reality should not embarrass us, and we may create more knowledge yet again if we can establish links between the different realities. True realism will pursue knowledge unafraid of plurality, freed up from the unnecessary constraints of monism.

Notes
1 Non-empirical access to reality (anything from divine revelation to mathematical Platonism) would allow other kinds of arguments for realism.
2 See Chang (2012: 15–16), where I also define an epistemic activity as “a more-or-less coherent set of mental or physical operations that are intended to contribute to the production or improvement of knowledge in a particular way, in accordance with some discernible rules (though the rules may be unarticulated)”. See also Chang (2014: 71ff.).
3 James Brown (1985: 49–50) makes a creditable effort to define what success means. K. Brad Wray (2013: 1725–1727) notes that scientific success has been a “moving target” and not easily defined in a way that would support the success-to-truth inference.
4 And, of course, for non-realist purposes, it is easily seen that each dimension of empirical success is worth pursuing as a goal in itself.
5 If we assume that success implies truth, then later and better success will often deprive earlier success of its “true” status. Wray (2013: 1720) notes that Psillos and Saatsi among others have read the pessimistic induction in this way.
6 Hacking (1983: 203) himself invokes a kind of no-miracles argument when he says that to attribute the success of modern interventionist microscopy to coincidence would be to invoke a “Cartesian demon of the microscope”.
7 There is certainly some affinity between my position and Hilary Putnam’s “internal realism” articulated in, e.g., Putnam (1981).

8 This is what I have designated as “truth5” in Chang (2012: 242).


References
Blackmore, J. T. (ed.) (1992) Ernst Mach – a Deeper Look, Dordrecht: Kluwer.
Brown, J. R. (1985) “Explaining the Success of Science,” Ratio 27, 49–66.
Cartwright, N. (1983) How the Laws of Physics Lie, Oxford: Clarendon Press.
——— (1999) The Dappled World: A Study of the Boundaries of Science, Cambridge: Cambridge University Press.
Chang, H. (2002) “Rumford and the Reflection of Radiant Cold: Historical Reflections and Metaphysical Reflexes,” Physics in Perspective 4, 127–169.

——— (2012) Is Water H2O? Evidence, Realism and Pluralism, Dordrecht: Springer.
——— (2014) “Epistemic Activities and Systems of Practice: Units of Analysis in Philosophy of Science after the Practice Turn,” in L. Soler, S. Zwart, M. Lynch and V. Israel-Jost (eds.), Science After the Practice Turn in the Philosophy, History and Social Studies of Science, London and Abingdon: Routledge, pp. 67–79.
——— (2017) “Operational Coherence as the Source of Truth,” Proceedings of the Aristotelian Society 117, 103–122.
Chang, H. and Leonelli, S. (2005) “Infrared Metaphysics: The Elusive Ontology of Radiation (Part 1); Infrared Metaphysics: Radiation and Theory-Choice (Part 2),” Studies in History and Philosophy of Science 36, 477–508, 686–705.
Dawid, R. (2013) String Theory and the Scientific Method, Cambridge: Cambridge University Press.
Dupré, J. (1993) The Disorder of Things: Metaphysical Foundations of the Disunity of Science, Cambridge, MA: Harvard University Press.
Fine, A. (1984) “The Natural Ontological Attitude,” in J. Leplin (ed.), Scientific Realism, Berkeley and Los Angeles: The University of California Press, pp. 83–107.
Fraassen, B. van (1980) The Scientific Image, Oxford: Clarendon Press.
Goodman, N. (1978) Ways of Worldmaking, Indianapolis: Hackett.
Hacking, I. (1983) Representing and Intervening, Cambridge: Cambridge University Press.
Kellert, S. H., Longino, H. E. and Waters, C. K. (eds.) (2006) Scientific Pluralism, Minneapolis: University of Minnesota Press.
Kosso, P. (1998) Appearance and Reality: An Introduction to the Philosophy of Physics, New York and Oxford: Oxford University Press.
Kuhn, T. S. (1977) “Objectivity, Value Judgment and Theory Choice,” in T. S. Kuhn, The Essential Tension, Chicago: University of Chicago Press, pp. 320–339.
Laudan, L. (1981) “A Confutation of Convergent Realism,” Philosophy of Science 48, 19–49.
Lyons, T. D. (2003) “Explaining the Success of a Scientific Theory,” Philosophy of Science 70, 891–901.
Mitchell, S. D. (2003) Biological Complexity and Integrative Pluralism, Cambridge: Cambridge University Press.
Nye, M. J. (1972) Molecular Reality: A Perspective on the Scientific Work of Jean Perrin, New York: American Elsevier.
Ogilvie, J. F. (1990) “The Nature of the Chemical Bond—1990: There Are No Such Things as Orbitals!,” Journal of Chemical Education 67, 280–289.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London and New York: Routledge.
Putnam, H. (1975) “What Is Mathematical Truth?” in Mathematics, Matter and Method (Philosophical Papers, Vol. 1), Cambridge: Cambridge University Press, pp. 60–78.
——— (1981) Reason, Truth, and History, Cambridge: Cambridge University Press.
Resnik, D. (1994) “Hacking’s Experimental Realism,” Canadian Journal of Philosophy 24, 395–412.
Ruphy, S. (2016) Scientific Pluralism Reconsidered, Pittsburgh: University of Pittsburgh Press.
Scheffler, I. (1999) “A Plea for Plurealism,” Transactions of the Charles S. Peirce Society 35, 425–436.
Wray, K. B. (2013) “Success and Truth in the Realism/Anti-Realism Debate,” Synthese 190, 1719–1729.

15 SCIENTIFIC PROGRESS

Ilkka Niiniluoto

1 Introduction

Scientific realists and antirealists largely agree that science is in many ways successful in explaining and predicting observable phenomena and in guiding human action. While antirealists suggest that such empirical and pragmatic success is the aim of science, realists argue by abductive reasoning that such success is an indicator of the ability of scientific theories to give true or truthlike representations of unobservable reality. This difference is reflected in their accounts of scientific progress: while antirealists define progress in terms of empirical success or practical problem solving, realists characterize progress by using some truth-related criteria. This chapter defends the view that the notion of truthlikeness or verisimilitude provides the realist with the best toolbox for characterizing different aspects of scientific progress and responding to rival antirealist approaches.

2 Philosophical problems of progress

Empirical studies of the growth and advancement of science may concern, for example, the education and skills of the scientists, the improved methods of scientific inquiry, the funding and organization of research, the number of refereed scientific publications, and the impact of science on social progress with new technologies and innovations, economic prosperity, quality of life, and justice in society. Such accounts of educational, methodical, economical, and institutional progress in science have to be distinguished from the task of the philosophy of science to analyze science from a cognitive perspective as an attempt to improve and increase scientific knowledge.

Historically speaking, the philosophical problems of scientific progress were preceded by two important debates about human knowledge, and – as we will see – these issues are intertwined even today. First, in ancient Greece, Plato defined knowledge (Gr. episteme) as true belief with justification, and his student Aristotle added that, besides knowing that, science is interested in explanation or knowing why. They were attacked by the skeptics, who argued that knowledge is impossible: truths cannot be formulated in human languages, and the evidence for any claim and its negation will be equally strong. This is the skeptical problem of knowledge.


Secondly, the nature of physical and astronomical theories was debated between the realists and the instrumentalists: the former required that such theories are true or false, the latter that they merely “save the phenomena”. While the Aristotelians were realists, Ptolemy’s geocentric theory was often interpreted in instrumentalist terms. The controversy was highlighted by the Copernican revolution. Pierre Duhem, who in 1908 gave a historical account of this debate, argued that “the hypotheses of physics are mere mathematical contrivances for the purpose of saving the phenomena”, and “logic sides” with the instrumentalists Osiander, Bellarmine, and Urban VIII, not with the realists Kepler and Galileo (see Duhem 1969).

The dominant view of science in the Middle Ages was static: scientia is the possession and contemplation of knowledge, and its method is needed to organize the already-established truths for the purpose of teaching. However, the idea that science is a cumulative effort of successive generations gradually gained support. In the early 17th century, Francis Bacon and René Descartes advocated the dynamic view that the aim of science is to find new truths by inquiry. After this shift in the aims of science, it became relevant to raise the conceptual problem about scientific progress: What is meant by progress in science?

Both the empiricist Bacon and the rationalist Descartes adhered to the requirement that scientific knowledge is completely certain to be true – at least after the employment of right inductive or deductive methods – and hence their answer to the conceptual problem was: scientific progress is the accumulation of knowledge. This ideal of constant growth of certified knowledge was for a long time the most popular view of science. For example, George Sarton, the first professor of history of science at Harvard University, claimed in 1936 that “the acquisition and systematization of positive knowledge are the only human activities which are truly cumulative and progressive” (see Sarton 1936: 5).

The conceptual problem of progress cannot be answered simply by studying the actual processes of science. It is natural to expect that major historical examples of scientific advancement have been progressive, but we should not assert this a priori for all cases, since at least temporary regress is also a possibility. In this sense, ‘progress’ is an axiological or a normative concept which should be distinguished from neutral descriptive terms like ‘change’ and ‘development’. To say that the step from stage A to stage B constitutes progress means that B is an improvement of A, or better than A relative to some standards or criteria. In particular, the notion of scientific progress should be based on the aims of good science, and the task of specifying such values and aims is a genuinely philosophical endeavor.

The cumulative view of progress is based on the traditional idea that science seeks and finds knowledge, which is both true and justified. But skeptical arguments leave room for the doubt that we do not have foolproof methods of showing that truth has been reached in particular cases. The history of modern science also shows that old and once generally accepted theories may be overthrown by new ideas and approaches. Even Sarton (1936: 8), who endorsed the cumulative view of progress, added that “any branch of science may be completely revolutionized at any time by a discovery necessitating a radically new approach to the subject”.
These observations motivate the methodological problem about scientific progress: how can we recognize progressive changes in science? What are the most reliable indicators of scientific progress? This problem is again philosophical, since its answer depends on epistemological views about the best methods of knowledge acquisition. Relative to solutions of the conceptual problem (i.e., definition of progress) and the methodological problem (i.e., indicators of progress), one can study the factual problem about progress: Has science made and will it make progress? Retrospectively this question may concern past instances in the history of science, and prospectively it gives advice to those who wish to promote scientific progress. In these ways, the philosophical accounts of scientific progress are relevant to other fields of science studies and science policy.


3 Fallibilism and truthlikeness

Progress as accumulation of knowledge combines the realist interpretation of theories with the rejection of all skeptical doubts. Let KH mean that theory H is known in the sense that H is true, justified, and accepted by the scientific community. If new knowledge KG is added without giving up H, then we have KH & KG, which is equivalent to K(H&G). So in the typical step of accumulation the later theory H&G entails the former theory H. The special case, where H is a tautology, covers progress from ignorance to a known theory.

Critics argue that the accumulation of knowledge presupposes an outdated epistemology which is committed to the quest for certainty. An alternative via media between dogmatism and skepticism, the first steps of which were taken already in Plato’s later Academy, was called fallibilism by Charles S. Peirce in the late 19th century. According to fallibilism, all factual human knowledge is uncertain or corrigible. Some pragmatists and intuitionists concluded that the classical realist notion of truth should be replaced by an epistemic surrogate, such as verification or warranted assertibility, and semantical antirealists have tried to find rescue from skepticism by denying unknown or recognition-transcendent truths altogether. The challenge to scientific realism has been to combine fallibilism with the view that truth is defined by a relation of correspondence with reality, which also means that many truths are at least so far unknown and targets of open cognitive problems. Such fallibilist versions of realism have been developed in weaker and stronger forms.

Fallibilism in the weak sense analyzes uncertainty in terms of epistemic probabilities, developed since the 18th century by the Bayesian school. Here the epistemic posterior probability P(H/E) of theory H given evidence E expresses a rational degree of belief in the truth of H on E. For weak fallibilists, theories can be at best probably true given the observational evidence. Thus, scientific knowledge may be true, even though we cannot be completely sure of this in any particular instance.

Fallibilism in the strong sense is ready to assert that typically theories are false, but still one hypothetical (even false) theory may be ‘closer to the truth’ than another theory. Sometimes theories are known to be false, so that their probability is zero, since they contain approximations and counterfactual idealizations. An early formulation of this view was given by Nicholas Cusanus in 1440. According to his simile, the relation of our finite intellect to the infinite truth is like that of a polygon to a circle: by multiplying the number of angles, the polygon will approach the circle without becoming identical with it. Algebraic examples of convergence are given by sequences of numbers which asymptotically approach a limit value. In the 18th century several philosophers observed that convergence to the truth may be modelled by iterative methods of solving mathematical equations. Here the truth is not reached in a finite number of steps but only at an ideal limit. Peirce himself, who also used weakly fallibilist formulations of his views, characterized truth at the limit of endless inquiry: “the opinion which is fated to be ultimately agreed by all who investigate, is what we mean by truth” (CP 5.407), and science “approaches to the truth” with its “self-corrective method” (CP 5.575).
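The weak fallibilist’s central quantity, mentioned above, can be put on display with the standard Bayesian formula (a textbook reminder, not part of Niiniluoto’s text):

\[
P(H/E) \;=\; \frac{P(E/H)\,P(H)}{P(E)}.
\]

Evidence E raises the rational degree of belief in H exactly when P(E/H) > P(E); but since P(H/E) < 1 whenever P(E/~H) > 0 and P(~H) > 0, even strongly confirmed theories remain, for the weak fallibilist, at best probably true.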
Laudan (1973) claims that Peirce “trivialized” the self-corrective thesis: if truth is defined as the limit of inquiry, then the thesis that science approaches to the truth turns out to be a tautology. It is, indeed, correct that truth should not be defined as the limit of scientific opinion, as some ‘convergent realists’ propose on the basis of a consensus theory of truth. Peirce in fact saw this when he remarked that science is “destined” to converge to the truth only with high probability (CP 4.547n; cf. Niiniluoto 1984: 82), which implies that self-correction is a factual thesis about successful science.

Karl Popper’s Logik der Forschung in 1934 and its English translation The Logic of Scientific Discovery in 1959 attacked weak forms of fallibilism by asserting that we can never prove that a theory is true or probable.

Popper’s insight was that science is interested in information content, that is, bold rather than logically weak hypotheses. Even though he sharply criticized Carnap’s inductive logic, it turned out that his basic idea can be at least partly formulated within a probabilistic framework of cognitive decision theory. The basic idea, due to Carl G. Hempel, is to treat the values of science – such as truth, information, explanatory power, predictive power, and simplicity – as epistemic utilities which scientists are trying to maximize under uncertainty.

Let D be a partition of hypotheses into mutually exclusive and jointly exhaustive alternatives. Then one and only one of the elements of D is true. Let us denote this unknown truth by C*. The identification of this truth defines a cognitive problem, where C* is the target, the elements Ci of D are complete answers, and disjunctions of elements of D are partial answers. In the simplest case D has two members {C1, ~C1}. Let u(H,Ci) be the utility of accepting H, when Ci is true. Then the expected utility of accepting H on evidence E is

(1) U(H/E) = ∑ P(Ci/E) u(H,Ci),

where the sum goes over all complete answers Ci in D. Then the best partial answer H given E is the one which maximizes the expected utility (1).

This approach can be interpreted in terms of scientific progress in the following way. The basic epistemic utility function defines conceptually what is meant by progress, and its expected value tells what is methodologically the best way of recognizing progress. On this basis we can distinguish real progress (relative to the unknown target C*) and estimated progress (relative to evidence E):

(RP) Step from H to H’ is progressive if and only if u(H,C*) < u(H’,C*).
(EP) Step from H to H’ seems progressive on E if and only if U(H/E) < U(H’/E).
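A minimal worked case may help fix ideas (the numbers are invented for illustration, anticipating the truth-value utility discussed in the next paragraph). Let D = {C1, ~C1}, let u(H,Ci) = 1 if Ci is a disjunct of H and 0 otherwise, and suppose P(C1/E) = 0.8. Then by (1):

\[
U(C_1/E) = 0.8 \cdot 1 + 0.2 \cdot 0 = 0.8, \qquad U(\neg C_1/E) = 0.8 \cdot 0 + 0.2 \cdot 1 = 0.2.
\]

By (EP) the step from ~C1 to C1 seems progressive on E; and if the unknown target is C* = C1, the step is also really progressive by (RP), since u(~C1,C1) = 0 < 1 = u(C1,C1). Note that the tautology C1 ∨ ~C1 would score U = 1, which illustrates the conservatism of this utility discussed below.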

Regressive developments can be defined by reversing the inequalities.

Suppose first that the basic epistemic utility is truth and nothing but the truth. Then it is natural to define u(H,Ci) as 1, if Ci is a disjunct of H, and 0 otherwise. In other words, u(H,Ci) equals the truth value of H, assuming that Ci is true. Then by (1) the expected utility U(H/E) is the sum of P(Ci/E) for the disjuncts Ci of H, which equals P(H/E). Thus, posterior probability equals expected truth value. This choice leads to an extremely conservative policy of favoring hypotheses with P(H/E) = 1, that is, completely certain tautologies or statements entailed by the evidence E. The only really progressive steps are from a falsity to a truth, and a step from H to H’ seems progressive on E if and only if P(H/E) < P(H’/E). This reconfirms Popper’s critique of weak fallibilism.

Secondly, take the basic utility to be information content, measured by cont(H) = P(~H) = 1 − P(H). If u(H,Ci) = cont(H), then also U(H/E) = cont(H). This measure favors logically strong hypotheses, and it is maximized by a contradiction. A step from a false complete answer to the true one does not count as real or estimated progress.

Popper sometimes seemed to claim that information content is the aim of science, but he qualified this with the requirement that an informative hypothesis has to be corroborated by severe tests (see Popper 1963: 217). As truth or information alone cannot be adequate epistemic utilities, the aim of science can be taken to be informative truth. This idea was formulated by Isaac Levi (1967) by defining the epistemic utility as a weighted combination of truth value and information content, where the weight of the information factor as ‘an index of boldness’ indicates how much the scientist is willing to risk error, or ‘gamble with truth’, in an attempt to avoid agnosticism. It follows that the expected utility is then a weighted combination of posterior probability and information content. In this sense, the weakly fallibilist demand of high posterior probability is balanced with the interest in bold (informative, a priori improbable) hypotheses. As Jaakko Hintikka observed, similar results are obtained if information content is treated as a truth-dependent epistemic utility (see Hintikka and Pietarinen 1966). If u(H,t) = cont(H) and u(H,f) = −cont(~H), then U(H/E) = P(H/E) − P(H), which is Carnap’s difference measure for the confirmation of H by E.

Popper’s (1963) comparative and quantitative definitions of truthlikeness or verisimilitude, as “the idea of approaching comprehensive truth” or “better or worse correspondence with reality”, belong to the strong tradition of fallibilism. Treating theories as deductively closed sets of statements in an interpreted scientific language L, with the sets T and F of true and false statements, respectively, theory A is more truthlike than theory B if and only if B ∩ T ⊆ A ∩ T (A has larger truth content than B) and A ∩ F ⊆ B ∩ F (A has smaller falsity content than B), where at least one of these set inclusions is strict. Among the important properties of this definition are the principles that truthlikeness is maximal for the complete truth T, and truthlikeness covaries with logical strength for true theories:

(2) T is at least as truthlike as any other theory.
(3) A ∩ T is more truthlike than A, if A is false.
(4) A is more truthlike than B, if both A and B are true and A is logically stronger than B.

Here the condition of logical strength means that A entails B but not vice versa. Popper’s definition was motivated by the idea that fallibilism should be able to make sense of the idea of scientific progress as increasing truthlikeness. Principle (2) expresses the idea that complete truth is the ultimate aim of progress, and principle (4) shows that this account includes as a special case progress by the accumulation of knowledge. The truth content principle (3) shows that sometimes a step from a false theory to a true one is progressive. The weakest true theory is a tautology, which represents ignorance as a cognitive state. It is not allowed that a false theory is more truthlike than a tautology (i.e., better than ignorance), but on the other hand, a tautology cannot be more truthlike than a false theory. However, David Miller and Pavel Tichý proved in 1974 that Popper’s definition fails dramatically: if A is more truthlike than B in Popper’s sense, A must be true. This means that it cannot be used for its intended purpose of comparing false theories.

After the failure of Popper’s attempt, a number of alternative and still debated precise definitions of truthlikeness have been given. Some attempts to rescue Popper’s approach in model-theoretical terms stumble on the implausible consequence that truthlikeness covaries with logical strength among false theories, so that a false theory could be improved just by joining new falsities to it. A more promising alternative is to restrict the deductive content of theories to relevant consequences (see Schurz and Weingartner 2010; see also G. Schurz, “Truthlikeness and approximate truth,” ch. 11 of this volume). One major approach employs the concept of similarity between states of affairs (see Oddie 1986; Niiniluoto 1987; Kuipers 2000; cf. Aronson et al. 1994). The rest of this chapter reviews some of the key features of this approach and its virtues for the realist.

In quantitative cases, similarity can be defined by the metric structure of the space of real numbers or real-valued functions. In qualitative cases, distance between statements can be defined by “feature matching”, that is, by counting the agreement and disagreement between their claims.
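As a toy illustration of feature matching (my own minimal example, in the spirit of the weather examples familiar from the truthlikeness literature): take a framework with three atomic sentences h (‘hot’), r (‘rainy’), and w (‘windy’), so that the complete answers are the eight conjunctions fixing each atom. A natural distance counts normalized disagreements:

\[
\Delta(C_i, C_j) \;=\; \tfrac{1}{3} \times \big(\text{number of atomic sentences on which } C_i \text{ and } C_j \text{ disagree}\big),
\]

so that, for example, \( \Delta(h \wedge r \wedge w,\; h \wedge r \wedge \neg w) = \tfrac{1}{3} \) and \( \Delta(h \wedge r \wedge w,\; \neg h \wedge \neg r \wedge \neg w) = 1 \).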

This allows us to define distances ∆(Ci,Cj) between complete answers in various kinds of cognitive problems, including singular statements about individual objects, existence claims, generalizations, and laws. Then the next stage is to extend this distance to partial answers, so that the distance ∆(H,Ci) of a disjunction H from a given Ci is a function of the distances between the disjuncts of H and Ci. If Ci is chosen as the target C*, which is the most informative truth in the given conceptual framework, then ∆(H,C*) measures the closeness of H to the complete truth C*. When this distance varies between 0 and 1, the degree of truthlikeness Tr(H,C*) of H relative to C* is equal to 1 − ∆(H,C*). In agreement with Popper’s (2), Tr(H,C*) is maximal if and only if H is equivalent to the complete truth C*. This degree can be high even when H is false. The refutation of Popper’s definition is avoided, since false theories can be compared for their truthlikeness. Most explications allow that some informative false theories may be so close to the truth that they are more truthlike than trivial truths like tautologies.

Oddie (1986) defines the function ∆(H,C*) as the average distance of the disjuncts of H from C*. It follows that his explication fails to satisfy Popper’s requirement (4): in some cases logically stronger true theories are farther from the truth than weaker truths. As an account of scientific progress, this is problematic. Instead, Niiniluoto’s (1987) min-sum measure satisfies (4). It defines ∆(H,C*) as a weighted sum of the minimum distance ∆min(H,C*) of the disjuncts of H from C* and the (normalized) sum ∆sum(H,C*) of all these distances. The minimum distance expresses our goal of being close to the target C* (this is sometimes used as an explication of the notion of approximate truth), and the sum factor gives a penalty to all mistakes allowed by H with respect to C*.

It is interesting to observe that this account of truthlikeness is an extension of Levi’s definition of epistemic utility to cases where the distance from the truth is relevant. If the measure ∆(Ci,Cj) is trivial in the sense that it equals 1 for different i and j, and 0 otherwise, that is, all mistakes are equally bad (in Levi’s terms, “a miss is as good as a mile”), then the min-sum measure is reduced to Levi’s definition, so that ∆min(H,C*) is the truth value of H and ∆sum(H,C*) expresses the information content of H.

When the target C* is unknown, degrees of truthlikeness are likewise unknown. The epistemic notion of truthlikeness attempts to tell how close we can estimate theory H to be to the target C*, given our background knowledge and available evidence E. Niiniluoto’s (1987) solution to the epistemic problem assumes that a rational probability measure P is defined for the language L, so that the posterior epistemic probability P(Ci/E) given evidence E is defined for each complete answer Ci in L. Then the unknown degree of truthlikeness Tr(H,C*) may be estimated by its expected value relative to the complete answers Ci and their posterior probabilities given evidence E:

(5) ver(H/E) = ∑ P(Ci/E) Tr(H,Ci).
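A worked miniature of (5), with invented numbers: let D = {C1, C2, C3}, with Tr(C2,C1) = 0.8, Tr(C2,C2) = 1, and Tr(C2,C3) = 0.4, and suppose evidence E yields P(C1/E) = 0.6, P(C2/E) = 0.3, and P(C3/E) = 0.1. Then

\[
\mathrm{ver}(C_2/E) = 0.6 \cdot 0.8 + 0.3 \cdot 1 + 0.1 \cdot 0.4 = 0.82.
\]

If further evidence E’ drove P(C2/E’) to zero while shifting probability to the nearby C1 (say P(C1/E’) = 0.9 and P(C3/E’) = 0.1), we would still have ver(C2/E’) = 0.9 · 0.8 + 0.1 · 0.4 = 0.76, illustrating the point made below that ver(H/E) may remain high even when P(H/E) = 0.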

It is important that ver(H/E) may be high even when P(H) = 0 or P(H/E) = 0. This approach is in agreement with the basic ideas of cognitive decision theory, where truthlikeness, as a combination of the values of truth and information, is the epistemic utility. As a result, we have an explication of scientific progress for a fallibilist realist:

(TrP) Step from theory H to theory H’ is progressive if and only if Tr(H,C*) < Tr(H’,C*).
(VerP) Step from theory H to theory H’ seems progressive on evidence E if and only if ver(H/E) < ver(H’/E).

According to definition (TrP), objective truthlikeness Tr gives an ahistorical standard for telling how close we really are to the target C*, even when we don’t know it, and likewise a standard of real progress in science. According to definition (VerP), estimated verisimilitude expresses our judgments about progress, sensitive to historically changing situations with variable evidence. In contrast, a decrease of objective or estimated truthlikeness is a mark of regressive development in science.

Progressive theory change in science often combines continuity and improvement: the new theory corrects the superseded theory in some respects but retains part of its content. This is illustrated by the Principle of Correspondence: the old theory is obtained from the new one when some parameters approach a counterfactual value. For example, while Einstein’s theory rejects Newton’s assumption that the mass of a moving body is independent of its velocity, some equations of classical mechanics follow from the special theory of relativity by letting the velocity of light c approach infinity. This principle is important when idealized theories are de-idealized or “concretized”. For example, the Boyle–Mariotte law pV = RT for an ideal gas can be de-idealized by the van der Waals law (p + a/V²)(V − b) = RT, which takes into account the intermolecular attractive forces a and the finite size b of gas molecules. This equation is a progressive improvement of the ideal gas law, but it has later been corrected in statistical thermodynamics (cf. Niiniluoto 1990).

Measures of truthlikeness allow us to explicate the traditional idea of progress as approach to the truth. A sequence of theories H1, H2, . . ., Hn approaches or seems to approach the truth C* if the values Tr(Hi,C*) or ver(Hi/E) approach 1 when n grows. Such convergence results about ver can be studied when the epistemic probabilities in (5) are defined in Hintikka’s systems of inductive logic, where universal generalizations may receive non-zero probabilities.

Definitions (TrP) and (VerP) presuppose that theories are assessed relative to the same target C*. This is motivated by the idea that the compared theories have to be rival answers to the same cognitive problem – there is no point in comparing the truthlikeness of Darwin’s theory of evolution and Einstein’s theory of relativity. If H and H’ are expressed in different languages L and L’, (TrP) has to be modified by translating these theories into a common conceptual framework. If the vocabulary of L’ is an extension of L, this common framework can be the richer language L’, and in other cases it might be the union of the vocabularies of L and L’. If the languages L and L’ have different meaning postulates, then for the purposes of comparison a theory H in L should include the specific meaning postulates of L, and similarly for a theory H’ in L’. In the richer framework, one may find continuity between theories H and H’ so that it is possible to compare their degrees of truthlikeness and speak about convergence to the truth. Gerhard Schurz (2011) shows this with his notion of structural correspondence, which links theoretical expressions responsible for the success of a superseded theory to some theoretical expressions of the superseding theory.

Reference invariance in spite of meaning variance and incommensurability can be defended also on the basis of appropriate theories of reference. For the realist it is important that, for example, rival theories of electrons by Lorentz and Bohr can be construed so that they refer to the same theoretical entity, even though both theories give in some respects mistaken descriptions of its nature. Philip Kitcher (1993) and Stathis Psillos (1999) have used causal accounts of reference for this purpose. While the ordinary Fregean descriptive account allows a theoretical term to refer only to those entities which the theory describes truly, so that a false theory cannot refer to anything, one can combine the descriptive theory of reference with a Principle of Charity: a theory refers to those real things which it describes in the most truthlike way (see Niiniluoto 1999: 128–132).
On this account, it is meaningful to state that rival false theories refer to the same entity and one of them gives a more truthlike description of it.

The relativization of truthlikeness to targets is in harmony with the key idea of conceptual pluralism. All inquiry is relative to some conceptual framework which is used by the scientists for describing reality. Immanuel Kant argued this in his critical philosophy, but he thought that we are prisoners of our native forms of sensibility and understanding. Critical realists instead argue that such conceptual frameworks can be changed, revised, and enriched. If a language lacks expressive power, we can always add new terms to its vocabulary. Our representations of reality are always piecemeal in the sense that we need not accept with Sellars (1968) the existence of an ideally adequate “Peirceish” conceptual framework, that is, there is no language L such that the mind-independent world is an L-structure. If such an ideal language existed, as metaphysical realists think, degrees of truthlikeness could be defined relative to its expressive vocabulary, but in practice the scientists have to work in languages which try to capture the most important aspects of the relevant fragment of reality. Moreover, even though the world can be categorized in alternative ways with different conceptual frameworks, the Tarskian correspondence theory of truth is applicable to the relation of a language L and the world as conceptualized by L. Contrary to Putnam’s (1981) internal realism, which denies the idea of a ‘ready-made world’, conceptual pluralism does not imply that the notion of truth is epistemic. Each language L has its own truths, but still truth is objective in the sense that we are free to choose the language L (with its vocabulary and interpretation), but the world decides the extensions of the L-terms and the truth values of L-sentences. Further, truth is not relative, since the truths about different L-worlds are all determined by the same world and therefore cannot be incompatible with each other.

Rowbottom (2015) argues against the definitions (TrP) and (VerP) that scientific progress is possible in the absence of increasing verisimilitude. He asks us to imagine that the scientists in a specific area of physics have found the maximally verisimilar theory C*. Then, by the criterion (TrP), no more progress is possible, but yet this general true theory could be used for further predictions and applications. One reply to this argument is that predictions from C* constitute new cognitive problems for the scientists. Moreover, on the basis of conceptual pluralism, in Rowbottom’s thought experiment it would still be possible for the physicists to achieve further progress by extending their conceptual framework in order to find a still deeper complete truth about their research domain.

In axiological terms, definition (TrP) expresses the view that the primary aim of science is informative truth: science is a truth-seeking and falsity-avoiding activity, whose success is measured by the truthlikeness of its best theories. For applied research, one could extend this treatment by adding other secondary aims, such as predictive power, simplicity, manageability, and social relevance. But as long as truth is included in the list, these accounts preserve a realist flavor.
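Returning briefly to the Principle of Correspondence invoked above, two standard textbook limits (added here only as illustrations; the computations are not Niiniluoto’s) make the idea concrete. Relativistic kinetic energy reduces to the classical expression when the velocity of light is let to grow without bound,

\[
E_k = mc^2\left(\frac{1}{\sqrt{1 - v^2/c^2}} - 1\right) = \frac{1}{2}mv^2 + \frac{3}{8}\frac{mv^4}{c^2} + \cdots \;\longrightarrow\; \frac{1}{2}mv^2 \quad (c \to \infty),
\]

and the van der Waals equation reduces to the ideal gas law when its correction parameters vanish:

\[
\left(p + \frac{a}{V^2}\right)(V - b) = RT \;\longrightarrow\; pV = RT \quad (a, b \to 0).
\]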

4 Debates among realists

Scientific realism has been formulated in several different ways in response to antirealist attacks. Most versions of ‘selective realism’ have not directly addressed the problem of scientific progress, but as such they are compatible with the idea of verisimilitude. Chakravartty (2009) makes this explicit in his treatment of ‘semirealism’. Psillos (1999) thinks that it is sufficient to use an intuitive notion of truthlikeness without a formal explication. If an ‘entity realist’ thinks with Cartwright (1983) that theories refer to reality but the laws of physics ‘lie’, this does not yet exclude progress in terms of truthlikeness. For epistemic and ontic ‘structural realists’, who wish to find out the true laws of nature or the structure of reality, closeness to the truth can be defined by measures of partial isomorphism (see da Costa and French 2003).

Among the realists, a new debate on scientific progress was started with Alexander Bird’s (2007) defense of an epistemic definition of progress as increase of knowledge. Here knowledge is not defined as justified true belief, but still it is taken to entail truth and justification, so that Bird’s epistemic view in fact returns to the old cumulative model of progress. Rowbottom (2008) contends against Bird that justification is instrumental rather than constitutive of progress, that is, justification is a means for establishing a link between truth seeking and truth finding.

Bird defends the epistemic view by arguing against the semantic view (which construes progress as accumulation of truths or as increasing verisimilitude) by the following thought experiment. Imagine that a scientific community has formed beliefs B by an irrational method M, such as astrology, and B happens to be true. M is then shown to be unreliable, and the beliefs B are given up. He goes on to suggest that for the semantic view the acquisition of accidentally true beliefs by an unreliable method is progress, and the rejection of unfounded but true beliefs is regressive, while the judgments of the epistemic view are opposite.

Bird’s argument draws attention to a hidden assumption that the primary application of the notion of scientific progress concerns successive theories which have been accepted by the scientific community. Some sort of tentative justification for such theories is presupposed even by a radical fallibilist like Popper. So one response to Bird would be to point out that irrational beliefs and beliefs without any justification simply do not belong to the scope of scientific progress (Niiniluoto 2014). Further, as Cevolani and Tambolo (2013) remind us, the verisimilitude approach handles issues about justification by means of the distinction between real and estimated progress (see (RP) and (EP)). By the standard of estimated truthlikeness, irrational adoption of true beliefs is not progressive, since their ver-value is low, and therefore it need not be regressive to give up such beliefs.

Bird could modify his original thought experiment so that the true beliefs B are obtained by scientific but unreliable means, perhaps by derivation from an accepted theory which turns out to be false (cf. Bird 2016). Then B has some justification but not enough to qualify B as knowledge. If a scientist reaches true or truthlike beliefs by such mistaken reasoning, are we compelled to say that progress has been achieved? This example resembles the Gettier paradox, since here the beliefs B are true but for a wrong reason. Again we have the option of saying that the belief B was not justified by the expected verisimilitude measure. Alternatively, we might emphasize that the assessments of theories by the ver-measure are fallible, especially if false evidence is allowed, so that they can also reflect scientists’ mistakes. Further evidence may show that the initial estimate ver(H/E) was too high, which gives rational reason to reject H or suspend judgment about H.

A fallibilist could also combine the conditions (TrP) and (VerP) by saying that the step from theory H to theory H’ is genuinely progressive if H’ is more truthlike than H both in the objective Tr-sense and the estimated ver-sense. Then genuine progress would not be achieved in cases in which there is a discrepancy between objective and estimated truthlikeness. Consider the case of Blondlot’s N-rays, for example (cf. Bird 2007). Here the scientific community was misled into believing in these rays and writing more than 300 articles on them (i.e., the ver-value was high), but this period was not genuinely progressive, since these rays did not exist (the Tr-value was low).

There are also important historical cases in which the Tr-value is high but the ver-value is initially low. The term ‘serendipity’ refers to situations in which a scientist makes a lucky discovery while she was searching for something else. In examples of anticipation a good theory is first suggested without sufficient justification and only much later is shown to be true (e.g., Aristarchus on the heliocentric system, Wegener on continental drift). The initial evidence E for such a hypothetical theory H may be weak, and the theory H is not yet accepted in science, but genuine progress is eventually achieved when new evidence E’ increases its expected verisimilitude and thus gives reasons to claim that H is truthlike and leads to the acceptance of H.
A fallibilist alternative to Bird’s epistemic view would be to propose a notion of conjectural knowledge which does not presuppose truth. This terminology is in fact common practice in everyday life, where the currently accepted, so-far-best theories are said to belong to “scientific knowledge” in spite of their falsity. For centuries Newton’s theory was believed to be true, even though it is at best approximately true. So let us say that the scientific community SC knows* that p if and only if p is truthlike and the estimated truthlikeness of p is larger than that of its rivals on available evidence (cf. Niiniluoto 1999: 84). But even this weakened form of knowledge* would capture only some examples from the history of science. Furthermore, this account would differ from Bird’s epistemic view of progress, since such sequences of known* theories would not be cumulative in the sense that later theories entail the earlier ones. Bird’s own proposal for treating sequences of false theories (such as the transitions from Galileo to Newton to Einstein and from Ptolemy to Copernicus to Kepler) has problems in distinguishing progress and regress in science without relying on the notion of truthlikeness (see Niiniluoto 2014).

5 Antirealist accounts of scientific progress

Antirealists deny that science makes progress by any realist standards, but some of them have proposed quite detailed accounts of their own views of scientific progress. These challenges are stimulating to scientific realists, who at least in some cases can reinterpret the antirealist approaches as compatible with realism or even supporting realism.

Besides skepticism, which denies the possibility of knowledge altogether, the most classical antirealist approach is instrumentalism. In his 1906 work, Duhem denied that theories have truth values. Real progress occurs only slowly and constantly on the level of the increasing empirical content of theories (see Duhem 1954: 38–39). Thus, progress means the accumulation of observational statements covered by fluctuating theories. However, Duhem added to his instrumentalism the claim that physical theory makes progress by becoming “more and more similar to a natural classification which is its ideal end” (ibid.: 298). One may wonder whether such a peculiar form of convergent realism makes sense without assuming that the classified laws refer to real theoretical entities.

Progress as empirical accumulation is formulated in some empiricist accounts of reduction, where a new theory includes all true or verified empirical consequences of its predecessor (see e.g., Kemeny and Oppenheim 1956). A similar account of progress could be viewed as part of Bas van Fraassen’s constructive empiricism, which demands that theories aim at empirical adequacy, that is, that all of their observational consequences are true.

Thomas Kuhn attacked Popper’s idea of progress by asking whether it really helps “to imagine that there is some one full, objective, true account of nature” so that “the proper measure of scientific advancement is the extent to which it brings us closer to that ultimate goal” (Kuhn 1970: 171). But we have seen that a theory of truthlikeness can be developed without assuming a unique “Peirceish” target of all truths. Conceptual pluralists can also find a realist and non-relativist response to Kuhn’s thesis that “there is no theory-independent way to reconstruct phrases like ‘really there’” (ibid.: 206).

Kuhn’s treatment of puzzle solving inspired Laudan’s (1977) thesis that science is a problem-solving rather than a truth-seeking activity. Progress for Laudan can be defined by the problem-solving effectiveness of a theory (the number of solved problems minus the anomalies and generated conceptual problems). As for him solving an empirical problem means that a ‘statement of the problem’ is deduced from a theory, the number of solved problems is equivalent to what Hempel called the systematic power of a theory.

Laudan and van Fraassen are axiological non-realists in the sense that they do not include truth among the aims of science. However, both of them admit that scientific theories have unknown (and for them uninteresting or even utopian) truth values. So a strategy that a scientific realist can use against their view is to argue that empirical success or problem-solving ability is a fallible indicator of the truth of the theory. The Bayesian theory of confirmation shows that successful explanations and predictions of empirical phenomena confirm the theory by increasing the posterior probability that the theory is true (see Niiniluoto 2007).
This treatment can be generalized to cases where the empirical success is approximate and the conclusion via the ver-function concerns the estimated truthlikeness of the theory. The form of such arguments is what Peirce called abduction, and they can be formalized as inferences to the best explanation.

Another way of looking at problem solving is to replace the syntactical formulation of a theory as one overall claim about the world by the semantic or structuralist notion of a theory as a network of applications to specific domains (see Stegmüller 1976). But each such application constitutes a cognitive problem, and the success of a theory depends on the truthlikeness of its solutions of such problems. This leads to a kind of local realism with respect to the applications and models of a theory. An overall realist measure of the cognitive problem-solving ability of a theory can be given by the weighted sum of its local successes (in terms of the Tr- or ver-function; see Niiniluoto 1984).

Scientific theories are also pragmatically successful as guides of our actions. Some philosophers have proposed this as a definition of scientific progress. Thus, Nicholas Rescher’s (1977) ‘methodological pragmatism’ characterizes scientific progress as ‘the increased success of applications in problem solving and control’. A similar proposal by Heather Douglas (2014: 62) defines scientific progress as “the increased capacity to predict, control, manipulate, and intervene in various contexts”. A realist should resist such reduction of scientific progress to technological progress and again argue abductively that success in practice is an indicator of the truthlikeness of a theory.

In his “confutation of convergent realism”, Laudan (1984) used the pessimistic meta-induction (PI) to infer from the premise that many theories in the history of science (e.g. ether and caloric theories) have been non-referring and false but yet to some extent empirically successful to the conclusion that this is the fate of our current and future theories as well (see P. Vickers, “Historical challenges to realism”, and K. Stanford, “Unconceived alternatives and the Strategy of Historical Ostension,” chs. 4 and 17 of this volume, respectively). But we have seen that a fallibilist realist can admit that historical sequences of false theories are still progressive in the sense of increasing truthlikeness. This is not a ‘Pyrrhic victory’ for scientific realism (see Stanford 2006) but rather gives support to comparative realism or the realist picture of progressive science (see Kuipers 2009; Niiniluoto 2017).

The pessimistic meta-induction also challenges the explanatory connection between theoretical truth and empirical success. Therefore, antirealists have used it as a weapon against the no-miracle argument (NMA), which asserts that the success of a theory would be a miracle unless it is true or truthlike (Musgrave 1988). But the realist can still argue that the empirical and pragmatic success of a theory is due to the truthlikeness of some of its parts. For true theories, all of their deductive empirical consequences must be true as well. For truthlike theories, the matter is more complicated, but still closeness to theoretical truth is sufficient to guarantee at least approximate, average, and probable empirical and pragmatic success (see Niiniluoto 1984: 179–183; Kuipers 2000). Further, attempts to give antirealist explanations of the success of science on the basis of pragmatism or constructive empiricism have failed (see Niiniluoto 1999: 197–199; Psillos 1999), so that realism seems to give the best and the only explanation of such success.

Further reading

I. Niiniluoto, “Scientific Progress,” www.oxfordbibliographies.com (2012) is a bibliographical review. I. Niiniluoto, “Scientific Progress,” in E. Zalta (ed.), The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu (2015) gives a survey of philosophical debates on progress. Graham Oddie, “Truthlikeness,” in E. Zalta (ed.), The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu (2014), gives a survey on verisimilitude. I. Hacking (ed.), Scientific Revolutions (Oxford: Oxford University Press, 1981), collects classical articles on scientific change.


References
Aronson, J. L., Harré, R. and Way, E. C. (1994) Realism Rescued: How Scientific Progress Is Possible, London: Duckworth.
Bird, A. (2007) “What Is Scientific Progress?” Nous 41, 92–117.
——— (2016) “Scientific Progress,” in P. Humphreys (ed.), Oxford Handbook of Philosophy of Science, Oxford: Oxford University Press.
Cartwright, N. (1983) How the Laws of Physics Lie, Oxford: Oxford University Press.
Cevolani, G. and Tambolo, L. (2013) “Progress as Approximation to the Truth: A Defence of the Verisimilitudinarian Approach,” Erkenntnis 78, 921–935.
Chakravartty, A. (2009) A Metaphysics for Scientific Realism: Knowing the Unobservable, Cambridge: Cambridge University Press.
da Costa, N. and French, S. (2003) Science and Partial Truth, New York: Oxford University Press.
Douglas, H. (2014) “Pure Science and the Problem of Progress,” Studies in History and Philosophy of Science 46, 55–63.
Duhem, P. (1954) The Aim and Structure of Physical Theory, Princeton: Princeton University Press.
——— (1969) To Save the Phenomena: The Idea of Physical Theory from Plato to Galileo, Chicago: Chicago University Press.
Hintikka, J. and Pietarinen, J. (1966) “Semantic Information and Inductive Logic,” in J. Hintikka and P. Suppes (eds.), Aspects of Inductive Logic, Amsterdam: North-Holland, pp. 96–112.
Kemeny, J. and Oppenheim, P. (1956) “On Reduction,” Philosophical Studies 7, 6–19.
Kitcher, P. (1993) The Advancement of Science: Science without Legend, Objectivity without Illusions, Oxford: Oxford University Press.
Kuhn, T. (1970) The Structure of Scientific Revolutions (2nd ed.), Chicago: The University of Chicago Press.
Kuipers, T. (2000) From Instrumentalism to Constructive Realism: On Some Relations between Confirmation, Empirical Progress, and Truth Approximation, Dordrecht: Kluwer.
——— (2009) “Comparative Realism as the Best Response to Antirealism,” in C. Glymour, Wang Wei, and D. Westerståhl (eds.), Logic, Methodology and Philosophy of Science: Proceedings of the Thirteenth International Congress, London: College Publications, pp. 211–240.
Laudan, L. (1973) “Peirce and the Trivialization of the Self-Corrective Thesis,” in R. N. Giere and R. S. Westfall (eds.), Foundations of Scientific Method: The Nineteenth Century, Bloomington: Indiana University Press, pp. 275–306.
——— (1977) Progress and Its Problems: Toward a Theory of Scientific Growth, London: Routledge and Kegan Paul.
——— (1984) Science and Values: The Aims of Science and Their Role in Scientific Debate, Berkeley: University of California Press.
Levi, I. (1967) Gambling with Truth: An Essay on Induction and the Aims of Science, New York: Alfred A. Knopf.
Musgrave, A. (1988) “The Ultimate Argument for Scientific Realism,” in R. Nola (ed.), Relativism and Realism in Science, Dordrecht: Kluwer, pp. 229–252.
Niiniluoto, I. (1984) Is Science Progressive? Dordrecht: D. Reidel.
——— (1987) Truthlikeness, Dordrecht: D. Reidel.
——— (1990) “Theories, Approximations, and Idealizations,” in J. Brzezinski, F. Coniglione, T. Kuipers, and L. Nowak (eds.), Idealization I: General Problems, Amsterdam: Rodopi, pp. 9–57.
——— (1999) Critical Scientific Realism, Oxford: Oxford University Press.
——— (2007) “Evaluation of Theories,” in T. Kuipers (ed.), Handbook of Philosophy of Science: General Philosophy of Science – Focal Issues, Amsterdam: Elsevier, pp. 175–217.
——— (2014) “Scientific Progress as Increasing Verisimilitude,” Studies in History and Philosophy of Science 46, 73–77.
——— (2017) “Optimistic Realism about Scientific Progress,” Synthese 194, 3291–3309.
Oddie, G. (1986) Likeness to Truth, Dordrecht: D. Reidel.
Peirce, C. S. (1931–35) Collected Papers vols. 1–6 (C. Hartshorne and P. Weiss, eds.), Cambridge, MA: Harvard University Press.
Popper, K. R. (1963) Conjectures and Refutations: The Growth of Scientific Knowledge, London: Hutchinson.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
Putnam, H. (1981) Reason, Truth, and History, Cambridge: Cambridge University Press.
Rescher, N. (1977) Methodological Pragmatism, Oxford: Blackwell.
Rowbottom, D. (2008) “N-Rays and the Semantic View of Scientific Progress,” Studies in History and Philosophy of Science 39, 277–278.


——— (2015) “Scientific Progress without Increasing Verisimilitude: In Response to Niiniluoto,” Studies in History and Philosophy of Science 51, 100–104.
Sarton, G. (1936) The Study of the History of Science, Cambridge, MA: Harvard University Press.
Schurz, G. (2011) “Structural Correspondence, Indirect Reference, and Partial Truth: Phlogiston Theory and Newtonian Mechanics,” Synthese 180, 103–120.
Schurz, G. and Weingartner, P. (2010) “Zwart and Franssen’s Impossibility Theorem Holds for Possible-Worlds Accounts but Not for Consequence-Accounts to Verisimilitude,” Synthese 172, 416–436.
Sellars, W. (1968) Science and Metaphysics, London: Routledge and Kegan Paul.
Stanford, P. K. (2006) Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, Oxford: Oxford University Press.
Stegmüller, W. (1976) The Structure and Dynamics of Theories, New York: Springer.

16 REALISM AND THE LIMITS OF EXPLANATORY REASONING

Juha Saatsi

1 Introduction

The nature and limits of scientific explanations and explanatory reasoning are central topics in philosophy of science. They are also important in the scientific realism debate, where much has been written about inference to the best explanation. In this chapter I will first examine issues surrounding inference to the best explanation, its justification, and its role in different realism arguments before turning to more general issues concerning explanations' ontological commitments. Throughout the chapter I stress the importance of thinking carefully about the nature of explanation in connection with issues related to explanatory reasoning and scientific realism. Undoubtedly this is easier said than done: the nature of scientific explanation is a broad and controversial topic in its own right. A decade and a half ago Newton-Smith (2000) rightly complained that philosophical analysis of scientific explanation is 'an embarrassment for the philosophy of science':

While we have insightful studies of explanation, we are a very long way from having this single unifying theory of explanation. [. . .] [W]e would like to be able to explain what it is that leads us to count different explanations as explanatory. This task is made all the more pressing as most philosophers of science hold that a main task, if not the main task, of science is to provide explanation, whatever that may be. (p. 132)

Although studies of explanation have progressed in leaps and bounds since the turn of the millennium, we are still far from having reached a broad consensus, and explanation-centred debates within the realism debate continue to suffer from lack of contact with the best work done in the philosophy of explanation. The challenge in pinning down a theory of explanation hardly reduces its significance, however. Indeed, it is easy to appreciate the importance of having a good grasp on what it is to explain prior to trying to determine the nature and limits of inference to the best explanation. As Newton-Smith (ibid.) forcefully points out:

[I]t is hard to see how we will be able to adjudicate the substantial claims about the relation of explanation to epistemology without such a unifying account. Realists, for instance, typically claim that the greater a theory's explanatory power, the greater its likely truth or approximate truth. Without the backing of a unifying account of explanation, this claim is suspect. (p. 132)

It is an open question, of course, whether any such unifying account of explanation can be given or whether some sort of pluralism is in the offing (cf. Reutlinger and Saatsi 2018: §1). Either way, it is critical to emphasise the importance to the realism debate of a decent grasp of the nature of explanation. In this chapter I will discuss various issues to this effect, first in connection with the realists’ appeal to inference to the best explanation (section 3), and then with respect to more general questions concerning explanatory indispensability and ontological commitment (section 4). In both contexts I aim to bring out how recent developments in the philosophy of explanation fruitfully intersect with the scientific realism debate. But before we get to these matters I need to provide some context.

2 Some context

Explanations and explanatory reasoning are at the heart of the sciences, which in large part seem to be in the business of helping us understand the world. For instance, the standard model of particle physics appeals to the Higgs boson and spontaneous symmetry breaking in order to explain why some particles have mass. For a much less theoretical and less contemporary example, consider Darwin's theory of natural selection, which beautifully explains the evolution of replicating living things. Perhaps the perceived explanatory goodness of these theories is (at least partly) responsible for their high standing? Consider Darwin's comment in On the Origin of Species on the wealth of evidence, including morphological and embryological data, supporting natural selection:

[I]t can hardly be supposed that a false theory would explain, in so satisfactory a manner as does the theory of natural selection, the several large classes of facts above specified. It has recently been objected that this is an unsafe method of arguing; but it is a method used in judging of the common events of life, and has often been used by the greatest natural philosophers. (Darwin 1962: 476)

Amongst the greatest natural philosophers one finds Newton (1687), whose first methodological rule in Principia states, "No more causes of things should be admitted than are both true and sufficient to explain their phenomena." Whewell (1858) takes the quest for the best explanation to be at the heart of Newton's method. He summarises his own idea of 'consilience of inductions' thus:

[T]he evidence in favour of our induction is of a much higher and more forcible character when it enables us to explain and determine cases of a kind different from those which were contemplated in the formation of our hypothesis. The instances in which this has occurred, indeed, impress us with a conviction that the truth of our hypothesis is certain. No accident could give rise to such an extraordinary coincidence. (pp. 87–88)

These are famous examples of commonplace methodological references to explanation in science, supporting the idea that theories' perceived capacity to explain phenomena is often much more than an output of successful science: it is also an input for scientists' assessment of the evidential support for theories. The exact role of explanatory considerations in the scientific methodology is a contested topic, but one prominent school of thought views scientists' evaluations of explanatory goodness as the very guide to making many ampliative (i.e. deductively invalid) inferences in science (e.g. Boyd 1981; Lipton 2004; Psillos 1999; McCain 2016). According to this tradition scientific inferences are (often) made to the best available explanation – that is, they are inferences to the best explanations.1 There are substantial questions about the notion of inference to the best explanation (IBE) – to be sharpened shortly (§3) – as a characterization of the scientific method, confirmation, and theory-choice. In confirmation theory the dominant Bayesian trend is probabilistic, and the relationship between Bayesianism and explanatory reasoning is a contested topic of significant current interest (see Douven 2017: §4). Independently of that specific issue, many contemporary philosophers put much less weight on explanatory reasoning in interpreting the methodological pronouncements of Newton or Darwin, or in making sense of theory-choice in contemporary physics, for example (e.g. Achinstein 2013; Dawid 2013). Some very prominent figures in the philosophy of science from Mach and Duhem onwards have downplayed the explanatory dimension of science. According to the latter, for example, "a physical theory [. . .] is an abstract system whose aim is to summarise and classify logically a group of experimental laws without claiming to explain these laws" (Duhem 1906: 7). Some of the most influential contemporary anti-realists belong to this tradition that downplays the epistemic importance of explanation. Neo-instrumentalists, such as Stanford (2006), regard fundamental scientific theories as effective instruments for prediction, manipulation, and control of phenomena, not as explanatory descriptions of the reality beyond those phenomena. Similarly, empiricists to this day regard as suspect scientific explanations of observable phenomena in terms of unobservable causes and laws, and insofar as scientists themselves put weight on the explanatory virtues of theories, empiricists such as van Fraassen (1980, 2004) tend to regard these virtues as merely pragmatic, as opposed to epistemic. (Even if it is the case that according to the scientific standards theories that explain better really are better – generally speaking, other things being equal – it does not automatically follow that explanatory judgements are truth-conducive, or that explanatory understanding should be the aim of science.) Realists, by contrast, tend to put much more weight on the explanatory dimension of science, noting that the core explanatory ingredients in science routinely make indispensable reference to unobservable causes, mechanisms, symmetries, laws, and so on, all of which naturally involve epistemological and ontological commitments that anti-realists denounce. Hence, standing up for the explanatory aspirations of science and its aim to give us genuine understanding of the world behind the appearances arguably requires a realist commitment to whatever is doing the explaining.
Empiricists' desire to make sense of science without 'inflationary metaphysics' of laws of nature, natural kinds, and objective modality motivates a distinctly pragmatic account of explanation, according to which a theory's explanatory goodness is just a matter of the theory providing answers to context-sensitive why-questions, which is something that false theories can also do (van Fraassen 1980). The tenability of such a deeply pragmatic account of explanation is highly questionable, however (e.g. Kitcher and Salmon 1987). How much metaphysics does a realist have to embrace in order to capture the alleged explanatory success of science? This question is at the heart of some of the current controversies regarding scientific realism (see S. French, "Realism and metaphysics" and M. Slater, "Natural kinds for the scientific realist," chs. 31 and 34 of this volume, respectively). Answering it requires engaging with philosophical accounts of causation, natural kinds, and explanation. For whether genuine explanatory success requires correctly representing non-Humean causal connections, say, depends on what it is to explain. While some have defended more metaphysically laden views of scientific explanation (e.g. Salmon 1984; Craver 2007), others have attempted to stay much more neutral on the metaphysics of causation and explanation (e.g. Woodward 2003). If a metaphysically 'thin' modal account of explanation is defensible, it may also offer a way to capture the explanatory successes of metaphysically more troublesome areas of science, such as quantum physics, where the realist may want to avoid making any specific metaphysical commitments.2 Many realists have operated with ontologically more committing accounts of explanation in mind. A natural realist intuition is that 'to explain' is a success term: an actual explanation requires the (approximate) truth of the explanatory assumptions; else we merely have a potential explanation at best.3 As a natural consequence many realists have regarded explanatory indispensability as the key to determining what is real. This tradition goes back to Quine and Sellars, and it has been recently championed by, for example, Baker (2009), Colyvan (2013), Field (1989), and Psillos (2005, 2011b). According to this line of thought our best scientific explanations and their ontological requirements fix the realist commitments. This raises questions about the status of mathematics, and abstract and idealised models, which can arguably play an indispensable explanatory role in accounting for empirical phenomena. The advocates of the explanatory indispensability argument have argued that we should extend our realist commitments from typical unobservable realist posits, such as electrons, to mathematical and other abstracta. This holistic application of explanatory reasoning in defence of realism has become another point of contention in the scientific realism debate. So much for general context setting. We have seen how issues concerning scientific realism, explanation, and IBE naturally arise from a positive epistemic attitude towards scientific explanations. The rest of this chapter presupposes that IBE can provide a valid description of at least some significant inferences in science. The philosophical issues to be discussed focus on the question of justification: what can we justifiably believe or ontologically commit ourselves to given such explanatory practices? Next I will discuss how some of the key arguments for realism, more specifically, turn on explanatory reasoning and IBE.

3 IBE and realist arguments

Let's now examine IBE and its connection to scientific realism in a bit more detail. As already said, the basic gloss on IBE is that explanatory considerations play an evidential role in science: the explanatory goodness of theoretical hypotheses is an important factor in the assessment and justification of those hypotheses. Putting a normative spin on it, we can say that according to IBE, explanatory virtues – parsimony, unification, explanatory scope, and precision, for example – should be taken into account in assessing competing hypotheses' comparative likelihoods. To borrow Peter Lipton's (2004) turn of phrase: explanatory loveliness should be taken as a guide to likeliness. This basic idea offers only a starting point; further constraints and refinements are required to make IBE a workable form of inference (see, e.g., Lipton 2004; Douven 2017). It is natural to demand, for instance, that the competing hypotheses should be good enough (qua explanations) to be worth inferring to, and we can further demand that the best explanation should be sufficiently clearly the best to be a worthy winner. If realists could argue that a normative, justificatory idea of IBE is well grounded, so that lovelier explanations are indeed more probably (approximately) true, we would have a good reason to epistemically prefer the theories and hypotheses arrived at by this method. This would not yet quite deliver the realist conclusion, however, since IBE (thus construed) does not yet say anything about how likely the best explanations are to be (approximately) true. The reliability of using comparative explanatory loveliness as a guide to comparative inductive likeliness is compatible with the possibility that our best explanations are not very likely to be (approximately) true after all. Indeed, perhaps the best explanations (in fundamental science, say) are more often than not only the best of a bad lot that does not include an (approximately) true theory (van Fraassen 1989). And why should we regard a normative idea of IBE as well-grounded in the first place? This is where arguments for realism come in. Consider, to begin with, the original No-Miracles Argument (NMA), which capitalises on the intuition that only scientific realism can account for the impressive empirical success of science. (Exactly how 'empirical success' should be understood and how it can be leveraged into an argument for realism raise various issues that I am going to gloss over here; see K. Brad Wray, "Success of science as a motivation for realism," ch. 3 of this volume.) The classic presentation of NMA, going back to Putnam (1978), simply puts the argument forward as an IBE: (i) the phenomenon to be explained is the empirical success of science; (ii) the realist idea that the best scientific theories and hypotheses are systematically latching onto reality gives the best explanation of that phenomenon; therefore (iii) realism is justified via the application of IBE. A number of authors have attempted to further articulate and defend this kind of IBE-based argument for realism (e.g. Boyd 1981; Musgrave 1988; Barnes 2002; Psillos 1999: §4, 2009: §3, 2011d). Further articulation is required, since there are a number of fairly obvious and well-known worries about NMA thus presented. Is the realist explanation really the best? Is it good enough? Doesn't the realist's application of IBE in her argument simply beg the question against those who are sceptical about IBE to begin with? There are equally well-known attempts to respond to these worries, detailed in, for example, Psillos (1999, 2011d). It is worth noting that some of these responses substantially hinge on broader issues in epistemology, for example in relation to internalism versus externalism regarding justification, and how knowledge is analysed. In particular, arguably from an externalist perspective the realist's use of IBE in NMA need not beg the question against an IBE-sceptic, if our explanatory reasoning about the world (including science and its success) is de facto reliable (see A. Bird, "Scientific realism and epistemology," ch. 33 of this volume). In response to the bad lot objection, more specifically, it has been argued that the cumulative nature of science – the auxiliary assumptions of today's theorising are products of earlier IBEs – gives evidence of scientists' ability to come up with hypotheses which include (approximately) true ones (Lipton 1993; cf. Dellsén 2017 for recent criticism). Let's dig a bit deeper into a couple of interesting features of NMA. Many advocates of NMA from Putnam onwards have presented it as a methodologically naturalistic argument that is continuous with scientific reasoning itself (Boyd 1981; Psillos 1999). That is, although the realist explanation takes as its data particular facts about science – namely, science's empirical success, as well as the role of explanatory reasoning as a driver of this success – the argument itself is meant to exemplify the very qualities that good scientific reasoning exhibits.
There is clearly an air of circularity in the way the realists aim to justify scientific IBEs as well grounded with this (meta-level) IBE about science, but realists have argued that this circularity is not pernicious. Instead of being viciously premise-circular, NMA involves a kind of rule circularity, in that the rule of inference employed – namely, IBE – also appears in the conclusion of the inference (Psillos 1999). Arguably rule circularity need not be problematic, at least from the point of view of externalist epistemology. It may be the case that the argument only succeeds in 'preaching to the converted' – namely those willing to adopt a realist stance at the outset. But that need not be a futile upshot, and perhaps it is the best we can hope for (Psillos 2011a). An appropriate similarity between scientists' IBEs and the realist's meta-level IBE is critical for prominent vindications of NMA, but the status of NMA as a 'methodologically naturalistic' argument is far from straightforward. For example, Frost-Arnold (2010) argues that the realist explanation of the empirical success of science fails to satisfy the scientific criteria for a good explanation, because it neither unifies nor generates new predictions. Since NMA arguably fails to satisfy the scientifically proven canons of a good explanatory inference, Frost-Arnold argues that it is not sanctioned by the tenets of naturalistic philosophy. If you take your cue for assessing explanatory goodness from the sciences, the realist explanation of the empirical success of science arguably does not come out as a good (enough) explanation to be warranted by IBE. In my view Frost-Arnold's criticism of NMA presupposes too strict a conception of naturalism. It should be allowed that the realist explanation can be science-transcending and purely philosophical, in the sense that it does not enjoy the degree of evidence there is for paradigmatically good scientific explanations. The mode of inference can nevertheless be the same in the realist argument and in various scientific instances of theory-choice, even when the overall evidence (or epistemological standard) is not. As far as I can see, there is nothing in the tenets of methodologically naturalistic philosophy that commits the realist to the claim that her philosophical theory (about science) is supported to the same degree that scientific theories themselves are supported. One should not object to explanationism in the context of philosophy of science merely on the grounds that it does not have probative force on a par with the explanatory inferences in science (Saatsi 2016). After all, presumably there are reasons why the realist doctrine counts as a philosophical theory, as opposed to being scientific. There are other, related aspects of NMA that one perhaps should find genuinely problematic, however. Consider, for instance, the rather global character of NMA. This single realist argument covers a lot of science in one fell swoop: all of (mature?) science that employs IBE-driven inferences and produces empirical successes of the requisite sort. It is notable that many of these sciences have radically different kinds of subject matters – for example cosmology, quantum physics, molecular biology, geology, ecology – some of which are further removed from human everyday experience than others. (The other-worldliness of modern physics is a case in point, of course.) Arguably the modes of explanation employed also differ widely, some explanations being non-causal and abstract, while others are straightforwardly causal-mechanical, say. In the light of all this it is natural to worry that scientists' reliability in their explanatory reasoning could well vary a great deal from one domain of science to another. Perhaps we are, for instance, much less good at conceiving of all the alternative explanations in fundamental physics than we are in molecular biology. And perhaps we are less reliable assessors of explanatory goodness in connection with quantum phenomena than we are in connection with geology. On such grounds one might well worry that NMA over-generalises in its attempt to argue for realism about 'all of mature science' via a single application of IBE. The best explanation for empirical success could well differ from one area of science to another (Saatsi 2015). Moreover, the (global) realist's inclination to generalise and abstract away from details of different subject matters goes further than this.
In particular, when the realist emphasises the rule-circular character of NMA, she regards this particular philosophical IBE as being on a par, in a justificatory sense, with the scientific IBEs that are part of the subject matter of NMA (cf. Psillos 1999: §4). A philosophical (meta-level) IBE – namely NMA – that is about scientific IBEs is meant to be a good inference by virtue of being the same kind of inference as the scientific IBEs. This kind of similarity between NMA and scientific IBEs can be claimed at a descriptive level by abstracting away from all the differences in the respective explanations that are at stake in the realist argument, on the one hand, and the various scientific inferences, on the other. (What is the nature of the realist explanation, exactly? It's not clear that any of the prominent accounts of scientific explanation apply to it.) But why think that those differences do not matter for our reliability in explanatory reasoning? To the contrary, given how different the realist explanation of the success of science is from all scientific explanations, we might well want to be sceptical of the realist's IBE, even if (many of) the scientific IBEs are reliable inferences. The same point applies to various other philosophical IBEs, which are often justified by reference to empirically fruitful employment of IBE in science. (See Saatsi 2016 for more detailed discussion.) The various issues with NMA speak against global explanationist realism. But there are other, less global arguments for realism, some of which turn on more piecemeal attempts to justify scientists' IBEs. For example, Lipton (2004) argues that the realist can justify at least some causal-contrastive IBEs involving unobservable causes on the basis of inductive evidence of our reliability in using these specific kinds of IBEs when reasoning about observable causes. The thought is that the relevant IBEs form a sufficiently unified kind of inference, underwritten by the specific kind of explanation involved, to support an inductive projection of our reliability (qua explanatory reasoners) from cases with observable explanans to cases with unobservable explanans. What is clearly important for spelling out a local justification of IBE along these lines is a good grasp on the nature of explanations at stake. This is one way in which philosophy of explanation interacts with the realist arguments. Finally, it is worth emphasising that although considerations concerning explanatory reasoning have been central to much of the realist gambit, it would be a mistake to regard all realist arguments as (turning on) IBEs. For instance, Kitcher (2001) develops a quite thoroughgoing (but not global) realist strategy – the so-called 'Galilean strategy' – for delineating the conditions under which we can project the reliability of different modes of reasoning from the observable to the unobservable.4 Achinstein (2002) has argued that Perrin's theoretical reasoning regarding the reality of atoms exemplifies a realist argument that does not turn on explanatory considerations.5 Hacking (1982) and Cartwright (1983) have presented local arguments for realism about various kinds of entities on the basis of experiments, independently of explanatory considerations. Although it has been argued that the entity realist arguments are best viewed as ultimately turning on IBE (Reiner and Pierson 1995; Pierson and Reiner 2008), there are also alternative ways of interpreting and precisifying those arguments, turning on a more local conception of what makes particular inductive inferences licit (Saatsi 2009).

4 Explanations' realist commitments

A central question in naturalistic, science-driven philosophy concerns our best scientific theories' realist commitments. If we adopt a realist stance (at least for the sake of the argument) and take seriously the explanatory achievements of science, what exactly are we committed to being realists about in the light of our best theories? One influential answer to this question turns on what Stathis Psillos (2005: 389) has called 'the explanatory criterion of reality': "something is real if its positing plays an indispensable role in the explanation of well-founded phenomena."6 This is certainly a natural starting point for a realist, as it captures the gist of the intuition that explanation is at its heart a factive notion: to explain is to get right the relevant explanatory features of reality. If we appeal to some feature of the world as explaining an empirical phenomenon – for example to solar flares as the explanatory cause behind an exceptional aurora borealis – presumably we should take as real the feature doing the explaining. And the fact that there are multifarious phenomena that scientists simply cannot understand without appealing to, say, electrons is surely behind scientists' conviction that electrons are real. Electrons play an indispensable role in various scientific theories, furnishing the best available explanations of the relevant phenomena. Realism about electrons follows from taking our best explanations seriously. This is just IBE in action. So far, so plausible. But the explanatory criterion of reality very quickly leads to head scratching. For one thing, it is not at all obvious how to square it with the fact that past scientific theories, such as Newtonian gravity, are still broadly taken to be genuinely explanatory of various (e.g. tidal) phenomena (Bokulich 2016). For another, in its simplicity the criterion counsels realist commitment to everything that is indispensable for accepted scientific explanations of empirical phenomena. When one looks at scientific explanations more closely, it turns out that they can indispensably involve not only physical posits like electrons, solar flares, and so on, but also mathematics, abstractions – for example, average height or a donut's centre of mass – and idealisations – for example, finite systems being modelled as infinite (see M. Leng, "Mathematical realism and naturalism," and A. Levy, "Modelling and realism: strange bedfellows?," chs. 32 and 19 of this volume). It seems that in many cases it is impossible to provide equally good explanations without recourse to abstractions and idealisations. This raises an interesting issue concerning the ontological commitment of such indispensable theoretical posits, assuming we want to infer what is real from our best explanations: taken at face value, the explanatory criterion of reality recommends commitment to abstract things that scientists themselves may casually regard as 'fictional,' 'idealized,' or as mere mathematical scaffolding needed to provide an explanatory model or derivation of some phenomenon. Some realists are happy to endorse the face-value upshot of the explanatory criterion: since it turns out that mathematics and other abstracta are explanatorily indispensable for empirical science, our realist commitments should simply be extended to those theoretical posits (e.g. Baker 2009; Colyvan 2013; Psillos 2010; Psillos 2011b). This perspective has an affinity with Quinean confirmational holism, and the so-called 'explanatory indispensability argument' is indeed naturally viewed as an enhancement of the Quine-Putnam indispensability argument for mathematical realism.7 Others find the extension of realist commitments to abstract objects less palatable, for various reasons. (In particular, there are well-known epistemological worries about abstract objects and mathematical truths.) These philosophers with nominalist sympathies face the challenge of driving a principled epistemological wedge between (1) those aspects of our best explanations that are in some sense ontologically committing or reality-latching and (2) those aspects that are merely instrumental – by virtue of playing a merely representational role, say – albeit indispensably so. It can be difficult to draw this demarcation without begging some critical questions. For instance, from the perspective of IBE, taken in the abstract, it is unmotivated to restrict the explanatory criterion of reality to just causally explanatory features of the world – reducing the criterion to the so-called 'Eleatic principle' – since arguably there's little to motivate the idea that all our best explanations in empirical science operate in causal terms (cf. Reutlinger and Saatsi 2018). It is also difficult to cleanly separate, in formal terms, the nominalistic content of scientific theories so as to make sense of their 'nominalistic adequacy' (see, e.g., Psillos 2010). Nevertheless, I think it would be hasty to accept the face-value upshot of the explanatory criterion. To echo the 'embarrassment' that Newton-Smith decried years ago (see section 1), it is hard to see how we could adjudicate the substantial claims regarding the relation between explanations and ontology without relying on a sufficiently well-developed account of explanation.
There is something odd in wholeheartedly advocating the explanatory criterion of reality without backing it up with an account of explanation. If anything, this seems to get things the wrong way round. In order to figure out what an IBE commits us to, we presumably first should get a handle on explanatory goodness (in relation to the explanation at stake). And in order to get a handle on explanatory goodness, we presumably need a prior handle on what explaining amounts to. This natural logic suggests that what we need, first of all, is a sufficiently well-worked-out theory of explanation. In the light of this it is notable that the extensive debate on explanations' realist commitment has been conducted largely in the absence of (substantial engagement with) well-developed accounts of explanation. And while Newton-Smith perhaps rightly complained about the state of philosophy of explanation at the turn of the millennium, a good deal of progress has been made since, providing us with a much better grasp on many relevant aspects of explanation.

The realism debate has yet to make full contact with this growing body of work, but recently philosophers have started paying increasing attention to different theories of explanation in assessing the ontological weight of explanatory indispensability (e.g. Baron 2016; Saatsi 2016). Although the jury is still out on the exact realist commitments that genuine explanations should be taken to have, paying due attention to philosophy of explanation certainly casts serious doubt on the simple idea that explanations wear their ontological commitments on their sleeves. Thinking about realism in the context of philosophy of explanation raises various interesting possibilities. Consider, for example, the fact that the explanatory goodness of competing explanations is assessed by us (human beings). The potential relevance of this fact can be understood in the light of the distinction between pragmatic versus ontic aspects of explanation. Explanations and explanatory reasoning can involve pragmatic elements due to the fact that they are communicated, assessed, and understood by us, cognitively finite beings (see e.g. Potochnik 2018).8 Problematic, purely pragmatic accounts of explanation aside, even if one looks at broadly ontic accounts of explanations – according to which explanations work by identifying explanatory worldly facts – there can be aspects of explanations that play a merely pragmatic or instrumental, as opposed to ontologically committing, factive role. Take, for instance, currently prominent modal accounts of explanation that identify explanations with information about difference-makers (Strevens 2008) or systematic counterfactual variance of the explanandum on the explanans (Woodward 2003). Arguably some aspects of explanations can be indispensable, not by virtue of figuring as a difference-maker or an explanans variable but for providing us with such information in a way that is usable and cognitively salient (Saatsi 2016; Baron 2016). Consider, for example, Euler's graph-theoretic explanation of why old Königsberg's seven bridges could not be traversed without crossing a bridge twice. This is a popular example of a 'distinctly mathematical,' non-causal explanation of an empirical phenomenon (Pincock 2007). Appealing to the graph-theoretic notion of (non-)Eulerian graph is arguably indispensable for providing the best explanation of the explanandum at stake. Analysing the explanation in counterfactual terms suggests, however, that what is doing the explanatory 'heavy lifting' is the information about how a physical explanandum variable, the bridges' traversability, modally depends (in a particular way) on a physical explanans variable, the number of bridges connecting any two land masses (Jansson and Saatsi forthcoming; see the illustration below). The use of mathematics is indispensable for providing the best, most general explanation of the phenomenon, but mathematics is naturally viewed as playing a merely representational role of capturing the explanatory dependencies between different physical features of the world. In this way, examining questions of explanatory indispensability in the context of modal accounts of explanation can yield more fine-grained distinctions between different kinds of explanatory roles, thus offering ways of securing the basic realist intuition that genuine explanations are factive, while at the same time admitting that not everything indispensably involved in an explanation is automatically ontologically committing.
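To make the graph-theoretic point concrete, here is a minimal worked illustration (the vertex labels A–D are merely illustrative, introduced only for this sketch). Euler's result is that a connected multigraph can be traversed crossing each edge exactly once only if at most two of its vertices have odd degree. Taking old Königsberg's four land masses as vertices (A for the central island; B, C, and D for the remaining land masses) and its seven bridges as edges, the vertex degrees are

\[ \deg(A) = 5, \qquad \deg(B) = \deg(C) = \deg(D) = 3. \]

All four vertices thus have odd degree, so no traversal crossing each bridge exactly once is possible. Adding or removing a bridge between two land masses changes these degrees, and with them the traversability: this is just the modal dependence of a physical explanandum variable on a physical explanans variable described above.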
Issues concerning explanatory indispensability also spill over to other debates in the metaphysics of science. Certain arguments for ontological anti-reductionism (or emergence), for instance, involve appeal to (seemingly) indispensable use of infinite limits in our best explanations of, for example, phase transitions or universality, allegedly indicating the reality of some emergent, explanatory feature of reality (e.g. Morrison 2012). Reductionists have responded by either denying the indispensability or claiming that the explanatory role of infinite limits can be understood in instrumental terms in the light of a closer analysis of the nature of the explanation at stake (see e.g. Saatsi and Reutlinger forthcoming).

5 Conclusion

It seems undeniable that explanation is an important feature of science. Realist philosophers of science often see it as the central feature, claiming that scientists often use (implicitly or explicitly) inference to the best explanation as their methodological maxim. Arguing for realism often turns on justifying this kind of method as reliable and truth-conducive. I have argued that there is much to be gained in thinking about these arguments in closer relation to philosophy of explanation. A better grasp on what scientific explanations are, and what makes one explanation better than another, can throw further light on the viability (or otherwise) of various realist arguments, the nature and reliability of explanatory reasoning, and explanations' realist commitments.

Notes

1 See Lipton (2004) for a classic exposition and defence of this line of thought and Douven (2017) for a nice review.
2 Indeed, some have argued that one need not be a realist at all in order to capture the explanatory achievements of quantum physics in modal terms. See Healey (2017).
3 This idea goes back to the truth-condition of Hempel's DN model of explanation. See e.g. Hempel (1962).
4 See Magnus (2003) for useful discussion of the limitations of Kitcher's strategy.
5 See Psillos (2011c) for useful discussion of the role of broader explanatory considerations in the Perrin case.
6 Psillos finds the roots of this criterion in Sellars (1963) and also notes its affinity to the well-known indispensability arguments of Quine and Putnam.
7 While Quine and Putnam emphasised confirmational holism and the role of mathematics for maximising theoretical virtues, the 'enhanced' indispensability argument focuses more specifically on their role in maximising explanatory virtues.
8 We can recognise the significance of such elements without drawing a distinction between pragmatic versus ontic accounts of explanation, which is too black and white.

References

Achinstein, P. (2002) "Is There a Valid Experimental Argument for Scientific Realism?," The Journal of Philosophy 99(9), 470–495.
——— (2013) Evidence and Method: Scientific Strategies of Isaac Newton and James Clerk Maxwell, Oxford: Oxford University Press.
Baker, A. (2009) "Mathematical Explanation in Science," The British Journal for the Philosophy of Science 60(3), 611–633.
Barnes, E. (2002) "The Miraculous Choice Argument for Realism," Philosophical Studies 111, 97–120.
Baron, S. (2016) "The Explanatory Dispensability of Idealizations," Synthese 193(2), 365–386.
Bokulich, A. (2016) "Fiction as a Vehicle for Truth: Moving Beyond the Ontic Conception," The Monist 99, 260–279.
Boyd, R. (1981) "Scientific Realism and Naturalistic Epistemology," in P. D. Asquith and T. Nickles (eds.), PSA 1980, vol. 2, East Lansing, MI: Philosophy of Science Association, pp. 613–662.
Cartwright, N. (1983) How the Laws of Physics Lie, Oxford: Oxford University Press.
Colyvan, M. (2013) "Road Work Ahead: Heavy Machinery on the Easy Road," Mind 121, 1031–1046.
Craver, C. (2007) Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience, Oxford: Oxford University Press.
Darwin, C. (1962) The Origin of Species (first published 1859), New York: Collier.
Dawid, R. (2013) String Theory and the Scientific Method, Cambridge: Cambridge University Press.
Dellsén, F. (2017) "Reactionary Responses to the Bad Lot Objection," Studies in History and Philosophy of Science 61, 32–40.
Douven, I. (2017) "Abduction," in E. N. Zalta (ed.), Stanford Encyclopedia of Philosophy (Summer 2017 ed.).
Duhem, P. (1906) The Aim and Structure of Physical Theory (1991 ed.), Princeton: Princeton University Press.
Field, H. (1989) Realism, Mathematics, and Modality, Oxford: Blackwell.

Fraassen, B. C. van (1980) The Scientific Image, New York: Oxford University Press.
——— (1989) Laws and Symmetry, Oxford: Clarendon Press.
——— (2004) The Empirical Stance, New Haven, CT: Yale University Press.
Frost-Arnold, G. (2010) "The No-Miracles Argument for Realism: Inference to an Unacceptable Explanation," Philosophy of Science 77, 35–58.
Hacking, I. (1982) "Experimentation and Scientific Realism," Philosophical Topics 13, 71–87.
Healey, R. (2017) The Quantum Revolution in Philosophy, New York: Oxford University Press.
Hempel, C. (1962) "Explanation in Science and History," in R. G. Colodny (ed.), Frontiers of Science and Philosophy, London: Allen and Unwin, pp. 9–19.
Jansson, L. and Saatsi, J. (forthcoming) "Explanatory Abstractions," British Journal for the Philosophy of Science.
Kitcher, P. (2001) "Real Realism: The Galilean Strategy," The Philosophical Review 110, 151–197.
Kitcher, P. and Salmon, W. (1987) "Van Fraassen on Explanation," Journal of Philosophy 84, 315–330.
Lipton, P. (1993) "Is the Best Good Enough?" Proceedings of the Aristotelian Society 93, 89–104.
——— (2004) Inference to the Best Explanation (2nd ed.), London: Routledge.
Magnus, P. D. (2003) "Success, Truth and the Galilean Strategy," The British Journal for the Philosophy of Science 54(3), 465–474.
McCain, K. (2016) The Nature of Scientific Knowledge: An Explanatory Approach, New York: Springer.
Morrison, M. (2012) "Emergent Physics and Micro-Ontology," Philosophy of Science 79, 141–166.
Musgrave, A. (1988) "The Ultimate Argument for Scientific Realism," in R. Nola (ed.), Relativism and Realism in Science, Dordrecht: Kluwer, pp. 229–252.
Newton, I. ([1687] 1999) The Principia (I. B. Cohen and A. Whitman, trans.), Berkeley: University of California Press.
Newton-Smith, W. H. (2000) "Explanation," in W. H. Newton-Smith (ed.), A Companion to Philosophy of Science, Oxford: Blackwell, pp. 127–133.
Pierson, R. and Reiner, R. (2008) "Explanatory Warrant for Scientific Realism," Synthese 161, 271–282.
Pincock, C. (2007) "A Role for Mathematics in the Physical Sciences," Noûs 41, 253–275.
Potochnik, A. (2018) "Eight Other Questions about Explanation," in A. Reutlinger and J. Saatsi (eds.), Explanation Beyond Causation: Philosophical Perspectives on Non-Causal Explanations, Oxford: Oxford University Press.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
——— (2005) "Scientific Realism and Metaphysics," Ratio 18(4), 385–404.
——— (2009) Knowing the Structure of Nature: Essays on Realism and Explanation, New York: Springer.
——— (2010) "Scientific Realism: Between Platonism and Nominalism," Philosophy of Science 77(5), 947–958.
——— (2011a) "Choosing the Realist Framework," Synthese 190, 301–316.
——— (2011b) "Living with the Abstract: Realism and Models," Synthese 180(1), 3–17.
——— (2011c) "Moving Molecules above the Scientific Horizon: On Perrin's Case for Realism," Journal for General Philosophy of Science 42, 339–363.
——— (2011d) "The Scope and Limits of the No Miracles Argument," in D. G. Dieks, W. Hartmann, W. T. Uebel and M. Weber (eds.), Explanation, Prediction, and Confirmation, New York: Springer, pp. 23–35.
Putnam, H. (1978) Meaning and the Moral Sciences (Routledge Revivals), London: Routledge & Kegan Paul.
Reiner, R. and Pierson, R. (1995) "Hacking's Experimental Realism: An Untenable Middle Ground," Philosophy of Science 62, 60–69.
Reutlinger, A. and Saatsi, J.
(2018) Explanation Beyond Causation: Philosophical Perspectives on Non-Causal Explanations, Oxford: Oxford University Press.
Saatsi, J. (2009) "Form-Driven vs. Content-Driven Arguments for Realism," in P. D. Magnus and J. Busch (eds.), New Waves in Philosophy of Science, London: Palgrave, pp. 8–28.
——— (2015) "Replacing Recipe Realism," Synthese 194, 3233–3244.
——— (2016) "Explanation and Explanationism in Science and Metaphysics," in Z. Yudell and M. Slater (eds.), Metaphysics and the Philosophy of Science: New Essays, New York: Oxford University Press, pp. 163–191.
Saatsi, J. and Reutlinger, A. (forthcoming) "Taking Reductionism to the Limit: How to Rebut the Antireductionist Argument from Infinite Limits," Philosophy of Science.
Salmon, W. (1984) Scientific Explanation and the Causal Structure of the World, Princeton: Princeton University Press.
Sellars, W. (1963) Science, Perception and Reality (reissued 1991), Atascadero, CA: Ridgeview Publishing Company.

Stanford, P. K. (2006) Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, Oxford: Oxford University Press. Strevens, M. (2008) Depth: An Account of Scientific Explanation, Cambridge, MA: Harvard University Press. Whewell, W. (1858) Novum Organon Renovatum, London: John W. Parker. Woodward, J. (2003) Making Things Happen: A Causal Theory of Explanation, Oxford: Oxford University Press.

17 UNCONCEIVED ALTERNATIVES AND THE STRATEGY OF HISTORICAL OSTENSION

P. Kyle Stanford

1 Historicist challenges to scientific realism

Traditionally, scientific realists have taken the perfectly sensible view that the dramatic empirical and practical achievements of our best scientific theories show that those theories must be at least probably, approximately true. Perhaps surprisingly, however, the most persistent and influential challenges to this realist view have been motivated by exploring the historical record of scientific inquiry itself. Long before philosophers like Thomas Kuhn (1962) or Larry Laudan (1981) challenged scientific realism on such historicist grounds, for example, similar concerns were articulated by scientists like Henri Poincaré and Pierre Duhem around the turn of the 20th century. As Poincaré formulates a famous version of an argument now called the Pessimistic Induction:

The ephemeral nature of scientific theories takes by surprise the man of the world. Their brief period of prosperity ended, he sees them abandoned one after the other; he sees ruins piled upon ruins; he predicts that the theories in fashion today will in a short time succumb in their turn, and he concludes that they are absolutely in vain. This is what he calls the bankruptcy of science. ([1905] 1952: 160)

Although such critics of scientific realism certainly recognize the almost literally incredible empirical and practical achievements of many of our best contemporary theories, they nonetheless point out that many past scientific theories enjoyed at least the same general sorts of empirical and practical achievements even though they were ultimately discovered to be fundamentally mistaken rather than even approximately true. Although contemporary scientific theories typically enjoy even more impressive empirical and practical success than their theoretical predecessors, such historicist critics argue that these successes differ only in degree and not in kind from the impressive achievements of earlier theories like Newtonian mechanics, the caloric theory of heat, the wave theory of light, and Weismann's theory of the germ-plasm in empirical prediction (including predictions of novel or previously unknown phenomena) and in guiding our practical engagement with various parts of the natural world more generally. That is, they point out that the scientific theories of the present day are simply the latest in a long succession of quite fundamentally distinct theories whose impressive achievements led their proponents to think that they must be at least approximately true, and they hold that even the most successful scientific theories of the present day will ultimately find themselves overturned or abandoned in the course of further inquiry in just the same way that we ourselves have found such earlier theoretical orthodoxies abandoned and overturned (see P. Vickers, "Historical challenges to realism," ch. 4 of this volume). In earlier work (2001, 2006), I argued that the most serious challenge facing scientific realism arises from a somewhat more subtle historical pattern than the one described by the Pessimistic Induction. More specifically, I suggested that the most troubling historical pattern for scientific realism is the repeated and demonstrable failure of scientists and scientific communities throughout the historical record to even conceive of scientifically serious alternatives to the theories they embraced on the strength of a given body of evidence, that were nonetheless also well-confirmed by that same body of evidence. This "Problem of Unconceived Alternatives" arises because the inferential engine of much fundamental theoretical science is essentially eliminative in character, proposing and testing candidate hypotheses and then selecting from among them the one best supported by the evidence as that in which we should invest our credence. Such eliminative inferences can and do guide us to the truth in a wide range of contexts: when someone is murdered at an English garden party, Sherlock Holmes always brings the culprit to justice. But the reliability of such inferences requires that particular epistemic conditions be satisfied. One such condition is that we must have all of the likely or plausible alternative possibilities in view before proceeding to embrace the winner of such an eliminative competition as the truth of the matter: after all, Holmes would be a poor detective if he simply ignored most of the likely suspects in the crime! And we are in a similar predicament if our eliminative inferences in science simply neglect well-confirmed and scientifically serious alternative theoretical possibilities. Or as Duhem once framed this same worry in a scientific context ([1914] 1954: 189–190): "Shall we ever dare to assert that no other hypothesis is imaginable? Light may be a swarm of projectiles, or it may be a vibratory motion whose waves are propagated in a medium; is it forbidden to be anything else at all?" (See also D. Tulodziecki, "Underdetermination," ch. 5 of this volume.) Although this crucial epistemic condition is typically satisfied in everyday contexts of eliminative inference and even a good many scientific ones, I suggested that the historical record of scientific inquiry itself gives us compelling empirical grounds for doubting whether it is typically satisfied when we formulate and test what we might call "foundational" scientific theories concerning the constitution of entities, the underlying causal mechanisms, and the dynamical principles at work in otherwise inaccessible domains of nature. (For further discussion of the scope of the problem and the range of theories to which it applies, see Stanford 2006: Ch. 2, 2010, 2011.)
What that historical record reveals instead, I argued, is a robust pattern of theoretical succession in which such foundational theories are accepted on the strength of a given body of evidence, only to be ultimately superseded by alternatives that were also well-confirmed by that evidence but nonetheless simply remained unconceived at the time of the earlier theory's acceptance. To take an obvious example, Newtonian Mechanics did not dominate the physical science of its day because it was judged to be better confirmed than Special or General Relativity, but instead because this competing foundational theory, also well-confirmed by the very same mechanical phenomena for which Newton offered such a convincing account, simply had not yet been conceived of in the first place (nor, therefore, had any attempt to discriminate evidentially between the two even been considered). Although Newtonians given the chance to consider it might well have doubted whether General Relativity was really a scientifically serious alternative theoretical possibility, the fact that it was serious in the only sense of the term that matters here is demonstrated by the fact that it ultimately was taken seriously and even accepted by an actual later scientific community. This same pattern is exhibited by Cartesian mechanics with respect to both of these successors, Aristotelian mechanics with respect to its successors, and likewise in many other sciences (at least once we allow that two theories can both be reasonably well confirmed by the available evidence without explaining all and only the same phenomena). Indeed, the pervasiveness of this pattern of theoretical succession across the history of scientific inquiry generally would seem to give us every reason to believe that there are well-confirmed, fundamentally distinct alternatives to our own foundational scientific theories that nonetheless remain unconceived by contemporary scientists, even if we cannot specify or describe them further. Thus, to just the extent that support for scientific theories involves comparisons among theoretical alternatives or relies on eliminative forms of evidence such as abduction or inference to the best explanation, I suggested that this historical pattern gives us compelling reasons to doubt that those theories simply tell us how things stand in otherwise inaccessible domains of nature. The challenge thus posed to scientific realism has been the subject of ongoing discussion and criticism, including that found in Chakravartty (2008), Fine (2008), Godfrey-Smith (2008), Magnus (2010), Ruhmkorff (2011), Devitt (2011), Lyons (2013), and Egg (2016), among others. I do not think, however, that the term "antirealism" helpfully describes any picture of scientific inquiry that these reflections and this historical evidence should lead us to embrace. The name alone seems simply to suggest some kind of general opposition to reality itself, or perhaps instead the apparently absurd view that the fact that a scientific theory has survived our most sophisticated and rigorous efforts to subject it to empirical testing somehow constitutes an affirmative reason to disbelieve that theory. Perhaps most importantly, however, such "antirealism" is defined or characterized simply by its opposition to scientific realism, and so offers little guidance concerning any positive conception of the best scientific theories of the present day or of the scientific enterprise itself that we should embrace instead. Indeed, antirealism lumps together thinkers and views opposed to scientific realism on a wide variety of different grounds and with similarly diverse positive conceptions of what we should believe in its place. Rather than making us "antirealists," I suggest that the historical evidence supporting both the Pessimistic Induction and the Problem of Unconceived Alternatives should lead us to embrace a view I have elsewhere (2015a) called "Uniformitarianism," in parallel with the great battle in 19th-century geology between Catastrophists and Uniformitarians concerning the causes and pattern of changes to the Earth on a geological timescale. In that scientific debate, Uniformitarians, like Charles Lyell, famously argued that the Earth's broad topographic and geographic features had been produced by the activity of natural causes like floods, volcanoes, and earthquakes operating over extremely long periods of time at roughly the same magnitudes and frequencies as they do in the present.
Catastrophists like Georges Cuvier held instead that in the past such events had occurred with substantially greater frequency and/or magnitude than those we observe in the present day, on the order of the difference between a contemporary flood and the Great Flood that (according to the Christian Bible) Noah escaped by building the Ark. Uniformitarians thus held that the fundamental processes of geological and geographic transformation were still ongoing and that over time the topography and geography of the Earth would be transformed by contemporary natural causes just as profoundly as it had been in earlier eras. By contrast, Catastrophists held that truly profound and fundamental changes to the Earth were now confined to the distant past and that the basic topography and geography of the Earth could be further modified only in fairly restricted or marginal ways by contemporary natural causes. In the case of the scientific realism dispute, Catastrophists are those who believe that truly profound and fundamental revisions to our scientific understanding of the world are now largely confined to the past, and that the future course of scientific inquiry will not include further theoretical changes or revolutions of the sort by which Newtonian mechanics, caloric thermodynamics, Daltonian atomism, and Weismann's theory of the germ plasm ultimately came to be profoundly modified, qualified, amended, or simply abandoned and replaced. Uniformitarians in the scientific realism debate, however, hold that the future of the scientific enterprise will be characterized by revolutions and transformations every bit as profound and consequential as those we find throughout its history. In both cases Uniformitarians see us as being in the midst of ongoing processes of fundamental transformation or revolution that Catastrophists insist are now confined to the past.

2 The Strategy of Historical Ostension

Faced with such sweeping generalities, it is natural to want to hear more details concerning the picture of scientific inquiry that such Uniformitarians advocate in place of scientific realism itself. We might legitimately wonder, for example, just what Uniformitarians mean in denying that we are entitled to regard many or most contemporary foundational scientific theories as even “approximately true.” (See also G. Schurz, “Truthlikeness and approximate truth” and I. Niiniluoto, “Scientific progress,” chs. 11 and 15 of this volume.) Likewise, if we are told that such contemporary scientific theories will ultimately be replaced by alternatives that are “fundamentally distinct” from them, we will want to know what degree or variety of distinctness qualifies as fundamental. And perhaps most importantly of all, when Uniformitarians suggest that the future of science will continue to be characterized by “profound and fundamental” upheavals, revolutions, and transformations in our conceptions of nature, we might quite reasonably ask for a more precise and detailed specification of the vision of our scientific future that they mean to advocate thereby.

By longstanding philosophical tradition, those who make such demands usually take themselves to be asking for careful definitions of terms and expressions that remain unclear or vague, or perhaps even a list of necessary and sufficient conditions for a theory being “not even approximately true,” for one theory to count as “fundamentally distinct” from another, or for a transformation or revolution in scientific thought to be truly “profound and fundamental.” I want to suggest that we can effectively address such concerns, if not in quite the way those who pose them have in mind, by means of what I will call the Strategy of Historical Ostension. This strategy does not seek to provide definitions or necessary and sufficient conditions, but instead to flesh out our understanding of Uniformitarian commitments using a range of exemplars drawn from the historical record of scientific inquiry itself. That is, those who use the Strategy of Historical Ostension seek to enrich our grasp of what constitutes a “profound transformation” in our theoretical understanding of some otherwise inaccessible natural domain, or what it is for a theory to be “fundamentally distinct” from its historical predecessors, not by reformulating such expressions in different terms but instead by indicating a range of actual historical cases in which they hold the relevant properties or relationships to be exemplified.

Historicist critics of scientific realism, for example, often suggest that many successful past theories have turned out to be not even approximately true. It seems natural to ask for a more detailed characterization of the notion of approximate truth at work here, or even a list of features a theory must have (or lack) in order to count as “not even approximately true.” But the Strategy of Historical Ostension suggests that we can more effectively characterize what this description is intended to capture by instead pointing out a range of concrete historical examples of successful past theories that contemporary theoretical orthodoxy judges to be “not even approximately true.” Moreover, this strategy will ultimately put us in a position to articulate the central commitments of Uniformitarianism without relying on the contentious language or judgments of approximate truth in any case.


For example, the study of thermodynamics in the late 18th through the mid-19th centuries was dominated by the caloric theory of heat, which achieved systematic and impressive empirical successes by positing the existence of caloric, a material substance or “subtle fluid” which tended to move from warmer bodies to colder bodies and whose concentration or accumulation in any given body was ultimately responsible for its temperature. But further evidence ultimately led later theorists to categorically reject the existence of any such material fluid substance, and later thermodynamic theories instead came to regard the temperature of an object simply as the mean kinetic energy of the molecules making it up. Although the historicist critic of scientific realism will not herself judge the truth of the caloric theory simply by comparing its central claims to those of contemporary theoretical orthodoxy, she nonetheless remains free to point out that assuming the truth of that contemporary theoretical orthodoxy would allow the caloric theory to illustrate one (though certainly not the only) important way that even a highly successful theory might fail to be even approximately true in its claims about otherwise inaccessible domains of nature. And the historicist critic of scientific realism holds that many successful contemporary theories stand at a considerable remove from the truth about nature (whatever that may be) in one or more of the same ways illustrated by differences between earlier theories and contemporary theoretical orthodoxy in this and other salient historical cases.

In a similar fashion, the phlogiston theory of chemistry explained a wide range of chemical reactions as involving transfers or exchanges of “phlogiston,” a material substance that is released during reactions like combustion and that the surrounding atmosphere has a limited capacity to absorb. As in the case of the caloric fluid, the positing of phlogiston and the properties attributed to it were central to the theory’s empirical and practical achievements, and once again further evidence led subsequent inquirers to deny that anything like this postulated substance actually exists. Here too, historicist critics of scientific realism are free to argue that the relationship between the phlogiston theory and contemporary theoretical orthodoxy serves as a model for one or more way(s) in which at least some parts of that same theoretical orthodoxy are related to the truth about nature itself.

A somewhat different way in which the failure of a central existential commitment can motivate the judgment that a theory is not even approximately true is exemplified by the 19th-century wave theory of light. This theory achieved remarkable empirical successes (including predictions of surprising and previously unknown phenomena) by proposing that light consists of waves transmitted through the “luminiferous ether,” a material substance surrounding all objects, filling any space between them, and serving as a mechanical medium in which such light waves could propagate. The ether was thus not taken to be the direct cause of the phenomena predicted or explained by the wave theory, but it was nonetheless seen as essential for generating the theory’s successful predictions and explanations.
Indeed, this case is particularly striking in that 19th-century scientists like James Clerk Maxwell explicitly considered whether light (and, later, electromagnetism more generally) could be transmitted as a wave without any such mechanical or substantival medium and explicitly denied that this possibility was even intelligible or coherent (see Stanford 2006: Chs. 6–7). Nonetheless, later theorists ultimately came to reject the existence of any such mechanical medium or material substance through which light and other electromagnetic waves were transmitted (for reasons including the famous null result of the Michelson-Morley experiment), and contemporary theorizing regarding electromagnetism recognizes nothing like the substantival ether postulated by 19th-century wave theorists. This case thus illustrates a distinct way in which the failure of a central existential commitment can motivate the judgment that a theory is not even approximately true, and the fate of the 19th-century wave theory of light will represent Uniformitarian expectations regarding the fates of at least some highly successful contemporary scientific theories in the course of further inquiry as well.


For reasons I discuss elsewhere (Stanford 2015b) historicist critics of scientific realism have often lavished disproportionate attention on cases like these in which central existential commitments of past successful theories have ultimately been abandoned, but such examples by no means exhaust the heterogeneous range of grounds on which successful past theories have been ultimately judged by contemporary lights to be not even approximately true. We might well hesitate to say, for example, that Dalton’s commitment to the existence of atoms has simply been abandoned in subsequent theorizing: after all, even the term ‘atom’ itself has been retained and applied to entities in our own physical theorizing. But the theoretical account of atoms with which Dalton achieved considerable empirical and practical successes ascribed a wide range of central properties to atoms that have come to be rejected in subsequent physical and chemical theorizing. The atoms Dalton posited were solid and rigid homogeneous spheres, constituting the indivisible smallest particles of matter and lacking any further subatomic components (indeed, the very meaning of the Greek term “atomos,” which Dalton rescued from obscurity, is “indivisible” or “uncuttable”). Dalton’s atoms also had invariable properties, were unconvertible into one another, and were incapable of being either destroyed or created. These profound differences between the central theoretical posit of Dalton’s theory and any entity recognized by contemporary atomic theorizing might well lead us to deny that Dalton’s atomic theory is even “approximately” true. Thus, even when the central entities to which a given theory is existentially committed are (arguably) preserved in its successors, sufficiently radical or substantial changes in the central properties ascribed to those entities can motivate the judgment that the original theory was not even approximately true. Even more importantly, as we will see in what follows, whether or not these substantial changes in the properties ascribed to atoms incline us to deny that Dalton’s atomism was even approximately true, Uniformitarians remain free to treat the relationship between Dalton’s atomism and contemporary theoretical orthodoxy as a model of the relationship between some parts of that very orthodoxy and the truth about nature itself, and to use the fate of Dalton’s atomism as a model for the fate of some contemporary scientific theories as well.

The case of Newtonian mechanics illustrates yet another way in which theories might fail to be approximately true by the lights of current theoretical orthodoxy. If we look for them we can certainly find examples of existential commitments of this theory that have been rejected by subsequent physical theorizing (e.g. absolute space and time, gravitational forces), but the most compelling reason to deny that Newtonian mechanics is even “approximately” true is the fact that the central causal and explanatory mechanisms to which it appeals have simply been replaced in contemporary theorizing by alternatives that are radically different in character: the role of gravitational forces in Newton’s mechanics, for example, is simply subsumed by that of the curvature of spacetime in relativistic mechanics.
That is, in relativistic mechanics gravitational motion is not the result of forces that arise between massive bodies and attract them to one another, but instead simply represents the motion of objects along spacetime trajectories that have been curved by the masses of those and other bodies. Thus, even without the failure of a central existential commitment, the relationship between Newton’s classical mechanics and contemporary relativistic mechanics would serve to illustrate or exemplify a quite distinct way in which a scientific theory might be extremely successful but not even approximately true.

Similarly, in the case of Weismann’s influential theory of the germ-plasm, we can, if we try, find central existential commitments of that theory (e.g. “biophors”) straightforwardly rejected in subsequent theorizing, but these are neither the only nor even the most important reasons for regarding Weismann’s theory as not even approximately true. A central further reason is the theory’s explicit and forceful insistence on the doctrine of germinal specificity, holding that over the course of an organism’s development the germinal materials inherited from its parent(s) become progressively divided into heterogeneous components until each cell of the adult organism holds only the specific small part of the germ-plasm needed to direct the development of that very cell. Weismann thus sought to explain a variety of ontogenetic phenomena (including the salient fact that different cells develop very different characteristics), and he even predicted the existence of a previously unknown phenomenon (the reduction division of meiosis) by positing a central causal and explanatory mechanism that has been simply abandoned altogether in the course of subsequent theorizing. In the now-familiar way, Uniformitarians will regard some contemporary scientific theories as standing in the same relationship to the truth about nature in which Weismann’s theory of the germ-plasm stands to contemporary theoretical orthodoxy concerning inheritance and development, and they will regard a similar fate to that of Weismann’s theory as awaiting at least some foundational contemporary scientific theories as well.

The historical cases briefly described earlier illustrate a variety of grounds on which even very successful past scientific theories have been judged fundamentally mistaken rather than even approximately true by the lights of contemporary theoretical orthodoxy: the failure of central existential commitments, profound changes in the central properties ascribed to the entities posited by a theory, replacement of a theory’s central causal and explanatory mechanisms by radically distinct alternatives, and the collapse of central theoretical claims or commitments. These examples certainly do not exhaust the wide variety of heterogeneous grounds on which such judgments have been or might be made, but each example nonetheless helps to fill in a more detailed and concrete sense of just what historicist critics of scientific realism mean in claiming that many of our own foundational scientific theories (or those theories supported exclusively or primarily by eliminative forms of evidence like abduction or “inference to the best explanation”) are not even approximately true. Just as importantly, however, these examples illustrate how the historicist critic of scientific realism can use the Strategy of Historical Ostension to formulate her challenge in a way that need not employ the contentious idiom or controversial judgments of approximate truth in the first place. That is, even if a sufficiently stubborn scientific realist insists that there is an available sense of “approximately true” that would encompass rather than exclude one or more of the examples of successful past theories we have considered, this linguistic legislation will not stand in the way of using the Strategy of Historical Ostension to characterize the Uniformitarian vision of the past, present, and future of scientific inquiry itself. The Uniformitarian’s fundamental claim is that future scientists and scientific communities will ultimately come to regard our own foundational or eliminatively supported scientific theories as misguided, misleading, or mistaken in one or more of the same ways that contemporary theoretical orthodoxy now regards such illustrious predecessors as the caloric theory of heat, the phlogiston theory of chemistry, the wave theory of light, Dalton’s atomism, Newtonian mechanics, Weismann’s theory of the germ-plasm, and further historical examples besides.
Thus, once the Strategy of Historical Ostension has picked out the historical exemplars it will employ, it is free to kick away the ladder of “approximate truth” altogether, and instead let these exemplars themselves directly illustrate the wide variety of ways in which Uniformitarians expect many of the central claims of our foundational and/or eliminatively supported theories to themselves become modified, qualified, abandoned, or rejected in the course of further inquiry. And of course, the examples considered above also help fill out our sense of just which contemporary scientific theories should be counted as foundational and of what it looks like for a theory to be supported primarily or exclusively by eliminative forms of evidence. Thus, the Strategy of Historical Ostension has the signal virtue of letting the historical evidence supporting Uniformitarianism itself simultaneously provide the models, exemplars, paradigms, guides, or templates through which we inform our grasp of the sorts of further changes in our theoretical conception of nature that the Uniformitarian suggests are still forthcoming in the course of further scientific inquiry.

Of course, such an ostensive strategy might also appeal to those more modest scientific realists who concede that further profound theoretical and conceptual revolutions are still forthcoming in our scientific picture of the world, but who insist that successful past theories nonetheless typically latched onto or partially captured the truth about the world in some way. Such realists might well point to one or more of these or other historical examples to illustrate what they mean to convey with this language of latching onto or partially capturing the truth about an underlying domain of nature. But to constitute a defence of realism, such a strategy would have to put us in a position to reliably project the particular parts, elements, or features of our own scientific theories that will be preserved indefinitely throughout the course of all further theoretical and conceptual revolutions and towards which we are therefore entitled to adopt a realist attitude. Otherwise, these restricted or qualified forms of scientific realism will simply collapse into the vague reassurance that surely something from any sufficiently successful scientific theory will be preserved or reflected in its successors, and this is not a claim that Uniformitarians or historicist critics of scientific realism were ever concerned to resist. And indeed, those who have pursued this strategy often hopefully suggest that perhaps the very same parts, elements, or features of sufficiently successful scientific theories (such as their “structural” claims [Worrall 1989] or their “working posits” [Kitcher 1993]) are invariably or at least reliably preserved in their successors (see also P. Vickers, “Historical challenges to realism”, ch. 4 of this volume). I argue against this suggestion in detail elsewhere (2006: ch. 7), but here it may suffice simply to point out that even the set of historical examples we have considered so far should leave us skeptical of the idea that any particular aspect or element of sufficiently successful scientific theories is reliably preserved in their successors or that the continuities between successful scientific theories and their successors are systematically predictable in advance. Thus, the heterogeneity and unpredictability revealed by the historical record in the continuities and discontinuities of successful past theories with their successors is an integral part of the picture of the past, present, and future of scientific inquiry that the Uniformitarian uses the Strategy of Historical Ostension to capture or convey, but the same heterogeneity and unpredictability represents a serious obstacle to using that strategy to support even restricted or qualified forms of scientific realism itself.

Hopefully it is by now also easy to see how we can deploy the Strategy of Historical Ostension to address other questions we recognized as legitimate and pressing for Uniformitarians. When a Uniformitarian claims that the further theories we ultimately come to embrace will be “fundamentally distinct” from contemporary theoretical orthodoxy and that the future of science will be characterized by upheavals, transformations, and revolutions “every bit as profound and fundamental” as those we find throughout its history, the Strategy of Historical Ostension once again allows us to specify the import of these claims with considerably greater precision.
Her claim is that the even more powerful and impressive theoretical alternatives that ultimately come to replace many contemporary scientific theories will differ from them in ways that are just as profound and significant as those in which the caloric theory differed from its theoretical successors, and/or the phlogiston theory did from later chemical theories, and/or the wave theory of light did from subsequent conceptions of light and electromagnetism, and/or Newton’s mechanics did from its relativistic successors, and/or Dalton’s own atomism did from later theoretical accounts of atoms, and/or Weismann’s theory of the germ-plasm did from later theories of inheritance and development, and so on across a wide range of further historical examples as well. Each exemplar we add serves to flesh out more completely and explicitly just what the Uniformitarian really means to assert when she insists on such fundamental continuity between the past, present, and future of scientific inquiry itself. And once again, the Strategy of Historical Ostension allows us to use the very evidence we might cite in support of Uniformitarianism to inform or articulate the Uniformitarian claims that the theoretical successors of our own foundational and/or eliminatively supported scientific theories will be “fundamentally distinct” from them, and that the future of scientific inquiry will be characterized by revolutions, transformations, and upheavals in our conception of nature that are “every bit as profound and fundamental” as those we find scattered so widely, reliably, and ubiquitously throughout its past.

3 Instruments and miracles

Even if we do use the Strategy of Historical Ostension to fill out the Uniformitarian picture of the past, present, and future of scientific inquiry in this way, however, a further question remains that some scientific realists might well regard as the most pressing of all. If Uniformitarians deny that even the most successful contemporary foundational and/or eliminatively supported scientific theories are even approximately true, won’t they be forced to regard it as an inexplicable miracle that those same theories manage to achieve such impressive empirical and practical successes when and where they do? More generally, having rejected the realist’s view of the status and character of the most successful scientific theories of the present day, what competing positive conception of them will Uniformitarians offer in its place?

This is an important challenge, and no case against scientific realism can be compelling without a convincing answer to it. I have elsewhere (2006: ch. 8) suggested that the most promising avenue to pursue is one with a long and distinguished historical pedigree of its own: the so-called “instrumentalist” view that even our best scientific theories should be regarded simply as powerful cognitive tools or instruments that we use to guide our predictions, interventions, and other forms of practical engagement with the world in useful and productive ways (see D. Rowbottom, “Instrumentalism,” ch. 7 of this volume). Some historically influential versions of such instrumentalism have gone so far as to deny that our best scientific theories even make claims or assertions about otherwise inaccessible domains of nature at all, holding instead that they are simply “inference tickets” allowing us to infer some observable states of affairs from others. More recently, Bas van Fraassen, in his influential Constructive Empiricism, has argued that we should not presume that there is always (or ever) a convincing explanation of why or how our best scientific theories manage to successfully guide our practical engagement with the world where and when they do (see O. Bueno, “Empiricism,” ch. 8 of this volume). Instead, he insists “that the observable phenomena exhibit these regularities, because of which they fit the theory, is merely a brute fact, and may or may not have an explanation in terms of unobservable facts ‘behind the phenomena’” (1980: 24).

But Uniformitarians need not (and should not) embrace such radically counterintuitive versions of the idea that our foundational theories are “merely” powerful cognitive tools or instruments. Instead, they are free to acknowledge that scientific theories do indeed make claims about how things stand in otherwise inaccessible domains of nature, and even to insist that when theories reliably enjoy empirical and practical success this is almost certain to be in virtue of some systematic relationship between those claims and how things stand in the otherwise inaccessible domains of nature they seek to describe.
In fact, Uniformitarians will suggest that the heterogeneous variety of relationships that hold between successful past theories and contemporary theoretical orthodoxy – the same relationships to which the realist herself appeals in explaining the empirical and practical successes of those past theories – once again serves to illustrate the broad and heterogeneous sorts of relationships that they take to hold between various contemporary scientific theories and the truth about nature in virtue of which those contemporary theories are able to enjoy the dramatic practical and empirical successes that they do.

Realists will allow, of course, that the best contemporary scientific theories are indeed powerful cognitive tools or instruments, but they insist that we are justified in the further claim or belief that they are successful because their claims about otherwise inaccessible domains of nature are at least approximately true, or because they are true in some particular and identifiable respect (such as attributing a particular “structure” to nature or in the “working posits” to which they appeal) towards which we are therefore justified in taking a realist attitude. By contrast, Uniformitarians argue that the hard-won lesson of the historical record is that our theories need not be approximately true in order to guide our pragmatic interactions with nature successfully in these ways, nor even accurate in any particular respect, element, or feature (as illustrated by the diverse range of such features, elements, and respects in which successful scientific theories of the past have ultimately proved to be mistaken) in order to do so. Consequently, what we might now call Uniformitarian Instrumentalists claim that the most we ourselves are in a position to justifiably infer, conclude, or believe regarding even the most successful foundational and/or eliminatively supported scientific theories of our own day is that they are powerful cognitive instruments or tools that guide our practical engagement with nature itself in ways that are often useful, successful, and productive.

A number of thinkers have complained, however, that closer scrutiny reveals this instrumentalist view to be either incoherent or not genuinely distinct from the realist view it seeks to supplant. Howard Stein (1989) has argued, for instance, that once the full range of “instrumental” uses of a theory is recognized, including its utility in making further predictions regarding how things stand in otherwise inaccessible domains of nature and providing clues in the search for even better hypotheses, there is simply no remaining difference that makes a difference between such an instrumentalist attitude and realism itself. In a similar fashion, responding to van Fraassen’s contention that even non-realists should be “immersed in” or “animated by” contemporary scientific theories, Simon Blackburn argues,

The problem is that there is simply no difference between, for example, on the one hand being animated by the kinetic theory of gases, confidently expecting events to fall out in the light of its predictions, using it as a point of reference in predicting and controlling the future, and on the other hand believing that gases are composed of moving molecules. There is no difference between being animated by a theory according to which there once existed living trilobites and believing that there once existed living trilobites. . . . What can we do but disdain the fake modesty: “I don’t really believe in trilobites; it is just that I structure all my thoughts about the fossil record by accepting that they existed?” (Blackburn 2002: 127–128)

These thinkers argue, then, that there is no room for any real difference between believing that our best scientific theories are “merely” powerful cognitive tools or instruments for guiding our pragmatic interaction with nature successfully and simply believing that those same theories are instead approximately true: when pressed for further details concerning what this supposedly distinctive instrumentalist attitude really amounts to, they suggest, it simply collapses back into the realist’s own view of the matter.

Here again we find ourselves faced with a demand for a more detailed characterization of a central feature of the view we wish to consider, ideally one that will make clear how this instrumentalist view differs from the realist’s own, and once again I suggest that an ostensive strategy can best help us satisfy the challenge, though not in quite the same way that it did earlier. In this case, I suggest that we can further characterize the relevant distinctively instrumentalist attitude not by considering the ultimate fate of past successful theories, but instead by considering the realist’s own attitude towards those same theories. Let us see how.


The idea that a theory might serve as a useful guide to prediction, intervention, and other forms of practical engagement with nature without itself being even approximately true is not, I suggest, a fanciful creation of instrumentalist fever dreams, but is instead simply the view that the scientific realist herself adopts towards many successful scientific theories of the past. That is, it represents the very same attitude that the realist takes towards theories like Newtonian mechanics, the caloric theory, or Weismann’s theory of the germ-plasm. The realist accepts that Newtonian mechanics is an extremely powerful and useful conceptual apparatus for making accurate predictions concerning the motions of objects across a wide range of speeds and sizes, for instance, but she nonetheless rejects that same theory’s account of the otherwise inaccessible domains of nature about which it theorizes in just the way that the Uniformitarian ultimately expects many contemporary theories to be similarly rejected in the course of further inquiry. The difference between an instrumentalist and realist attitude towards a theory is therefore just as robust and substantial as the difference between the realist’s own attitude towards Newtonian mechanics and her attitude towards its relativistic successors, and the instrumentalist is no more forced to regard the impressive empirical and practical successes of contemporary theories as “miracles” than the realist is forced to so regard the impressive empirical and practical achievements of the many successful past scientific theories that she herself now regards as fundamentally mistaken concerning the constitution of nature.

Of course, an important difference remains, in that the realist believes she can provide detailed substantive explanations of just how particular past theories managed to be so successful where and when they were. That is, because she takes contemporary theoretical orthodoxy to be at least “approximately true,” the realist believes she can articulate the details of the systematic relationships between the claims of past successful theories about particular domains of nature and the truth of the matter about those domains in virtue of which (she claims) past theories were able to enjoy the particular successes that they did. The sensible instrumentalist agrees that any scientific theory enjoying reliable empirical and practical successes must be systematically related in some way to the truth about the underlying natural domain it seeks to describe (just as contemporary theoretical orthodoxy so regards successful past theories like caloric thermodynamics or the wave theory of light), but because she does not take herself to be in possession of the truth about the relevant domain of nature, she cannot identify just what that systematic relationship is in any particular case. She will allow that particular successful contemporary theories are almost certainly systematically related to the truth about nature in one or more of the wide range of ways that theories like Newtonian mechanics, caloric thermodynamics, phlogistic chemistry, the wave theory of light, Dalton’s atomism, Weismann’s theory of the germ-plasm, and/or other salient historical exemplars are systematically related to contemporary theoretical orthodoxy, but nonetheless hold that we ourselves are simply not in a position to specify the details of that relationship in any particular case with any justifiable confidence.
Nonetheless, she is no more forced to see the successes of contemporary theories that she regards as not even approximately true as miraculous than the realist must so regard the successes of the many highly successful theories of the past that the realist herself now judges to be not even approximately true.

4 Conclusion

Throughout this chapter, I have sought to provide not only a sense of the challenge to scientific realism presented by the Problem of Unconceived Alternatives, but also (and perhaps more importantly) of the alternative conceptions of both the past, present, and future of science and the character and status of successful contemporary scientific theories that it recommends to us in place of realism itself. I suggest that historically motivated challenges to scientific realism like the Problem of Unconceived Alternatives and the Pessimistic Induction should not lead us to embrace an amorphous and ill-defined “antirealism” about scientific theories but instead what I have called Uniformitarian Instrumentalism. The central tenet of such a position is that the future of science will continue to be marked by theoretical transformations and revolutions every bit as frequent, profound, and unpredictable as those that have characterized its past, and the Strategy of Historical Ostension allowed us to articulate the content of this and related claims with considerably greater precision and in more concrete detail. As a consequence, such Uniformitarian Instrumentalists argue that we should see successful contemporary foundational and/or eliminatively supported theories in just the same way that the scientific realist herself sees successful theories of the past: as useful cognitive instruments or tools that guide our predictions, interventions, and other forms of practical engagement with nature in a wide variety of extremely productive and successful ways. Unlike the instrumentalist, the realist may think she can actually specify in detail the heterogeneous relationships between successful past theories and the truth of the matter concerning the underlying domains of nature they seek to describe in particular cases, but the Uniformitarian Instrumentalist’s commitment to the existence of some such systematic relationship or other in the case of any theory enjoying reliable and impressive forms of empirical and practical success is enough to ensure that such Uniformitarian Instrumentalists need not see the successes of contemporary theories as any more miraculous or inexplicable than the realist regards those of the most successful theories that have been ultimately abandoned throughout the history of science itself.

References

Blackburn, S. (2002) “Realism: Deconstructing the Debate,” Ratio 15, 111–133.
Chakravartty, A. (2008) “What You Don’t Know Can’t Hurt You: Realism and the Unconceived,” Philosophical Studies 137, 149–158.
Devitt, M. (2011) “Are Unconceived Alternatives a Problem for Scientific Realism?” Journal for General Philosophy of Science 42, 285–293.
Duhem, P. ([1914] 1954) The Aim and Structure of Physical Theory (2nd ed.) (P. P. Wiener, trans.), originally published as La Théorie Physique: Son Objet et sa Structure (Paris: Marcel Rivière & Cie.), Princeton, NJ: Princeton University Press.
Egg, M. (2016) “Expanding Our Grasp: Causal Knowledge and the Problem of Unconceived Alternatives,” British Journal for the Philosophy of Science 67, 115–141.
Fine, A. (2008) “Epistemic Instrumentalism, Exceeding Our Grasp,” Philosophical Studies 137, 135–139.
Fraassen, B. van (1980) The Scientific Image, Oxford: Oxford University Press.
Godfrey-Smith, P. (2008) “Recurrent Transient Underdetermination and the Glass Half Full,” Philosophical Studies 137, 141–148.
Kitcher, P. (1993) The Advancement of Science, New York: Oxford University Press.
Kuhn, T. (1962) The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
Laudan, L. (1981) “A Confutation of Convergent Realism,” Philosophy of Science 48, 19–49.
Lyons, T. (2013) “A Historically Informed Modus Ponens against Scientific Realism: Articulation, Critique, and Restoration,” International Studies in the Philosophy of Science 27, 369–392.
Magnus, P. D. (2010) “Inductions, Red Herrings, and the Best Explanation for the Mixed Record of Science,” British Journal for the Philosophy of Science 61, 803–819.
Poincaré, H. ([1905] 1952) Science and Hypothesis, reprint of first English translation, originally published as La Science et L’Hypothèse (Paris, 1902), New York: Dover.
Ruhmkorff, S. (2011) “Some Difficulties for the Problem of Unconceived Alternatives,” Philosophy of Science 78, 875–886.
Stanford, P. K. (2001) “Refusing the Devil’s Bargain: What Kind of Underdetermination Should We Take Seriously?” Philosophy of Science 68, S1–S12.
——— (2006) Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, New York: Oxford University Press.
——— (2010) “Getting Real: The Hypothesis of Organic Fossil Origins,” Modern Schoolman 87, 219–243.

——— (2011) “Damn the Consequences: Projective Evidence and the Heterogeneity of Scientific Confirmation,” Philosophy of Science 78, 887–899.
——— (2015a) “Catastrophism, Uniformitarianism, and a Realism Debate That Makes a Difference,” Philosophy of Science 82, 867–878.
——— (2015b) “‘Atoms Exist Is Probably True’ and Other Facts That Should Not Comfort Scientific Realists,” Journal of Philosophy 112, 397–416.
Stein, H. (1989) “Yes, but . . . Some Skeptical Remarks on Realism and Antirealism,” Dialectica 43, 47–65.
Worrall, J. (1989) “Structural Realism: The Best of Both Worlds?” Dialectica 43, 99–124.

18 REALISM, ANTIREALISM, EPISTEMIC STANCES, AND VOLUNTARISM

Anjan Chakravartty

1 Introduction

Debates between different kinds of scientific realists and antirealists are longstanding and show every sign of continuing. In this chapter I examine one explanation of their longevity: lurking beneath various forms of realism and antirealism are conflicting commitments which (1) sustain these positions and (2) are immune to refutation. These deeper commitments are to different epistemic stances. I consider the nature of philosophical stances generally and, more specifically, of epistemic stances in relation to the sciences. I investigate the question of how stances are evaluated and adopted and elaborate on the most telling reason for their immunity, namely, a form of voluntarism regarding their adoption.

2 Why is the realism debate perennial?

If one is willing to grant the slight anachronism required to read at least some past philosophical concerns as earlier versions of ours today, it is hard to resist the conclusion that disputes very much like contemporary ones between scientific realists and antirealists have been a philosophical preoccupation since antiquity. Of course, this is not an uncommon occurrence in philosophy and no doubt, in each case, there are reasons for the persistence of certain questions and disagreements about their answers. The debate about scientific realism and antirealism (I will drop the adjective “scientific” henceforth, taking it as implied) is a case in point. It is arguable that the longevity of this debate and, indeed, its likely central place for years to come in the philosophy of science, has an explanation. One explanation is the subject of this chapter. In what follows, I will explore the idea that underlying the different forms of realism and antirealism available to us are deeper commitments which sustain these positions and which are not, in the final analysis, susceptible to any lasting philosophical defeat.

Before getting underway, let me clarify the use of the terms “realism” and “antirealism” as intended here. Many discussions of these generic positions address the virtues and vices of specific variants of them.1 For present purposes, however, a more generic understanding will suffice, and one may take the morals of the following discussion to apply, mutatis mutandis, to these more specific variants. Generically, realism is often characterized in terms of three commitments. The first is ontological, to the existence of a mind-independent world. The second is semantic, to the literal interpretation of scientific theories and models as descriptions of this world. The third is epistemological, to the idea that our best such literally construed descriptions give us knowledge – a commitment which is sometimes expressed by saying that our best theories are true or approximately true (see G. Schurz, “Truthlikeness and approximate truth,” ch. 11 of this volume) or that their central terms refer to aspects of the world. Thus described, antirealism is any denial of realism, and since there are several components of the latter, each of which can be denied, there are several ways of subscribing to the former.

With this generic understanding of realism and antirealism in hand, let us confront the question of why debates between the advocates of these positions seem perennial. Wylie (1986) offers a diagnosis which appeals to the idea that the debate had, even at that stage, developed in such a way that certain metaphilosophical differences between the opposing sides were apparent: “Realism and antirealism thus confront one another as preferred and essentially incommensurable modes of philosophical practice” (p. 287); “the locus of debate about realism shifts from differences over the details of rival theories about science to a more comprehensive meta-philosophical disagreement about the principles that govern the formulation and evaluation of these theories” (p. 291). The invocation of the notion of incommensurability here is instructive. In keeping with the introduction of this notion by Feyerabend and Kuhn (see H. Sankey, “Kuhn, relativism and realism,” ch. 6 of this volume) to describe theorizing about the world in different time periods or from the perspectives of different theories, it suggests that realism and antirealism have “no common measure”. Though there is no doubt that certain kinds of comparisons between realism and antirealism are possible, whether or not these positions are ultimately compelling is something that must be judged, to a large extent, from within separate contexts established by certain prior assumptions.

Now, granted, this is all rather abstract. What does it mean? There are three key points that I would like to extract from Wylie’s diagnosis and elaborate in my own way in what follows. The first is that realists and antirealists have conflicting metaphilosophical commitments. Here the ground-level commitments are to realism or antirealism; the metaphilosophical commitments are at a “different level”, in a sense to be explained. The second point is that these commitments concern how the ground-level positions, realism and antirealism, are formulated as views about scientific knowledge and how they are evaluated. The third and last point is that there is a sense in which these formulations and evaluations are relative to, or internal to, different contexts within which they make sense and which are themselves robust and incapable of being dismantled by philosophical argument alone. In the rest of this chapter my aim is to elaborate these three points.

The attempt to spell out these commitments which underlie realism and antirealism is, in effect, an attempt to illuminate what a number of proponents and critics of realism have already alluded to but only in passing. For instance, Worrall (2000) suggests that the foremost consideration in favour of realism, the so-called miracle argument, is not an argument per se (see also K. Brad Wray, “Success of science as a motivation for realism,” ch. 3 of this volume).
The miracle argument is typically described as the thought that the best and perhaps only non-miraculous explanation for the amazing successes of our best science is that it is true or close to the truth or that it has genuinely latched onto entities and processes in the world. In Worrall’s (p. 230) estimation, “[a]ny such argument that was valid would inevitably involve either as explicit or implicit premise some assumption that prejudged the issue.” Likewise, the most popular argument of recent times against realism, the pessimistic induction, which leverages the successes of false theories from the past to raise doubts about currently successful theories, is not an argument but rather “a plausibility consideration, which in turn sets a challenge” (p. 234) for the realist to articulate a response (see also P. Vickers, “Historical challenges to realism,” ch. 4 of this volume).


In section 3 I will identify the prior commitments of realists and antirealists which operate at a “different level” as epistemic stances, and consider the nature of these stances and how they sustain views such as realism and antirealism. In section 4 I will explore the question of how epistemic stances are evaluated, which will clarify the senses in which they are incommensurable and ultimately indefeasible. The thought that different and conflicting epistemic stances are things that can be adopted by different agents is explored in section 5. This variable adoption suggests a kind of voluntarism, or voluntary choice, but what it means to “choose” at the level of stances is unclear, and I will consider this issue briefly in conclusion.

3 Epistemic stances

What, then, is a stance? Let us begin with the notion of a stance simpliciter and then refine it in a way that is more specifically germane to the issue of realism and antirealism. The notion of a stance broadly speaking reflects the meaning of the term in everyday use, where it is applied to both the physical posture or position a person may adopt and a person’s attitude or standpoint regarding a subject of interest. In the philosophical context this translates into the notion of taking a specific position or standpoint regarding a philosophical question or problematic. It is in this spirit that Boucher (2014: 2319) describes the idea of stances as “perspectives, or ways of seeing”, as “particular orientations on the world, or ways of seeing facts”. Admittedly, described at this level of generality, a stance would seem to include any sort of philosophical view, but the idea of a “way of seeing facts” does hint at something that is (potentially) more specific and limiting and which provides a clue as to how the term is used in debates concerning realism: in this more specific arena, a stance is not a view as such, which might be exhausted by some factual proposition or propositions, but something that indicates how one thinks or should think about factual propositions themselves.

One take on this more specific notion of a philosophical stance is suggested by Rowbottom and Bueno (2011a: 9), who describe stances as having three components: a mode of engagement; a style of reasoning; and propositional attitudes (such as desires and hopes and even beliefs, though the last of these is not usually constitutive of or necessary to stances). By “mode of engagement” they intend something similar to Boucher’s idea of a perspective or an orientation. A mode, they say, is “a way of approaching the world (or a given situation)”. For example, one’s mode of engagement may be comparatively open-minded or dogmatic or more proactive in investigating the relevant facts as opposed to more laissez-faire. As examples of styles of reasoning they suggest tools used in reasoning about a given subject matter including “patterns of inference, diagrams, templates, and other useful devices”, which brings to mind Kuhn’s ([1962] 1970) concept of “exemplars”, which refers to standard methods, techniques, or problem-solving devices employed to answer certain kinds of (in Kuhn’s case, scientific) questions (see H. Sankey, “Kuhn, relativism and realism”, ch. 6 of this volume). Elaborating the idea of a stance in these more specific terms certainly helps to differentiate it from the idea of simply holding a philosophical position.

In what follows I will operate with a conception of stances that, while compatible with both of the accounts just mentioned, is more committed than the first and less committed than the second to specific ingredients. This conception, a version of which is suggested by Teller (2004), associates stances with strategies for generating factual beliefs – “policies” regarding which principles and methodologies are appropriate or inappropriate to producing knowledge. A stance thus incorporates an epistemic policy or collection of policies, reflective of and including various attitudes and commitments regarding how one learns things about the world. While compatible with the more general notion of a worldly perspective or orientation, the idea of an epistemic policy is focused specifically on questions of generating knowledge, making it directly relevant to the epistemological dimension of realism and antirealism mentioned in section 2. A broader notion of stance may well incorporate further attitudes, commitments, and so on, but in the interests of thinking about realism and antirealism, a focus on epistemic policies seems appropriate here. Notions such as modes of engagement and styles of reasoning are simply further examples of the kinds of attitudes, commitments, principles, and methodologies that might come together to comprise an epistemic stance. Henceforth, it is this conception of stances as incorporating epistemic policies that I will have in mind.

In order to understand how stances are germane to debates between realists and antirealists, it is important to appreciate that they are not themselves identical to specific claims regarding the nature of the world. Though at any given time or in any one philosopher’s hands, a particular stance may be employed so as to arrive at a set of beliefs, thus indicating a connection between the stance and beliefs at issue, stances themselves are not properly associated with any one set of propositions about the world. In holding a stance one may generate certain beliefs, but the former is distinct from the latter, which may change even while the stance itself remains fixed. As an illustration of the policy-like components of stances, consider how realists often take the perceived explanatory power of hypotheses about unobservable (by the unaided senses) entities or processes (subatomic particles, molecules of DNA, depression, market forces, etc.) seriously in determining what to believe, and how explanatory considerations typically carry little or no weight with empiricist-minded antirealists who believe only facts about observable things. Here we have different attitudes, emerging as policy differences, regarding how to weigh the evidential import of explanatory power where unobservables are concerned, and the policies are not themselves truth apt. A stance is not the sort of thing that is believed; it is adopted. It is or includes a guide or set of guidelines for acting, epistemologically.

Let us consider some examples of stances that are especially relevant to the context of debates about realism. I will outline, briefly, three stances that have dominated this context historically and in recent discussion. Though these stances are opposed in various ways, it would be a mistake to think of them as necessarily mutually exclusive in the life of any one epistemic agent, in that it is possible that one and the same person might be drawn towards one or another in application to different domains of scientific theorizing and practice. Furthermore, as we will see, the terms in which these stances are described likely admit of variable interpretation, and the commitments of those holding one and the same stance may vary accordingly. As a consequence of these subtleties, any attempt to describe a stance is inevitably something of a caricature; the attitudes and commitments of actual people are surely more complex combinations and interpretations of the relevant principles and methods than any simple description communicates. We must start somewhere, however, and leaving aside the complexities of individual applications, let me now provide a stripped-down exposition of some important contrasts between these stances.

The name of what I will label the “metaphysical stance” has an important historical connotation.
On the standard distillation of logical empiricism, which dominated the philosophy of science for much of the twentieth century and the ultimate downfall of which was instrumental in stoking the fires of realism, any aspiration to describe things that go beyond mere observation (again, with the unaided senses) is metaphysical (see also M. Neuber, “Realism and logical empiricism,” ch. 1 of this volume). On such a view, to interpret claims about “unobservable entities” like protons and enzymes, not as elliptical for observable phenomena but literally as describing strictly unobservable things, is to engage in metaphysics. This conception of metaphysics seems dated today, where the subject is more typically associated exclusively with the preoccupations of professional metaphysicians such as possible worlds, mereology, and the notion of grounding, as opposed to any of the commonly invoked, unobservable subject matters of science (see S. French, “Realism and metaphysics,” ch. 31 of this volume). For present purposes, however, let us think of all of these subject matters as falling under the remit of the metaphysical stance – one might, respecting changes in the use of the term “metaphysics” historically, simply think of its application to the scientific context as indicating a lesser degree or kind of metaphysical theorizing than is found in the seminar rooms of contemporary metaphysicians. The core epistemic policies associated with the metaphysical stance are as follows:

M1. Accept demands for explanation in terms of things underlying the observable.
M2. Attempt to answer these demands by theorizing about the unobservable.

Typically, realists are committed to policies like M1 and M2. Many realists hold that it is part of the mission of science to give explanations of what we observe in everyday life and in scientific inquiry in terms of deeper, underlying facts, and that such explanations should be interpreted literally as revealing the underlying nature of the world (see J. Saatsi, “Realism and the limits of explanatory reasoning”, ch. 16 of this volume). In contrast, many antirealists are motivated by some form of empiricism, and van Fraassen, who deserves much credit for the resurrection of empiricism in the philosophy of science after the demise of logical empiricism, characterizes (2002) his own brand of empiricism not as a factual proposition, such as “the only source of knowledge of the world is experience”, but rather as a stance.2 Reflecting his conception of empiricism as oppositional to historical traditions of metaphysical theorizing, let me characterize what I will call the “empiricist stance” in terms of policies that are diametrically opposed to the metaphysical stance:

E1. Reject demands for explanation in terms of things underlying the observable.
E2. A fortiori, reject attempts to answer these demands by theorizing about the unobservable.

The intended effect of these policies is to dissuade one from attempting to explain phenomena that are (one might feel) plain enough in experience by appealing to yet further, underlying things – things which are, according to the empiricist, typically less well understood and often mysterious or even occult. The implementation of these policies is in keeping with common empiricist refrains throughout the history of philosophy to the effect that things that are evident in experience do not, in fact, require explanation, and that to the extent that the sciences produce such explanations they should not be interpreted literally and then believed. This more austere epistemology contrasts with the less austere inclinations of those attracted to the metaphysical stance, who view empiricist attitudes towards underlying explanation as robbing us of genuine understandings of the phenomena, which (arguably) one should seek.

Even as disputes between those holding versions of the metaphysical and empiricist stances play out in discussions of realism, a third stance is often found loitering in the neighborhood, associated with philosophers who are suspicious of the very nature of these debates to begin with. This is what I will label the “deflationary stance”, and one may express its core policies with respect to scientific knowledge as follows:

D1. Reject realist and empiricist attempts to describe the epistemic upshot of scientific practice.
D2. A fortiori, reject the analyses of truth and reference in terms of which it is often explicated.

In contrast to the empiricist stance, it is difficult to identify one very specific tradition of philosophy that exclusively or primarily exemplifies the deflationary stance. That said, many who espouse one or another form of pragmatism do appear to make such commitments. For instance, Blackburn (2002) argues that if one adopts a particular (what one might reasonably describe as pragmatic) understanding of certain claims that realists and antirealists commonly make, some of their supposed differences turn out to be not well posed, and the debate between them essentially collapses. This conclusion is difficult to comprehend, however, unless one reinterprets or deflates the contested claims of realists and antirealists in the sorts of ways that pragmatists are wont to do. For example, Blackburn takes issue with van Fraassen’s claim that one should not believe scientific theories to be true but merely accept them, which entails believing only that they correctly describe observable phenomena and a commitment to use them in practice. But the pragmatist sees no difference between belief and acceptance. For her, the meaning of a proposition just is its practical consequences for human experience, and the commitment to use a theory thus and so is the same in either case. This sort of contention is found in the writings of the very first authors in the modern tradition of pragmatism, such as Charles Sanders Peirce (1992) in “How to Make Our Ideas Clear” (published in 1878). Many points of dispute between realists and antirealists, including differences in epistemic commitment to claims about scientific entities and processes based on their observability, are effectively dissolved on this view. The result is a rejection of some familiar bones of realist–antirealist contention, and a sort of quietism about philosophical issues regarding which, one might contend, nothing sensible can be said.

This seeming combination of pragmatism and quietism is prominent in recent reflections on realism and antirealism by Fine ([1986] 1996: chs. 7 and 8), who recommends what he calls the “natural ontological attitude” (NOA). Fine’s (1998: 583) emphasis on taking a particular attitude toward scientific work and its output is highly suggestive of the notion of a stance:

NOA is . . . simply an attitude that one can take to science. The attitude is minimal, deflationary and expressly local. It is critically positive, looking carefully at particular scientific claims and procedures, and cautions us not to attach any general interpretive agenda to science. Thus NOA rejects positing goals for science as a whole, as realists and constructive empiricists do. NOA accepts “truth” as a semantic primitive, but rejects any general theories or interpretations of scientific truth . . .

At least part of NOA is preoccupied with considering science in a piecemeal sort of way as opposed to generalizing about scientific knowledge more broadly, but there is no obvious reason why realists and antirealists cannot consider science in the same way, and I will leave this particular issue aside here (for thoughts on “local” versus “global” arguments, see L. Henderson, “Global versus local arguments for realism”, ch. 12 of this volume). More relevant to our present concerns and in the spirit of deflationism, NOA offers to take science on its own terms without adding any philosophical interpretations in the form of epistemological or metaphysical diagnoses, including claims regarding which parts of scientific theories and models refer to things in the world, which parts are properly subject to belief or properly regarded as true, and so on.

I have described three stances – the metaphysical stance, the empiricist stance, and the deflationary stance – all of which are in the mix of, and inform, how philosophers engage with debates about realism, and all of which involve attitudes or commitments that are not strictly propositional in that they are not claims as such, but rather strategies or guidelines for how to think about knowledge in the scientific sphere. Each of these stances is a subject of controversy in its own right, and clearly, the kinds of views in the philosophy of science with which they are linked – realism, antirealism, denials of both realism and antirealism, and so on – are likewise contested. This raises some obvious questions: which stance is to be preferred, and how does one go about determining this so as to make a choice? Let us turn to these questions now.


4 Evaluating epistemic stances
It is not uncommon to find oneself holding a belief or adopting a stance without having thought about it much, if at all. The settings in which people are embedded – social, political, and even philosophical – are no doubt relevant to the kinds of beliefs and stances that one may absorb in the absence of careful reflection. Having explicitly placed a few stances relevant to realism and antirealism on the table, however, we have an opportunity to assess them critically in a thoroughly conscious and deliberate fashion. Van Fraassen (2002) suggests two criteria of assessment for stances generally, which I will transpose here as criteria of assessment for epistemic stances in particular. First, a stance should pass some reasonable test of rationality; adopting a stance should not be demonstrably problematic from an epistemological point of view. Second, the stance or stances adopted by a given person should reflect what that person values, again, from an epistemological point of view. Both of these criteria require some spelling out, and it is fair to say that while these criteria have not proven especially controversial in themselves, van Fraassen’s specific articulation of them has been a subject of considerable criticism (see endnote 2). As I will suggest, though, it is unclear whether the alternative articulations proposed by critical authors are, in fact, incompatible with the one they critique.

To begin with the first criterion, van Fraassen defines rationality in terms of the internal coherence of the attitudes, commitments, strategies, and policies comprising the relevant stance. As he (2004: 184) puts it, the “defining hallmark” of irrationality is “self-sabotage by one’s own lights”, by which he means that if one’s stance is such that it is likely to frustrate the achievement of the very epistemic aims one seeks to fulfill, there is clearly something irrational about adopting it. If it is one’s goal to learn about how segments of DNA are copied in the construction of molecules of RNA in living cells, clearly it would be counterproductive to adopt a policy according to which one forswears theorizing about unobservable entities, since neither segments of DNA nor RNA are observable with the naked eye. If there is no such mismatch between the stance one adopts and one’s epistemic project, the former qualifies as rational.

There are at least two interesting things to note about this conception of rationality. First, rationality here is something that can only be assessed in relation to an epistemic aim or set of aims and practices which together make up a project. Also, it is in principle “permissive”, in the sense that this conception does not stipulate that only one stance in a given domain of scientific (or other) investigation is rational; it is at least open to the possibility that more than one such stance could satisfy the condition of rationality.

A number of philosophers have balked at the idea that the rationality of stances could be quite this permissive.3 The most common complaint is simply that permissive rationality is not demanding enough, thus allowing problematic stances to count as rational. Alspector-Kelly (2012: 189), for example, alleges that this view of rationality “is so wildly permissive that it countenances as rational belief-sets that are obviously completely crazy, including belief-sets which completely disregard all empirical evidence”.
Relatedly, Psillos (2007: 158, 162) argues that a plausible conception of rationality must include a “principle of evidential support” according to which all relevant evidence is properly considered. Arguably, though, these sorts of concerns point not to a problem with permissive rationality per se so much as to the fact that “self-sabotage” is a highly under-specified notion. Charitably, one might unpack it in just the ways these authors suggest. For instance, given that disregarding relevant evidence runs a high risk of compromising any epistemic project, it is difficult to see how one could permit such neglect without making it correspondingly less likely that one will achieve one’s epistemic aims, which would then render one’s stance irrational even on the permissive conception. Paying due attention to relevant evidence thus seems implicit in the idea of “no self-sabotage”. This of course leaves open the important task of further specifying what this idea entails.


Let us turn now to the second criterion suggested for evaluating stances of interest to debates about realism (among others): the idea that any stance passing muster must be one that is appropriate to or in tune with the kinds of things that one values, epistemologically. If predictions regarding things in the world are an important or central feature of one’s assessment of scientific knowledge, one’s stance should reflect this. The same goes for scientific explanation or unification or any other goal whose realization would count as a feature of knowledge that is valued and thus sought in this domain. The notion that an epistemic stance should allow for and promote epistemological goods is hardly controversial all by itself. It becomes controversial, however, with the further suggestion that different agents may reasonably value different and even incompatible goods. Recall that the permissive account of rationality leaves open the possibility that different stances may qualify as rational. If different people have different values, epistemologically speaking, this would explain the continuing appeal of different and conflicting stances. And given that the metaphysical, empiricist, and deflationary stances outlined previously are all still very much in play in debates about realism, the concurrence of different and conflicting values here seems undeniable.

How should one view this sort of conflict? One possibility is that when stances conflict there is at most one choice that is the right choice, so that conflicts are properly viewed as the result of some confusion on the part of one or more interlocutors about which stance is the right one to adopt. Certainly, debates between realists and antirealists (and non-realists) generally convey unmistakable intentions to convince us that the parties with whom a given author is arguing are confused about something or other and that if only those interlocutors could be made to see clearly, they would adopt the correct stance in the context of scientific knowledge, whatever that may be. Conversely, the suggestion that there is no one correct stance for all epistemic agents – that values are not only found to vary between philosophers in practice but also that they are in principle or properly regarded as relative to individual judgment – urges the further suggestion that different stances may be appropriate to different people, namely, people with different and conflicting epistemological values.

The contention that different stances may be ultimately defensible is sometimes linked to the historically influential essay “The Will to Believe” by William James ([1897] 1956), in which he argues that there is no one, rationally obligatory way to make epistemic commitments. At one extreme, driven by the laudable goal of believing truths, one may end up believing far too much. At the other extreme, driven by the equally laudable goal of not believing falsehoods, one may end up believing far too little. The epistemic risks one takes in charting a path between these extremes, says James, are determined by an agent’s own assessment of how best to do so, in keeping with her values.
One might translate this thinking into the present context by noting that there are a number of factors that seem relevant to determining the epistemic policies that one adopts – does one seek an explanation of something, or is mere prediction sufficient or preferable?; what sorts of phenomena require an explanation in the first place?; what kinds of explanations are genuinely illuminating?; and so forth – but chief among them is the degree of risk that one is willing to accept in believing any given proposition. It is arguably difficult to imagine that there could be one correct answer to the question of precisely how much risk an individual should accept, let alone what degree of risk one should attribute to any given proposition. Inspired by the Jamesian picture, one may diagnose many disputes between those holding different positions in the realism debate as indicative of differences in the conflicting stances they adopt and subtract the usual judgment that at most one party to these disputes is, in fact, correct. When a certain kind of realist, for example, believes in certain properties of molecules described in biochemistry or of black holes described in cosmology, she does so on the basis of assessments of how telling the evidence is one way or another. A certain kind of empiricist may feel the force of this evidence less strongly and come to different assessments. A certain kind of deflationist may worry that in the very act of disagreeing about whether to attribute these properties of things to a mind-independent world, one operates with a conception of knowledge that exceeds the bounds of what creatures like ourselves, who have no god’s-eye perspective on reality, after all, can meaningfully discuss. To the extent that these differences are irresolvable, because they reflect different judgements about different kinds of risk which agents may be willing or unwilling to accept, the diagnosis of many disputes about realism in terms of stances suggests that there is likely more than one, ultimately defensible way to think about scientific knowledge.

There is a temptation here to read this conclusion too strongly. Contrary to what one might think at first glance, it does not entail the hopelessness or futility of debates about realism simpliciter. Rather, it indicates that at least some debates about realism – ones in which what is at stake is ultimately reducible to differences in the stances adopted – need to be engaged differently than they have been traditionally. One focus of a productive, worthwhile engagement is the question of whether the relevant stances are, in fact, internally coherent in the sense of avoiding evident risks of self-sabotage. Since any ultimately defensible stance must pass the test of rationality, the acceptability of the stances at work in debates about realism is properly subject to this kind of examination. On the flipside, disputes that are ultimately diagnosable in terms of different parties simply holding different but rational stances are, it would seem, futile after all. Certainly, this does not preclude attempts to persuade an interlocutor of the virtues of one’s own stance or the deficiencies of her otherwise rational stance from one’s own perspective.4 Furthermore, it is always open to individuals to change their minds about what they value, epistemologically, and thus about which stances they should adopt. But if in the face of such discussion, differences remain, there would seem to be no philosophical antidote to disagreement.

That there are limits to the extent to which debates about realism can be resolved should come as no surprise. Recall, we began with the aim of exploring what it could mean to say that realists and antirealists have conflicting metaphilosophical commitments which are relevant to understanding how views such as realism and antirealism are evaluated, and which are in some sense incommensurable with one another. Having identified these commitments as different stances which very naturally support different positions in debates about realism, the source of incommensurability and stalemate should now be clear. While all defensible stances must pass the test of rationality, there is no one answer to the question of what a responsible epistemic agent should value. Epistemological values are variable, and though an individual may change her mind about them, she need not. Indeed, commitments to values are often deeply entrenched, and whatever common ground there may be between the holders of different stances, it is apparently insufficient (if the history of philosophy is any guide) to serve as a strong enough ground from which to convince those having certain alternative casts of mind.
Add to this the inherent robustness of stances – they include strategies or policies for belief formation, but they are not themselves equivalent to whatever beliefs may result and change over time – and their resiliency is all the more unsurprising.

5 Voluntarism, about beliefs and stances
Let us grant that at least some different and conflicting stances are rationally permissible, and that these stances include ones such as the metaphysical, empiricist, and deflationary stances that factor into commitments to a number of contrasting positions in debates about realism. The extant variety of positions suggests that different parties to these debates have chosen different stances – ones that accord with their values. Rationally permissible but nonetheless variable adoption of opposing stances suggests a voluntary choice of some kind, a form of (epistemological) voluntarism. But what does it mean to say that people choose stances?

Voluntarism in epistemology is commonly described as the view that it is possible to exercise voluntary control over one’s doxastic states, such as belief, disbelief, and suspension of belief. Opposed to this doxastic voluntarism is the view that doxastic states are not a matter of choice but are rather, to put it somewhat dramatically, forced upon the agent who has them. Arguably, perceptual beliefs are an example of beliefs that are formed automatically through processes not subject to an agent’s control, and one might further argue that even beliefs that follow as a consequence of evaluating evidence for and against a proposition follow automatically via the application of one’s faculty of reason as opposed to being chosen as such. While debates between doxastic voluntarists and involuntarists about whether some or all beliefs are chosen are instructive, they are something of a distraction presently. When realists and antirealists end up with different doxastic states regarding propositions about electrons or attention deficit disorder, these states are “downstream” consequences of the fact that they have different stances. The (putative) voluntarism at issue here concerns something “upstream”: the prior attitudes and policies one has pertaining to whether certain kinds of scientific propositions are to be entertained as belief apt in the first place. Thus, the subject matter of voluntarism here is not belief, at least not in the first instance, but stances. This insulates the question of whether stance voluntarism is a compelling idea from the more common debate about doxastic voluntarism in epistemology, and even those who are wary of doxastic voluntarism are often ready to admit that attitudes and commitments regarding how beliefs are acquired (how evidence is procured and assessed, etc.) may well be subject to choice (e.g. Clarke 1986). Granted, if stance voluntarism is a compelling idea, there is a sense in which the doxastic states that an agent has as a result of implementing her stance-given strategies and policies (for generating beliefs) may be described as subject to a kind of choice, but note: this sense of choice is indirect as opposed to direct with respect to belief; it concerns not belief in the first instance but rather how agents come to believe, disbelieve, or suspend belief in the ways they do. With this distinction between direct and indirect senses of choice in hand, doxastic voluntarism and stance voluntarism are revealed as related but nonetheless separate and independent notions.

Focusing our attention squarely on the issue of stance voluntarism, then, let us return to the question of what it means to choose a stance. Arising from the previous section we have an injunction to choose stances that are rational and another to choose stances that reflect one’s values, epistemologically speaking. So at least part of the relevant conception of choice here involves examining one’s stance or prospective stances for signs of potential self-sabotage in the senses alluded to earlier. This aspect of choice has both a positive and a negative dimension: through considerations of rationality one admits candidate stances for possible adoption – those that pass the test of rationality – and rejects those that do not.
Another part of the conception of choice at issue, which similarly has both positive and negative dimensions, involves matching one’s values with candidate stances. Those that suitably match pass this test, and those that do not, fail it. Thus, minimally, one can say that a good choice of stance is one that incorporates these two aspects, a negligent choice is one that neglects one or both, and a poor choice is one that for reasons of either mistaken assessment or negligence results in the adoption of a stance that fails to pass the test of rationality or value matching.

It is arguable that neither of these aspects of choice amounts to the simple application of any explicit decision procedure or algorithm. Some strategies or policies for generating beliefs may seem obviously or self-evidently at odds with one’s epistemic goals, but others may be less transparent in this regard, and assessing whether the latter engender the risk of self-sabotage may well rely on unavoidably intuitive judgements. Furthermore, it is unclear how much we can say about what precisely is going on when one is attracted to stances which thereby, presumably (one would reasonably conclude this on the basis of such attraction), match up with one’s values. Is there an analysis to be had of what this kind of attraction amounts to? Perhaps this is a question for empirical psychology or phenomenology. When pressed, it is difficult for individual agents to say more than to narrate facts about how certain stances strike them as the right ones to adopt, about how some “feel right” or “make sense” to them on some deep level that is resistant to further articulation, and one might think that psychology or phenomenology is the sort of inquiry that is capable of shining a light into this otherwise black box (on the latter, see Ratcliffe 2011). Conversely, one might argue that further investigation of these kinds is unlikely to yield any epistemological insight. Learning about how the adoption of certain stances may be correlated with features of the mind that are of interest to psychology or how they are correlated with affective states (what kinds of explanation does one find deeply satisfying?; what kinds are frustratingly mysterious?) is interesting in its own right, but unless one thinks that epistemology is reducible to psychology or phenomenology, one may yet hope to understand more about the nature of choice in the context of stances.

Here, however, I think we simply run out of explanations. Consider: the heat is stifling and the ice cream parlour inviting. There are several flavors and all of them are permissible; the batches that didn’t work out were thrown out by a conscientious chef. Having limited your choice to recipes that seem to work, there is no further, agent-transcendent basis for deciding. Something pushes you toward vanilla. Is there anything further to say about the nature of your choice? This is a silly example and surely disanalogous to the choice between stances relevant to realism and antirealism in numerous ways. Nevertheless, when one says (as I would recommend) that pistachio and a stance leading to realism are the ways to go, there is a sense in which one is irreproachably correct.

Notes
1 Some influential accounts of antirealism are discussed in D. Rowbottom, “Instrumentalism”, and O. Bueno, “Empiricism”, chs. 7 and 8 of this volume. For some important accounts of realism, see I. Votsis, “Structural realism and its variants”, and M. Egg, “Entity realism”, chs. 9 and 10 of this volume.
2 A number of authors have alluded to the general idea of stances under different headings, but current interest owes a great deal to the provocation of van Fraassen (2002). For critical responses to this work and the conception of stances it describes, see Monton (2007) and Rowbottom and Bueno (2011b). For a study of how this theme relates to van Fraassen’s philosophy more generally, see Okruhlik (2014).
3 For a longer and more detailed list of concerns than I will give here (as well as an argument to the effect that they miss their mark), see Chakravartty (2015: 183–186).
4 Van Fraassen does this in connection with beliefs held by some scientific realists which he regards as overly metaphysical (cf. 2004: 89). For reflections on the efficacy of these and other attempts at persuasion, see Chakravartty (2007: 20–26) and (2011).

References
Alspector-Kelly, M. (2012) “Constructive Empiricism Revisited,” Review of P. Dicken 2010, Metascience 21, 187–191.
Blackburn, S. (2002) “Realism: Deconstructing the Debate,” Ratio 25, 111–133.
Boucher, S. (2014) “What Is a Philosophical Stance? Paradigms, Policies, and Perspectives,” Synthese 191, 2315–2332.
Chakravartty, A. (2007) A Metaphysics for Scientific Realism: Knowing the Unobservable, Cambridge: Cambridge University Press.
——— (2011) “A Puzzle about Voluntarism about Rational Epistemic Stances,” Synthese 178, 37–48.
——— (2015) “Suspension of Belief and Epistemologies of Science,” International Journal for the Study of Skepticism 5, 168–192.


Clarke, M. (1986) “Doxastic Voluntarism and Forced Belief,” Philosophical Studies 50, 39–51.
Fine, A. ([1986] 1996) The Shaky Game: Einstein, Realism and the Quantum Theory (2nd ed.), Chicago: University of Chicago Press.
——— (1998) “Scientific Realism and Antirealism,” in E. Craig (ed.), Routledge Encyclopedia of Philosophy, vol. 8, London: Routledge, pp. 581–584.
Fraassen, B. C. van (2002) The Empirical Stance, New Haven: Yale University Press.
——— (2004) “Replies to Discussion on The Empirical Stance,” Philosophical Studies 121, 171–192.
James, W. ([1897] 1956) The Will to Believe, and Other Essays in Popular Philosophy, New York: Dover, pp. 1–31.
Kuhn, T. S. ([1962] 1970) The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
Monton, B. (ed.) (2007) Images of Empiricism: Essays on Science and Stances, with a Reply from Bas C. van Fraassen, Oxford: Oxford University Press.
Okruhlik, K. (2014) “Bas van Fraassen’s Philosophy of Science and His Epistemic Voluntarism,” Philosophy Compass 9, 653–661.
Peirce, C. S. (1992) The Essential Peirce: Selected Philosophical Writings, Vol. 1 (1867–1893) (N. Houser and C. Kloesel, eds.), Bloomington: Indiana University Press.
Psillos, S. (2007) “Putting a Bridle on Irrationality: An Appraisal of van Fraassen’s New Epistemology,” in B. Monton (ed.), Images of Empiricism, Oxford: Oxford University Press, pp. 134–164.
Ratcliffe, M. (2011) “Stance, Feeling and Phenomenology,” Synthese 178, 121–130.
Rowbottom, D. P. and Bueno, O. (2011a) “How to Change It: Modes of Engagement, Rationality, and Stance Voluntarism,” Synthese 178, 7–17.
——— (2011b) “Stance and Rationality: A Perspective,” special issue of Synthese 178(1), 1–169.
Teller, P. (2004) “Discussion – What Is a Stance?” Philosophical Studies 121, 159–170.
Worrall, J. (2000) “Tracking Track Records II,” Aristotelian Society Supplementary Volume 74, 207–235.
Wylie, A. (1986) “Arguments for Scientific Realism: The Ascending Spiral,” American Philosophical Quarterly 23, 287–297.

19 MODELING AND REALISM
Strange bedfellows?

Arnon Levy

1 Introduction
Most characterizations of scientific realism involve one or both of the following elements. First, they include a claim about the aims of science – sometimes called an axiological claim – roughly, that science aims to discover truths about the world, including truths about unobservables (van Fraassen 1980; Lyons 2005). Second, they advance a claim about the achievements of science – sometimes referred to as an epistemic claim – roughly, that science often discovers said truths, providing us with knowledge of them (Boyd 1983; Psillos 1999; Chakravartty 2007).1 However, as recent philosophy of science has emphasized (Frigg and Hartmann 2013), when one pays attention to the actual practice of science one finds many cases in which scientists put forward models containing assumptions they know full well to fall short of the truth, sometimes strikingly so. Physicists study point masses sliding down frictionless planes despite knowing that such things do not (indeed cannot, physically) exist. Similarly, population biologists routinely concern themselves with infinitely large populations whose generations do not overlap, while economists analyze the behavior of fully rational agents, unhindered by cognitive biases and other limitations. How can such practices be squared with scientific realism?

Specifically, the ubiquity of modeling and associated practices such as idealization and abstraction raises two sorts of questions. First, a question about aims: how can science be aiming at truth while at the same time putting forward knowingly false and partial models? Does this show science has a different aim, or have we subtly mis-described the practice of modeling? Second, a question about achievements: if our best science consists of models, and if we know these models to be rife with idealizations and abstractions, then perhaps science cannot or at any rate does not deliver truths, its aims notwithstanding. Moreover, models often play a key role in prediction and explanation. This fact threatens to clash with an important argument in favor of realism – the No Miracles Argument (NMA). The NMA rests on the connection between predictive and explanatory success and truth – more on it in section 4 – a connection that seems to be contravened by predictions and explanations that knowingly employ falsehoods. Is the NMA to be dispensed with or revised? Or is it, rather, our thinking about models and their role in explanation and prediction that ought to be reconsidered?

Before delving into these issues we must first characterize some basic notions, especially modeling and idealization. This is done in the next section. In section 3 I look at whether modeling poses challenges to realist claims about the aims of science and in section 4 at potential challenges to science’s success in meeting those aims. In closing I discuss possible implications for the status of inference to the best explanation.

2 Models and modeling
There are various senses of ‘model’ in science and its philosophy. The focus of this chapter will be on issues connected with the practice of modeling, where this is understood as constituting a particular (albeit very common) strategy of representing and studying phenomena (Godfrey-Smith 2006; Weisberg 2007a).2 Modelers offer local and limited representations, often containing intentional deviations from completeness and accuracy and tailored to specific needs and contexts. Of particular significance in the present connection is idealization: the introduction of assumptions which are known not to reflect the realities of the phenomenon being modeled. Idealization is very common – examples can be multiplied beyond those given earlier (infinite populations, point particles, fully rational agents). Idealizations involve a deliberate mismatch between the model and the world – not error or oversight. It is useful to contrast idealization with abstraction. The latter involves lack of detail, i.e. incompleteness but no inherent distortion. A description that omits fine detail, settles for a range of possible values rather than a particular number, averages over a population and so forth is thereby abstract. A description that misrepresents – typically, depicting the world as simpler than it is known to be – is thereby idealized.3

Further distinctions within the practice of modeling will be discussed in what follows. It will be helpful, however, to note a difference between two types of models: some are theory based and some aren’t. In cases of the former sort, a general, often fundamental theory exists, and modeling comes in when the general theory is applied to a particular context. Such is the case, for instance, in statistical physics and quantum theory. In these areas there are well-understood fundamental laws, but applying these laws to particular phenomena requires various idealizing assumptions (Cartwright 1983). This is sometimes referred to by saying that models serve as ‘mediators’ between theory and the world (Morgan and Morrison 1999). I will refer to such cases as mediating models. In other contexts, however, there is no general theory, so that modeling of the relevant phenomena does not function in application or mediation but as the ‘highest’ theoretical understanding available. Such is the case, for instance, in studies of animal behavior within the common framework of evolutionary game theory. Although theoretical work is organized around a family of related models and associated analytic techniques, these models are not derived from a more basic set of laws or axioms. When the contrast with mediating models matters, I will refer to models in this second category as standalone models.
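The idealization/abstraction contrast just drawn can be made concrete with a minimal computational sketch (my own illustration, not the chapter’s; the scenario and names are hypothetical): a falling-body calculation that is idealized in one version – it knowingly distorts by ignoring air resistance altogether – and merely abstract in another, which omits detail by settling for a range of values for local gravity rather than a particular number.

    import math

    def fall_time_idealized(height_m):
        # Idealization: a deliberate distortion -- we assume away air
        # resistance entirely, knowing that real media exert drag.
        g = 9.81
        return math.sqrt(2 * height_m / g)

    def fall_time_abstract(height_m, g_range=(9.78, 9.83)):
        # Abstraction: mere omission of detail -- we settle for a range
        # of local gravity values rather than a particular number.
        # Nothing is misrepresented; the description is just incomplete.
        lo, hi = g_range
        return (math.sqrt(2 * height_m / hi), math.sqrt(2 * height_m / lo))

    print(fall_time_idealized(10.0))  # one knowingly distorted value
    print(fall_time_abstract(10.0))   # an undistorted interval

On the chapter’s definitions, only the first function idealizes – it depicts the world as simpler than it is known to be – whereas the second merely leaves detail out.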

3 Do modelers aim at truth?
Recall the first central element of scientific realism described already: the claim that science aims at truth. In this section we will inquire whether the practice of modeling casts doubt on this claim.

Now, to say that the aim of science is truth or accuracy is crude, at best. The idea that a multifaceted social institution like science has an aim and that, if so, it can be summarized succinctly, may appear puzzling or unrealistic. Moreover, it is not obvious that truth – rather than some more refined notion of truthlikeness (Oddie 2014; see also G. Schurz, “Truthlikeness and approximate truth,” ch. 11 of this volume) – is the right choice here. Nor is it obvious which truths science seeks. Is it only truths that matter (matter for what?) (Kitcher 2001; Lyons 2005)? However, these finer points recede into the background when one takes note of the blatant and radical nature of many idealizations found in science. Recall the examples we began with: point particles, infinite populations, rational agents. However one understands truth or some suitable variant thereof, it is clear that such assumptions will not count as true or approximately true or near the truth in any reasonable sense of ‘near’. These assumptions are wide of the mark. And however exactly one understands the idea that science aims at truth, it should be at the very least puzzling how assumptions of this sort are consistent with such an aim. To be sure, there are good reasons for studying models of point particles and rational agents. They are cognitively accessible and computationally tractable and often facilitate simple and compelling explanations. But, prima facie, fidelity to the real world does not seem to be among these reasons. To the contrary, it may seem that such models are best seen as an expression of instrumentalism: a modeler will make whatever assumptions she finds productive for saving the phenomena, irrespective of their truth or accuracy.

How should a committed realist respond to this situation? One option is to deny that modeling is the right arena in which to debate realism in the first place. Instead, the realist might say, the debate should center on laws of nature and, more generally, on the fundamental theories from which models are derived. Models, in this view, are merely tools for applying theories to the world, and as such their truth is not a matter of concern for the realist. But this response is hardly adequate. For one thing, as noted, in many areas we find standalone models – models that are not derived from fundamental theory. For another thing, there are substantial doubts over whether laws of nature themselves are to be read as literal statements of fact. Moreover, it has been argued that laws depend on models for their predictive and explanatory power (Cartwright 1983; Giere 1999; Lange 1993). So the appeal to laws may turn out to be a detour, eventually leading us back to a consideration of models.

Instead of appealing to laws, the realist may try to explain how idealized models, despite their apparent falsity, can still be seen as subservient to the goal of truth: appearances to the contrary, modeling is part and parcel of the attempt to accurately depict nature. Here, two strategies can be discerned. The first views modeling as involving literal yet indirect representation of targets in the world. The second, in contrast, views models as being directly about the world. Let us look at these approaches in turn.

3.1 Modeling as indirect representation
The indirect approach, as the name suggests, treats models as relating to the world indirectly – via a simpler (idealized) model system. It is perhaps easiest to understand this idea by thinking first of concrete physical models, such as Watson and Crick’s famous scale model of DNA. Watson and Crick constructed a large DNA-like structure made of stiff wire and metal sheets. They took this to be a representation of DNA: the wire stood for the molecule’s backbone, the sheets for the bases bound to it. They tinkered and manipulated, changing the shape and number of wires, the placement of the metal sheets and so forth, regarding these manipulations as informative about features of real DNA. The indirect approach takes something like this investigative structure as characteristic of modeling in general, including mathematical and mechanistic models. As the indirect approach has it, the modeler constructs an object (the ‘model system’), one that is simplified and easier to handle relative to the phenomenon under study (the ‘target system’). She then uses this object as a representation of the target system. It functions as a surrogate for the target. In a case like Watson and Crick’s model, this involves physical construction and actual causal manipulation. In mathematical or other non-material models, the construction phase consists of writing down equations and a text and/or drawing a figure. These serve to pick out or specify (Giere 1988) a non-material object, which serves as a stand-in for the target. Once it is specified, the modeler analyzes the model system, attempting to learn about its properties and behavior. Then she compares the model to the target and, depending on how the comparison pans out, reaches conclusions about the target system.

Recall that the puzzle we are dealing with stems from the apparent tension between the idea that science aims at truth and the seeming falsity of idealized models. How can a population biologist be aiming at truth if she portrays the population she studies as infinitely large? The indirect approach resolves this puzzle by treating the population biologist’s claims about infinite populations as being about the model system and only indirectly, via resemblance or mapping, about the world. If the modeler decides to specify an infinitely large population she is not making a false claim about the world. Indeed she is not making a claim at all. Rather, she is picking out a system she wishes to study. This system can then be compared to a real-world system, potentially yielding true (or near-true) claims about the relationship between the model and the world. Thus, the appearance of falsity is removed and a distinct locus of truth is identified: models are things, to be compared with the world; statements about the model-world relationship are where truth or accuracy evaluations come in.

However, the indirect approach raises puzzles and problems. First, there is the issue of what model systems are. Some model systems, like Watson and Crick’s model, are actual concrete objects. But most aren’t: mathematical, computational and mechanistic models are “missing systems”, as Thomson-Jones (2010) puts it – they cannot be seen, heard, or otherwise causally interacted with. What are they, then? I will briefly discuss three recent proposals.

One natural suggestion is that model systems are abstract objects – mathematical objects in particular. Suggestions along these lines have been made by several authors (e.g., Giere 1988; Teller 2001). This account makes sense of why model systems cannot be heard, seen or interacted with, as well as the fact that mathematics is often central to model specification. But several questions arise, too. First, there are general concerns about how knowledge of mathematical abstracta is possible (Benacerraf 1973; Field 1989). Second, there are concerns about whether abstract models can be suitably related to concrete targets – specifically whether a similarity-based account, preferred by several advocates of the indirect approach, is coherent (Giere 1988; Weisberg 2013; Thomson-Jones 2010). Third, many models do not have a mathematical character, and treating them as abstracta seems tenuous at best (Thomson-Jones 2012; Levy 2015).4

In response, some have suggested that models are best viewed as concrete hypothetical objects. In particular, several authors argue that models are akin to fictions like Sherlock Holmes’s London and Tolkien’s Middle Earth (Godfrey-Smith 2006; Frigg 2006). Advocates argue that such a fictionalist view of model systems, besides accommodating non-mathematical models, fits well with the practice of modeling and with the often-prominent role of the imagination in modeling. Such an approach, however, brings with it metaphysical anxieties not dissimilar to those connected with the models-as-abstracta view. What are fictional objects? How do they fit into the natural world?
How can we know about them (see Kroon and Voltolini 2013)? Additional issues pertain specifically to the models-as-fictions approach. For one thing, it is not clear what the identity conditions are for fictional systems, and so it may be unclear whether two scientists are engaging with the same model (Weisberg 2013: §4.4.1; see also Friend forthcoming). For another thing, some models may seem hard to picture or otherwise imagine (Weisberg 2013: §4.4.2), and it is unclear how they fit into a fictionalist view.

Thus, both versions of the indirect approach – the abstracta view and the concrete-hypotheticals view – have attractions but also problems. They allow one to make sense of the realist aim underlying modeling, but they generate ontological commitments and semantic puzzles. Perhaps an intermediate view, combining elements of both accounts, could overcome these issues (Contessa 2010; Thomasson forthcoming). Or perhaps the indirect approach ought to give way to a direct conception of modeling – to which I now turn.

3.2 Modeling as direct representation
While the indirect approach treats modeling as the specification and study of model systems, the direct approach ‘skips over’ the model system and treats models as being directly about the world, sans intermediaries. Let us look at two ways of developing this idea.

Sorensen (2012) suggests that modeling may be understood as a form of suppositional reasoning, akin to conditional proofs and reductio ad absurdum. Sorensen explicitly motivates his view as a solution to the problem of how idealizations are compatible with the realist aim of truth. He suggests that the suppositional approach relies on well-understood concepts from classical logic and that it accommodates the creativity of modeling while explaining its epistemic utility. These are certainly advantages. But it is hard to see how, on Sorensen’s view, one learns from modeling about the actual natural world. That is, it is readily apparent how one learns hypothetical facts – what the world would be like under such-and-such a supposition. And it is fairly clear how one can learn negative modal facts – such-and-such cannot be the case. But most model-based knowledge pertains to contingent, actual facts, and Sorensen says very little about how such knowledge is generated on the suppositional approach.

Another way of developing a direct approach is by tying it to a particular, pretense-based view of fiction due to Walton (1990). Walton’s view takes its cue from pretend play and games of make-believe. It treats fictional statements not as descriptions (of a fictional entity) but as prescriptions to the imagination. Applied to modeling, this leads to a view on which models are games of make-believe about real-world targets: a pretense according to which systems in the world are different – simpler, more tractable – than they actually are. Toon (2012) argues for such a view, following closely in Walton’s footsteps. Toon’s account is mainly aimed at understanding the content of models and the status of the associated speech-acts and flights of the imagination. Levy (2015) also relies on Walton’s theory to develop a direct account, placing more emphasis on the model–world relationship. He suggests that it may be understood via Yablo’s (2014) notion of partial truth. Yablo’s framework, the specifics of which will not be discussed here, focuses on the notion of a statement’s subject matter5 and allows us to make sense of the idea that a statement can be true specifically with regard to a given subject matter, even if it is false in other respects. Thus, suppose we have a model of a predator–prey system that assumes an infinite population. Yablo’s framework would allow us to say that while idealized, and hence partly false, such a model is true as regards the subject matter of how predator abundance affects prey abundance.6

The fiction-based version of the direct account accommodates salient aspects of the practice, such as its reliance on imagination, without incurring heavy metaphysical costs. However, some argue that it is subject to worries similar to those afflicting indirect fictionalist accounts. There are also questions about whether a direct fictionalist account – or perhaps any fictionalist account – can accommodate models that appear to have no target (Weisberg 2013: Ch. 4; Friend forthcoming; Thomasson forthcoming).
My own sympathies lie with a direct approach to modeling, as it seems to accommodate the practice fairly well, while being ontologically modest and semantically straightforward. However, the key message of this section is less committal: there are ways of reconciling the ubiquity of idealization and the practice of modeling more generally with a realist understanding of the aims of science. Such reconciliations face some questions and difficulties, but on the whole they seem to permit a view of model-based science that does not slide into instrumentalism.


4 Does modeling challenge the realist’s achievements claim?
We have so far dealt with the question of whether modeling undercuts the idea that science aims at truth, reflecting a kind of instrumentalism. But the ubiquity of modeling might lead one to suspect that even if science aims at truth, it often fails to fulfil this aim. The challenge here is somewhat subtle, and it is best seen as threatening the No Miracles Argument (NMA) – perhaps the best-known and most important argument for scientific realism (see K. Brad Wray, “Success of science as a motivation for realism,” ch. 3 of this volume). In a nutshell, the NMA states that the immense predictive success of science would be inexplicable, a ‘miracle’, if the theories from which predictions were derived were not true. But many models (1) serve in prediction and explanation and (2) contain (known) falsehoods. So how can the idea of predictive and explanatory success as a mark of truth be maintained? And what becomes of the NMA without this idea?

It is important to see how this challenge differs from traditional objections to realism and to the NMA in particular. For one thing, it is distinct from standard empiricist concerns, according to which science cannot afford us knowledge of unobservables. Idealization has no special connection to unobservables. The challenge from modeling also differs from (although it shares some features with) history-based concerns about realism such as the pessimistic induction (Laudan 1981; Stanford 2006). Roughly speaking, these are challenges that appeal to past scientific failures, arguing that our current theoretical beliefs are no more justified than those of our (refuted) scientific ancestors. So while history-based challenges emphasize a legacy of scientific error, the challenge from modeling pertains to abstraction and idealization, i.e. deliberate omissions and misrepresentations. The challenge is whether we can form true beliefs on the basis of models, consistent with our knowledge that they are incomplete and idealized.

I will describe three responses to this challenge. The first focuses on ways of de-idealizing, that is relaxing idealizations, thereby obtaining more realistic models. The second concerns strategies that may be put into use when de-idealization is impossible – especially so-called ‘robustness analysis’. The third and final response consists of a move toward perspectivism, an attenuated form of realism.

4.1 Correctable idealizations
In some cases an idealized model can be de-idealized: its false assumptions can be corrected for. Consider the ideal pendulum. It is a model according to which a point-mass bob is suspended from a massless rod, oscillating in two dimensions only and experiencing no air resistance. This highly unrealistic model can be de-idealized, introducing corrections and refinements so that it does take account of the bob’s mass distribution, the effects of the surrounding medium and so on. When this is done, the model’s predictions with respect to the motion of actual pendula become progressively better. Let us call this the Galilean strategy – correcting for the idealizations yields better predictions (McMullin 1985; Weisberg 2007b).

When the Galilean strategy works, it supplies a ready and direct answer to the challenge from modeling. For while the idealized model may be simple and computationally tractable, it is also dispensable in favor of more accurate models. Even if this can only be done progressively, over time, the realist can still maintain that idealized models are merely temporary, pragmatic devices, to be replaced if necessary and/or in due course. They need not pose a fundamental worry to the realist. Moreover, if as one persists with the Galilean strategy one also obtains better and better predictions, then it would appear that rather than undermining the connection between predictive success and truth, the practice of modeling can buttress it (McMullin 1985; Laymon 1989). Thus, if successfully pursued, the Galilean strategy may turn the tables on the challenge from modeling, using it as an argument for realism.

But this response is not problem free. One concern is that the practice of science does not conform to the Galilean strategy. Oftentimes, scientists do not de-idealize; instead they move on to other idealizations (Hartmann 1999). A more serious problem is that de-idealization may often be impossible: predictive and explanatory success may depend on the presence of idealizations. Robert Batterman, for instance, has argued that this kind of situation obtains in explanations of universal phenomena such as phase transitions. The phenomenon can be deduced only by examining limiting behaviors, and any attempt to relax the idealization will not yield valuable predictions or understanding (Batterman 2002, 2005). Similar claims have been made with respect to other areas and phenomena, such as hadron physics (Hartmann 1999), quantum “dots” (Bokulich 2008), models of oscillatory behavior (Wayne 2011) and climate science (Parker 2006). The next subsection looks at how the realist might handle such cases of non-correctable idealization.
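Before turning to those harder cases, the pendulum example can be made concrete with a standard textbook illustration of the Galilean strategy (my gloss, not Levy’s). Under the full set of idealizations – point mass, massless rod, no drag, small swing angles – the period is

$T_0 = 2\pi\sqrt{L/g}$,

and relaxing just the small-angle idealization yields successive corrections in the release angle $\theta_0$:

$T = 2\pi\sqrt{L/g}\left(1 + \tfrac{1}{16}\theta_0^2 + \tfrac{11}{3072}\theta_0^4 + \cdots\right)$.

Each added term makes the model’s predictions for real pendula progressively better, which is just the pattern the Galilean strategy describes.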

4.2 Non-correctable idealization The existence of non-correctible idealizations in science might seem to pose a grave difficulty for the realist, insofar as she appeals to the NMA. If the best available theoretical representations contain assumptions that are known to misrepresent the world and, moreover, there is good rea- son to believe that these misrepresentations cannot be corrected for, then how can one hold that predictive and explanatory success is an indication of underlying truth? In fact, the cases of predictive and explanatory success may pose different challenges (Batter- man’s arguments, alluded to earlier, place most emphasis on explanatory uses of uncorrectable models). But space limitations do not allow us to cover both cases, so I will primarily look at predictive uses of such models and the challenges they pose for the NMA. A shorter discussion of explanation-related issues is contained in the next section, apropos inference to the best expla- nation (IBE). A number of authors have discussed suggestions that are relevant to this concern. Their details vary, but they all contain a common underlying idea: to the extent that one can show that the predictive success of a model is independent of the particular idealizations it contains, one can have faith in the model’s capturing a genuine truth about reality. Note that the suggestion isn’t that one demonstrate that the model can be freed from idealization. That would be a case of correctable idealization. Rather, the idea is that for any particular idealizing assumption, it can be shown that the model’s success does not depend on it.7 There are at least two ways of fleshing out this suggestion. The first is via a direct argument – or, if possible, a proof – that the idealizations in question are not relevant. Michael Strevens (2008: ch. 8) discusses this possibility in connection with his view of scientific explanation.8 He takes as a case study the explanation of Boyle’s Law by the ideal gas model. Boyle’s Law states that for a dilute gas under fixed temperature, pressure is inversely proportional to volume. The model commonly used to explain this law, the so-called ideal gas model, makes substantial idealizations, including an assumption that gas molecules do not collide with each other. Strevens quotes the following lines from a physical chemistry textbook: “We . . . assumed that the molecules of the gas do not collide with each other . . . But if the gas is in equilibrium, on the average, any collision that deflects the path of a molecule from [the path assumed in the derivation] will be balanced by a collision that replaces the molecule” (McQuarrie and Simon 1997: 1015). He goes on to comment, “The no-collision assumption is justified, then, by an argument, or rather an assertion, that collisions or not, the demonstration of Boyle’s law goes through” (2008: 316). I will not worry too much about the details of this case, as it is the general idea that matters here:

if a model’s result can be shown not to depend on specific idealizing assumptions, then one can regard the model as capturing genuine features of its target-in-the-world, as the realist hopes.
A related way of carrying out this realist strategy employs what is often labeled robustness analysis. In this type of analysis, the model’s idealizations are varied, and the modeler then checks whether its results are retained over a range of different idealizations. This way one can show that the idealizations are, in a sense, irrelevant to the model’s success. William Wimsatt has commented,

[A]ll the variants and uses of robustness have a common theme in the distinguishing of the real from the illusory; the reliable from the unreliable; the objective from the subjective; the object of focus from artifacts of perspective; and, in general, that which is regarded as ontologically and epistemologically trustworthy and valuable from that which is unreliable, ungeneralizable, worthless, and fleeting. (1981: 128)

Thus, the point of varying the idealizing assumptions is to distill the trustworthy core of the model, separating the assumptions upon which the model’s success depends.9 On the supposition that these core assumptions are true, the result is a justification for believing that the model is ‘getting at’ something true, the idealizations notwithstanding.
Let me note that the response from robustness is, as stated here, only a sketch. The realist must still provide an indication of what counts as the invariant core of a model – how are core elements to be identified? What distinguishes them from other elements? Can these discriminations be made in a forward-looking and reliable way? Absent quite detailed responses here, it is unclear whether the robustness response is adequate.
Now, in some ways the first strategy – direct proof of the irrelevance of the idealizations – is better than robustness analysis. For one thing, robustness analysis probes only a certain range of variation in the idealizing assumptions, since it is virtually never possible to cover all possible idealizations. One’s confidence that the model’s success is a function of the core alone is therefore limited by the extent to which one can model the space of possible alternatives to one’s idealized assumptions.10 Another benefit of the direct proof strategy is that it typically helps one see why the idealizations do not matter. In the ideal gas example discussed by Strevens, for instance, the justification of the no-collision assumption he cites not only shows that the assumption is not a difference maker for Boyle’s Law. It also shows why this is so: because of the statistical profile of the trajectories of the gas molecules, any collision will be balanced by a counter-collision, nullifying their overall effect. However, it is often difficult to provide a genuine proof that idealizations are irrelevant. So one may still need to rely on robustness analysis.
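To convey the shape of such an analysis, here is a minimal sketch in Python, using Strevens’s ideal gas case as the target. The idealization of non-interacting molecules is varied by adding van der Waals-style interaction corrections; the parameter values and the choice of that particular family of variants are illustrative assumptions, not drawn from Strevens or the robustness literature. The analysis checks whether the result of interest – Boyle’s Law, the product of pressure and volume remaining constant at fixed temperature – is retained across the variants.

```python
# A minimal sketch of robustness analysis, assuming van der Waals-style
# corrections as the varied idealizations. The ideal gas model idealizes
# away intermolecular interactions (a = b = 0 below); the analysis varies
# that idealization and checks whether the result of interest -- Boyle's
# Law, P*V constant at fixed temperature -- is retained.

R = 8.314  # gas constant, J/(mol K)

def pressure(V, T=300.0, n=1.0, a=0.0, b=0.0):
    """Van der Waals pressure; a = b = 0 recovers the ideal gas model."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

# A family of model variants: the ideal gas plus increasingly strong
# (but still dilute-regime) interaction corrections; values illustrative.
variants = [(0.0, 0.0), (0.1, 1e-5), (0.3, 3e-5), (0.5, 5e-5)]

for a, b in variants:
    # If Boyle's Law holds, P*V is constant across volumes.
    products = [pressure(V, a=a, b=b) * V for V in (0.5, 1.0, 2.0, 5.0)]
    spread = (max(products) - min(products)) / max(products)
    print(f"a={a:.1f}, b={b:.0e}: PV varies by {spread:.2%} across volumes")
```

If the spread in PV stays small across the variants, the no-interaction idealization is, in the relevant sense, not a difference maker for the result; as note 10 stresses, however, the verdict covers only the variants actually probed.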

4.3 Perspectivism

A third and final response to the ubiquity of modeling consists in a more dramatic shift – toward a different view within the realism debate, namely perspectivism. This view may be seen as an attenuated form of realism or as a middle ground between full-blown realism and a kind of relativism. In a nutshell, the perspectivist suggests that scientific knowledge is inextricably bound to a (limited) perspective, that is, it depends, essentially, on the state of prior knowledge, the range of information-gathering tools and the analytical techniques of scientists.11 Perspectivism can be seen as a kind of constructivism, with illustrious predecessors such as Kant and Kuhn. But the viewpoint I want to discuss here takes a more practice-oriented approach, arguing that concrete features of scientific practice, especially those associated with modeling, give a special impetus and flavor to their view.


Central advocates of perspectivism in recent philosophy of science include Paul Teller and Ronald Giere. The latter, in particular, has given the view a book-length treatment (2006) and has defended it from subsequent criticism (2009). Morrison (2011) and van Fraassen (2008) endorse parts of the perspectivist outlook, although I will not attempt to describe each author’s specific commitments (see also M. Massimi, “Perspectivism”, ch. 13 of this volume). Giere defines perspectival knowledge as knowledge the scope and character of which is a product of the knower’s epistemic wherewithal: her other beliefs and interests, her knowledge-gathering capacities, her tools and her methods of analysis and inference. Giere points out that the choices scientists make regarding how to idealize, what to abstract from and other issues associated with modeling stem from the kinds of phenomena they care about, the sort of knowledge they already have and, perhaps most crucially, the tools and methods they possess. In other words, models embody a perspective; they reflect modelers’ knowledge and concepts, tools and pragmatic interests. So they lead to perspectival knowledge. Moreover, as Giere notes, and as others like Morrison (2011) discuss at greater length, oftentimes in science a phenomenon is addressed via multiple, mutually incompatible models. Such incompatible models pose an especially serious challenge to the realist, from the perspectivist’s point of view, since they seem to preclude a ‘piecing together’ strategy, in which the results of different models are combined to provide an overall, non–perspective-bound picture of the phenomenon.
However, several writers have provided plausible responses to the model-based perspectivist challenge (Chakravartty 2010 and Rueger 2005 respond to perspectivism directly; Pincock 2011 addresses similar issues under a different heading). These responses agree that observations and models provide perspectival information. But they argue that this information is perfectly objective information, hence ‘kosher’ from a realist point of view. As Rueger puts it, “These models describe the system relationally: from this perspective, the system looks as if it has intrinsic property x, from that perspective it looks like it has property y” (op. cit.: 580). Chakravartty (2010) develops the idea further by arguing that models can be read as informing us about dispositional properties of target systems, where these dispositions are revealed by the relational properties we detect in experiments and observations. The dispositions themselves are intrinsic and perspective independent, argues Chakravartty, but they lead to different manifest behaviors depending on the conditions: “salt sometimes dissolves in water, and other times does not, depending on the circumstances. The ability to dissolve is a property of salt that is manifested in some circumstances and not in others” (Chakravartty 2010: 410).
Is this response adequate? Two issues might be raised, although their significance is not fully clear. First, it seems likely that we have knowledge of non-dispositional properties – such as the number of protons in an iron atom’s nucleus or the three-dimensional structure of DNA. If knowledge of these properties is also contained in idealized and abstract models, might the perspectivist have it right about such properties? This would seem like a substantial concession on the part of the realist, to the extent that such non-dispositional properties are common and important.
But perhaps we can gain model-based knowledge of such non-dispositional facts; responders to perspectivism have not yet shown how, but they may yet do so. A second, related worry is articulated by Massimi (2012): the realist can assert that science affords access to non-perspectival facts, but how can the realist justify this assertion? According to Massimi, any justification of a concrete scientific claim is embedded in an epistemic network, consisting of information about observation devices, measurement techniques and background theoretical assumptions. This information, in turn, is perspectival. And so any claim to non-perspectival knowledge must ‘bootstrap’ its way out of existing perspectives, a task Massimi views as incoherent (see also M. Massimi, “Perspectivism,” ch. 13 of this volume).12


These issues remain under discussion. Although it is related to older philosophical views, in its modern, model-related guise, perspectivism is a relative newcomer in philosophy of science. It remains to be seen whether it can be articulated or refuted in a fully compelling fashion.

5 Troubles for IBE?

In closing, I want to explicitly address a methodological issue that has been largely in the background so far: the status of inference to the best explanation, or IBE for short (see also J. Saatsi, “Realism and the limits of explanatory reasoning”, ch. 16 of this volume, for further discussion). Roughly speaking, IBE is a non-deductive inference rule that instructs us to believe the explanans of our best explanations. Versions of IBE vary, as do opinions about its epistemic credentials – Douven (2011) provides an up-to-date survey – but more than a few philosophers of science and epistemologists take IBE to be an important, even indispensable epistemic method (Lipton 2004; Weintraub 2013). This issue has general and widespread significance in philosophy of science, but it poses special concerns for those philosophers – and they appear to be in the majority among realists – who put stock in the No Miracles Argument (NMA). For, as noted earlier, the NMA is typically seen as an instance of IBE, that is, as the argument that successful scientific theories are likely to be true, since that is the best explanation of their success. However, one might worry that the ubiquity of models, especially idealized models, casts doubt on the very method of IBE. If many of our explanations make false assumptions about the world, then it would seem that we cannot use our best explanations as indicators of the truth.
But it seems that discarding IBE, at least on account of the apparent tension with modeling, would be somewhat hasty. Strictly speaking, IBE has an ‘if true’ proviso: the IBE inference rule states that we ought to believe in the proposition that, if true, would best explain the data. This proviso is sometimes stated explicitly (e.g., Douven 2011: §1), but not always. Clearly, the ‘if true’ proviso entails, at the very least, that the candidate explanations that enter into an IBE should be ones that we do not know to be false, or else they would not even be admissible as candidate explanations. But this is patently not the situation with idealizing explanations. In such cases, we knowingly introduce falsehoods into our explanation. Thus, the existence of idealized explanatory models does not invalidate IBE; such explanations ought not to serve as inputs to IBEs to begin with.
That said, awareness of the prevalence and special features of modeling should give us pause and engender caution when relying on IBE. The discussion of non-correctable idealizations attests to the difficulty of extracting the kernel of truth underlying an idealized model. This serves to remind us that one cannot simply read off, from our most predictively successful theories and models, truths about what the world is like. Often enough such inferences are possible, but they are far from straightforward.

Notes
1 Not everyone accepts both claims, and authors differ regarding their formulation. I am neither assuming that both elements are essential to realism nor assuming some very specific understanding of the content of the two claims or the relations between them. These matters do not affect the discussion that follows, as far as I can tell.
2 In contrast, the so-called semantic view of theories (Suppes 1960; Suppe 1989; van Fraassen 1980; da Costa and French 2003) appeals to the logician’s notion of model and uses it to give a general account of the nature of theories. Issues relating to the semantic approach will be set aside in this chapter.
3 For more on this distinction, see Thomson-Jones (2005), Godfrey-Smith (2009), and Levy (forthcoming).
4 See Weisberg (2013) for responses to some of these problems.


5 Yablo provides a precise technical gloss to the notion of a subject matter. But for present purposes an everyday informal understanding suffices.
6 Yablo (forthcoming) develops some aspects of this view with an eye, specifically, to modeling.
7 In this respect, the response to the non-correctable idealization challenge is similar to responses realists have sought to offer to the pessimistic induction (Saatsi 2016).
8 Strevens is not directly concerned with realism but with how idealized models manage to explain. But since he regards truth as a necessary condition on explanation, the problems are closely related.
9 There is a debate over whether robustness analysis can serve as a surrogate for, or perhaps even a type of, empirical confirmation (Orzack and Sober 1993; Levins 1966, 1993). For our purposes, what matters is the manner in which robustness analysis allows one to distinguish ‘the real from the illusory’, regardless of whether this is taken to be a form of confirmation.
10 Odenbaugh (2011) raises a closely related worry, arguing that only if one can replace the idealizations in the model by true assumptions (and provided the model’s predictive success is not undermined by this) can one be sure that the ‘core’ assumptions in the model do not depend on the idealizations. An assumption that seems to underlie Odenbaugh’s argument is that there is no in-principle limit to the idealizations one can put forward as a replacement. This assumption seems correct, at least in a sizeable range of cases. With it, Odenbaugh’s argument is sound. However, it does not follow that by varying the idealized assumptions enough, so as to cover a wide range of possibilities, one cannot have quite high confidence that it is the core assumptions that matter for the model’s success.
11 Note that this is an epistemic, not a metaphysical, claim. Some perspectivists also hold that reality itself is, in some sense, perspectival. I doubt that this idea is coherent, but I will not discuss the matter here.
12 This is a version of the bootstrapping criticism levelled at epistemic reliabilism (Vogel 2008).

References
Batterman, R. (2002) The Devil in the Details: Asymptotic Reasoning in Explanation, Reduction, and Emergence, Oxford: Oxford University Press.
——— (2005) “Critical Phenomena and Breaking Drops: Infinite Idealizations in Physics,” Studies in History and Philosophy of Science Part B 36(2), 225–244.
Benacerraf, P. (1973) “Mathematical Truth,” Journal of Philosophy 70(19), 661–679.
Bokulich, A. (2008) Reexamining the Quantum-Classical Relation: Beyond Reductionism and Pluralism, Cambridge: Cambridge University Press.
Boyd, R. (1983) “On the Current Status of the Issue of Scientific Realism,” Erkenntnis 19, 45–90.
Cartwright, N. (1983) How the Laws of Physics Lie, Oxford: Oxford University Press.
Chakravartty, A. (2007) A Metaphysics for Scientific Realism: Knowing the Unobservable, Cambridge: Cambridge University Press.
——— (2010) “Perspectivism, Inconsistent Models, and Contrastive Explanation,” Studies in History and Philosophy of Science Part A 41(4), 405–412.
Contessa, G. (2010) “Scientific Models and Fictional Objects,” Synthese 172(2), 215–229.
Costa, N. da and French, S. (2003) Science and Partial Truth: A Unitary Approach to Models and Scientific Reasoning, Oxford: Oxford University Press.
Douven, I. (2011) “Abduction,” in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. URL: http://plato.stanford.edu/entries/abduction/
Field, H. (1989) Realism, Mathematics & Modality, London: Blackwell.
Fraassen, B. van (1980) The Scientific Image, Oxford: Clarendon Press.
——— (2008) Scientific Representation: Paradoxes of Perspective, New York: Oxford University Press.
Friend, S. (forthcoming) “The Fictional Character of Scientific Models,” in A. Levy and P. Godfrey-Smith (eds.), The Scientific Imagination: Philosophical and Psychological Perspectives, Oxford: Oxford University Press.
Frigg, R. (2006) “Scientific Representation and the Semantic View of Theories,” Theoria 55, 49–65.
Frigg, R. and Hartmann, S. ([2006] 2013) “Models in Science,” in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. URL: http://plato.stanford.edu/entries/models-science
Giere, R. N. (1988) Explaining Science: A Cognitive Approach, Chicago, IL: University of Chicago Press.
——— (1999) Science without Laws, Chicago, IL: University of Chicago Press.
——— (2006) Scientific Perspectivism, Chicago, IL: University of Chicago Press.
——— (2009) “Scientific Perspectivism: Behind the Stage Door,” Studies in History and Philosophy of Science 40, 221–223.


Godfrey-Smith, P. (2006) “The Strategy of Model-Based Science,” Biology and Philosophy 21, 725–740.
——— (2009) “Abstractions, Idealizations, and Evolutionary Biology,” in M. M. Barberousse and T. Pradeu (eds.), Mapping the Future of Biology: Evolving Concepts and Theories, Boston: Springer, pp. 47–56.
Hartmann, S. (1999) “Models and Stories in Hadron Physics,” in M. Morrison and M. Morgan (eds.), Models as Mediators: Perspectives on Natural and Social Science, Cambridge: Cambridge University Press, pp. 326–346.
Jones, M. (2005) “Idealization and Abstraction: A Framework,” in M. Jones and N. Cartwright (eds.), Idealization XII: Correcting the Model: Idealization and Abstraction in the Sciences, Amsterdam: Rodopi, pp. 173–217.
Kitcher, P. (2001) Science, Truth and Democracy, New York: Oxford University Press.
Kroon, F. and Voltolini, A. (2013) “Fiction,” in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. URL: http://plato.stanford.edu/entries/fiction/
Lange, M. (1993) “Natural Laws and the Problem of Provisos,” Erkenntnis 38(2), 233–248.
Laudan, L. (1981) “A Confutation of Convergent Realism,” Philosophy of Science 48, 19–48.
Laymon, R. (1989) “Cartwright and the Lying Laws of Physics,” Journal of Philosophy 86(7), 353–372.
Levins, R. (1966) “The Strategy of Model Building in Population Biology,” American Scientist 54(4), 421–431.
——— (1993) “A Response to Orzack and Sober: Formal Analysis and the Fluidity of Science,” Quarterly Review of Biology 68(4), 547–555.
Levy, A. (2015) “Modeling without Models,” Philosophical Studies 172(3), 781–798.
——— (forthcoming) “Idealization and Abstraction: Refining the Distinction”.
Lipton, P. (2004) Inference to the Best Explanation, London: Routledge.
Lyons, T. (2005) “Towards a Purely Axiological Scientific Realism,” Erkenntnis 63(2), 167–204.
Massimi, M. (2012) “Scientific Perspectivism and Its Foes,” Philosophica 84, 25–52.
McMullin, E. (1985) “Galilean Idealization,” Studies in History and Philosophy of Science 16, 247–273.
McQuarrie, D. A. and Simon, J. D. (1997) Physical Chemistry: A Molecular Approach, Sausalito, CA: University Science Books.
Morgan, M. and Morrison, M. (eds.) (1999) Models as Mediators: Perspectives on Natural and Social Science, Cambridge: Cambridge University Press.
Morrison, M. (2011) “One Phenomenon, Many Models: Inconsistency and Complementarity,” Studies in History and Philosophy of Science 42, 342–351.
Oddie, G. ([2001] 2014) “Truthlikeness,” in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. URL: www.plato.stanford.edu/entries/truthlikeness/
Odenbaugh, J. (2011) “True Lies: Realism, Robustness and Models,” Philosophy of Science 78(5), 1177–1188.
Orzack, S. H. and Sober, E. (1993) “A Critical Assessment of Levins’s The Strategy of Model Building in Population Biology,” Quarterly Review of Biology 68(4), 533–546.
Parker, W. (2006) “Understanding Pluralism in Climate Modeling,” Foundations of Science 11(4), 349–368.
Pincock, C. (2011) “Modeling Reality,” Synthese 180(1), 19–32.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
Rueger, A. (2005) “Perspectival Models and Theory Unification,” British Journal for the Philosophy of Science 56, 579–594.
Saatsi, J. (2016) “Models, Idealisations, and Realism,” in E. Ippoliti, F. Sterpetti and T. Nickles (eds.), Models and Inferences in Science, New York: Springer, pp. 173–189.
Sorensen, R. (2012) “Veridical Idealizations,” in M. Frappier, L. Meynell, and J. R. Brown (eds.), Thought Experiments in Philosophy, Science, and the Arts, London: Routledge.
Stanford, K. P. (2006) Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, Oxford: Oxford University Press.
Strevens, M. (2008) Depth: An Account of Scientific Explanation, Cambridge, MA: Harvard University Press.
Suppe, F. (1989) The Semantic View of Theories and Scientific Realism, Urbana and Chicago: University of Illinois Press.
Suppes, P. (1960) “A Comparison of the Meaning and Uses of Models in Mathematics and the Empirical Sciences,” Synthese 12, 287–301.
Teller, P. (2001) “Twilight of the Perfect Model,” Erkenntnis 55, 393–415.
Thomasson, A. (forthcoming) “If Models Were Fictions, What Would They Be?” in A. Levy and P. Godfrey-Smith (eds.), The Scientific Imagination: Philosophical and Psychological Perspectives, Oxford: Oxford University Press.
Thomson-Jones, M. (2010) “Missing Systems and the Face Value Practice,” Synthese 172(2), 283–299.
——— (2012) “Modeling without Mathematics,” Philosophy of Science 79(5), 761–772.


Toon, A. (2012) Models as Make-Believe, London: Palgrave-Macmillan.
Vogel, J. (2008) “Epistemic Bootstrapping,” Journal of Philosophy 105(9), 518–539.
Walton, K. (1990) Mimesis as Make-Believe, Cambridge, MA: Harvard University Press.
Wayne, A. (2011) “Expanding the Scope of Explanatory Idealization,” Philosophy of Science 78(5), 830–841.
Weintraub, R. (2013) “Induction and Inference to the Best Explanation,” Philosophical Studies 166(1), 203–216.
Weisberg, M. (2007a) “Who Is a Modeler?” British Journal for the Philosophy of Science 58(2), 207–233.
——— (2007b) “Three Kinds of Idealization,” Journal of Philosophy 104(12), 639–659.
——— (2013) Simulation and Similarity, New York: Oxford University Press.
Wimsatt, W. C. (1981) “Robustness, Reliability, and Overdetermination,” in M. Brewer and B. Collins (eds.), Scientific Inquiry and the Social Sciences, San Francisco: Jossey-Bass, pp. 124–163.
Yablo, S. (2014) Aboutness, Princeton, NJ: Princeton University Press.
——— (forthcoming) “Models and Reality,” in A. Levy and P. Godfrey-Smith (eds.), The Scientific Imagination: Philosophical and Psychological Perspectives, Oxford: Oxford University Press.

20
SUCCESS AND SCIENTIFIC REALISM
Considerations from the philosophy of simulation

Eric Winsberg and Ali Mirza

1 Introduction

It is perhaps not immediately obvious why a volume on scientific realism should contain an entry on computer simulation. Not all computer simulations are even directed at the natural world, and few if any aim to offer table-thumpingly true descriptions of it. They are usually tools of opportunity, not instruments of epistemic rigor. This is no exciting news for the realist or the anti-realist. When simulations are directed at the natural world, especially the physical world, they usually depend on physical theory to begin with. Computer simulations of the earth’s climate, of astrophysical phenomena, of solid-state systems and the like depend on the epistemic credentials of the basic equations that drive them. Those in turn come from theories: theories of fluids, of intermolecular forces, or even of the dynamics internal to molecules – quantum mechanics. And so the epistemic attitude we have towards simulations will for the most part bottom out in the antecedent attitudes we have towards those very theories. Realism, or not, about theories comes first. Finally, it seems unlikely we can straightforwardly settle what attitude to have towards our best theories by looking at how they are deployed in some of their least principled applications.
Not surprisingly, then, this is not a chapter which aims to proselytize for realism or anti-realism. Rather, what we aim to investigate is whether considerations arising from a close look at the practice of computer simulation in the physical sciences have lessons to offer us about the opportunistic ways in which those sciences sometimes proceed – lessons that we can take back to those much more basic debates about scientific realism itself. We think that they can. Or at least, they can to a limited degree. That is, we aim to argue that such considerations can shed light on the strength of a particular line of argument that is often mustered in favor of scientific realism: the so-called “no miracles” argument. We argue, in particular, that the success of certain techniques in computer simulation – those that employ what we sometimes call “fictions” – might offer reasons for doubting the strength of the “no miracles” argument for securing scientific realism against the pessimistic meta-induction and kindred pessimistic arguments. They highlight the fact that “success” can be highly purpose dependent. We believe this is a general feature of scientific theories and models – there is no theory or model that is successful for every imaginable purpose. Physicists and other mathematical modelers are opportunistic in the ways in which they generate their successes. In what follows, we present the structure of our argument and defend it from recent criticisms. We go on to explore various

consequences of our argument for the proper epistemic attitude to have regarding various features of our best scientific representations.

2 The no miracles argument and the success-to-truth rule

Proponents of the “no miracles” argument, such as Kitcher (2002), have sought to secure the validity of the argument by making use of the “success-to-truth” rule (see also K. Brad Wray, “Success of science as a motivation for realism,” ch. 3 of this volume). An initial, simplistic formulation that nonetheless captures the intent of the rule can be given as follows:1

If X plays a role in making successful predictions and interventions, then X is true.

As it stands, however, this rule has a number of known counter-examples. Phlogiston theory, caloric theory, and the humoral theory of disease all played a role in making successful predictions and interventions, yet none was true. In response, realists have claimed that such theories fail as counter-examples because they did not allow for sufficiently specific and fine-grained predictions and interventions; they were not a part of mature, sophisticated science. The rule, then, has to be modified such that only those Xs that play a role in sufficiently specific and fine-grained predictions and interventions ought to be considered true.
Another, structural realist response to such historical counter-examples has conceded that there are specific and fine-grained counter-examples (including, for instance, the wave-ether theory of light) and has sought to bifurcate such historical theories into two parts: (A) the part that plays no role in the successful predictions and interventions (the part now considered to have been false) and (B) the part that did play a genuine role in the successful predictions and interventions and is still considered to be true. Structural realists admit that these theories are at least partly false (due to part A) but maintain that there is a core structure (part B) in the theory that explains whatever predictive and interventional success the theory has. To account for this, the “success-to-truth” rule has to be qualified to apply only if X, in its entirety, plays a genuinely central role in the successful predictions and interventions.
Additionally, it is clear that many scientific theories, for example Newtonian mechanics or punctuated equilibrium, omit details and are not perfect representations of their target phenomena but still have a history of successful predictions and interventions. Moreover, these theories can be expected to have projectable, future success, and they are not simply successful due to repeated ad hoc qualifications. Consequently, the success-to-truth rule must be further qualified such that (1) its consequent speaks not of X’s truth tout court but only of “approximate truth” or truth in “some qualified sense”, and (2) the success of X in predictions and interventions be systematic and not merely ad hoc.
Lastly, the rule must also be limited so as to apply only to those Xs which have representational content. As Winsberg (2010) points out, “[n]o one would deny that a calculator, a triple-beam balance, and even a high-energy particle accelerator can all play genuinely central roles in making specific and fine-grained interventions” (p. 126). These entities, however, are not the sort of things that can be true or false or represent accurately or inaccurately and so are not the targets of the “success-to-truth” rule. The rule must only apply to the right sort of X. The considerations made so far lead us, following Kitcher, to a more plausible rule:

If (the right sort of) X (in its entirety) plays a (genuinely central) role in making (systematic) successful (specific and fine-grained) predictions and interventions, then X is true (in some suitably qualified sense).


The commonly cited historical examples of false-but-successful theories and models now fail as counter-examples. If any X is to be a counter-example to the more sophisticated rule, it must meet the following conditions:

i X must play a genuinely central role in making predictions and interventions.
ii The successful predictions and interventions X plays a role in must be (a) specific and fine-grained, (b) systematic, and (c) projectable.
iii X must not be separable into a part that is false and a part that does the relevant success-fuelling work.
iv X must be a relevant sort of representational entity.
v X cannot plausibly be described as true even in some suitably qualified sense (e.g. as “approximately true”).

It is clear that (i–v) cannot be satisfied by any of the usual suspects drawn from the history of science. More promising candidates, meeting all the requisite conditions, can be found in computational fluid dynamics, as well as in some multi-scale modeling in nano-mechanics. Artificial viscosity, vorticity confinement, and silogen atoms are considered in Winsberg (2006a, 2006b). Here, we review two of these examples (artificial viscosity and vorticity confinement) and present two new ones: vortex particles and synthetic thermostats. We present these new examples in part to show how ubiquitous these sorts of methods are but also to highlight that they exist both in continuum modeling (artificial viscosity, vorticity confinement) and particle-based modeling (silogens and synthetic thermostats) and also that they sometimes involve fictitious forces and sometimes fictitious particles.2

3 A nice derangement of examples

Artificial viscosity

Artificial viscosity was initially a product of the Manhattan Project. As John von Neumann’s team used simulations to study the dynamics of shockwaves, it became apparent that shockwaves – which are highly compressed regions of fluid undergoing rapid yet still continuous pressure change (rather than instantaneous pressure change) – could not be modeled as such. The simulations simply are not fine-grained enough to allow direct modeling at the molecular level at which the shock front occurs. If the study of shockwaves was to be made computationally tractable, it would require a more coarse-grained model.
The problem, however, is that a more coarse-grained model results in oscillations around the shockwave that, given the evolution of the shockwave over time, result in an erroneous model of shockwave dynamics and lead to unreliable predictions. If these oscillations could somehow be damped, the reliability of the model would remain intact. To do this, von Neumann’s team inserted a viscosity-like variable that is a function of the square of the divergence in the velocity field (which is of significant value only close to the shockwave) to damp the unwanted oscillations. This variable, called “artificial viscosity,” had a value far too high to correspond to any real-world feature, yet it became indispensable for making a wide range of fluid dynamic computations; the predictive and interventional power found in many fluid dynamic models is parasitic on the use of artificial viscosity. It is unlikely, for instance, that von Neumann’s team could have succeeded in reliably modeling shockwaves had it not been for the employment of artificial viscosity.
Artificial viscosity, then, clearly meets condition (i) of the “success-to-truth” rule, as it plays a genuinely central role in making predictions and interventions. Condition (ii) is also met: (a) the predictions and interventions that artificial viscosity allowed for in the Manhattan Project,

amongst many others, are specific and fine-grained; (b) the success of artificial viscosity is not ad hoc and stems from its use in novel circumstances; and (c) its history of success in specific, systematic, and novel circumstances makes its success projectable.
Moreover, artificial viscosity cannot be separated into real and non-real parts; the whole notion of such a high-valued viscosity-like term is a fiction in its entirety, designed to set simulations aright. Thus, it satisfies condition (iii). Regarding condition (iv), artificial viscosity requires one to assume a structure for the shockwave (or other fluid dynamic phenomena). And, given that such an assumption is the sort of thing that could be true or false, artificial viscosity is the right sort of entity to meet condition (iv). Moreover, the structure assumed is one that the real-world systems simply do not have, meaning that a well-known falsehood has to be assumed in the construction of the model if it is to have predictive and interventional success. Thus, condition (v) is met as well. The upshot is that artificial viscosity speaks against the success-to-truth rule and against realism relying on this rule.
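For concreteness, here is a minimal sketch of the kind of term at issue: a simplified, quadratic von Neumann-Richtmyer-style viscosity for a one-dimensional grid. The coefficient value and the toy velocity data are illustrative assumptions, not taken from any of the simulations discussed.

```python
import numpy as np

# A minimal sketch (illustrative coefficient and toy data, not the original
# scheme) of a quadratic artificial viscosity on a 1D grid: a pressure-like
# term proportional to the square of the velocity jump across a cell,
# switched on only where the flow is compressing. The coefficient c answers
# to no property of any real fluid -- it is the "fiction" at issue here.

def artificial_viscosity(rho, u, c=2.0):
    """Per-cell q term: q = c**2 * rho * du**2 where du < 0, else 0."""
    du = np.diff(u)  # velocity jump across each cell (a discrete divergence)
    return np.where(du < 0.0, c**2 * rho * du**2, 0.0)

# Toy state: converging flow, i.e. a crude shock, in a uniform-density gas.
rho = np.ones(9)                             # cell densities
u = np.where(np.arange(10) < 5, 1.0, -1.0)   # node velocities
print(artificial_viscosity(rho, u))          # nonzero only at the shock cell
```

The q term is simply added to the pressure during each update; its sole job is to damp the spurious oscillations, and its magnitude corresponds to no viscosity any real fluid possesses.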

Vorticity confinement and vortex particles

Artificial viscosity is not the only non-physical “effect” that can be put to use in reliable simulations. Another example from computational fluid dynamics is what is known as vorticity confinement (Steinhoff and Underhill 1994). Vorticity confinement is similar to artificial viscosity in that it is an artificial construct used to overcome a fundamental limitation of discretizing the flow of a fluid. The problem to be overcome in this case arises because fluid flows often contain significant amounts of rotational and turbulent structure that invariably occurs below the grid size of any reasonable computational scheme. And when that structure manifests itself below the grid scales, significant flow features can get damped out in an unrealistic manner. This undesirable effect of the grid size is called “numerical dissipation”, and it often needs to be mitigated. Vorticity confinement is a method that consists in finding the locations where significant vorticity has been damped out and adding it back using an artificial “paddle wheel” force. Much as in the case of artificial viscosity, this is achieved with a function that maps values from the flow field onto values for the artificial force. A naïve reader, coming across this term for the first time, could be forgiven for thinking they had come across something not entirely unlike the Maxwell-Faraday equation. But of course, no such effect actually exists in this case.
Vortex particles are an alternative approach to addressing the same issues, but one that is employed when it is particularly important to create phenomena that have the appearance of real fluid flow (Selle, Rasmussen, and Fedkiw 2005). Thus, vortex particles are employed primarily in applications to computer graphics. We mention them here in part because they are an interesting example of a fictitious particle (and one that lives in a hybrid continuum model), but also because they highlight the fact that “success” can be highly purpose dependent.
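The flavor of the technique can be conveyed with a small sketch: a simplified two-dimensional version of the confinement force, with the coefficient and the toy vorticity field as illustrative assumptions rather than the scheme of Steinhoff and Underhill or of Selle et al.

```python
import numpy as np

# A minimal 2D sketch of a vorticity-confinement ("paddle wheel") force:
# build unit vectors N pointing up the gradient of |vorticity|, then add
# the artificial body force f = eps * (N x omega). The coefficient eps and
# the toy vorticity blob below are illustrative assumptions.

def confinement_force(omega, eps=0.5, tiny=1e-12):
    """omega: 2D scalar vorticity field; returns (fx, fy) force fields."""
    grad_y, grad_x = np.gradient(np.abs(omega))
    mag = np.sqrt(grad_x**2 + grad_y**2) + tiny   # avoid division by zero
    nx, ny = grad_x / mag, grad_y / mag           # N = grad|w| / |grad|w||
    # In 2D the cross product N x (0, 0, omega) reduces to:
    fx = eps * ny * omega
    fy = -eps * nx * omega
    return fx, fy

# Toy vorticity blob; the resulting force circulates around it,
# re-energizing rotational structure that numerical dissipation damps.
y, x = np.mgrid[-1:1:32j, -1:1:32j]
omega = np.exp(-(x**2 + y**2) / 0.1)
fx, fy = confinement_force(omega)
print(float(np.hypot(fx, fy).max()))  # force peaks on the blob's flank
```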

Synthetic thermostats

The use of fictions for the sake of more reliable models is not an idiosyncratic feature of computational fluid dynamics. Synthetic thermostats, also called “artificial thermostats,” are used in the study of macromolecules, defects in crystals, friction between surfaces, and porous media, in addition to fluid dynamics (see Rondoni and Mejía-Monasterio 2007). These fictional thermostats have been employed in modelling for well over two decades. Denis Evans and Mark Gillan used them to account for the lack of a temperature gradient in simulations as early as 1982, while they were used for flow calculations as early as 1980 (Evans 1982; Evans and Morriss 2008). The use of these fields as techniques for thermo-regulating simulations proliferated and was pivotal in the

development of non-equilibrium molecular dynamics simulation, where their use has allowed for the calculation of the exact thermal transport coefficients (see Williams, Searles, and Evans 2004). Transport coefficients describe a rate of diffusion that is the response of a system to some perturbation (for example, the shear viscosity coefficient describes the response of the system to shearing forces). Mechanical transport coefficients can be calculated by applying the relevant mechanical perturbation and using the constitutive relations to determine the response of the system. Note that what makes these transport coefficients “mechanical” is that they describe responses to mechanical fields, such as an electric or magnetic field. Thermal transport coefficients, on the other hand, describe the behaviour of the system that is driven by thermal boundary conditions. Simulating these boundary conditions, however, makes calculation of the thermal transport coefficients complicated, because particles gathering near the walls lead to artifacts of the computation scheme (see Evans and Morriss 2008). The solution, one involving synthetic fields, is to invent a fictional mechanical field to account for the missing thermal dynamics:

We invent a fictitious external field which interacts with the system in such a way as to precisely mimic the linear thermal transport process. [. . .] These methods are called “synthetic” because the invented mechanical perturbation does not exist in nature. It is our invention and its purpose is to produce a precise mechanical analogue of a thermal transport process. (Evans and Morriss 2008: 119)

The fictitious external field is used to create the same sort of transport processes that would be present if the system had the relevant thermal dynamics; with the transport processes adequately mimicked, the thermal transport coefficients can be calculated as the system responds to the artificial perturbation. This synthetic field accounts for the missing thermal dynamics without forcing one to model the more complicated thermal dynamics itself. In fact, Rondoni and Mejía-Monasterio (2007) consider the real-world system’s degrees of freedom “practically impossible” to include in the simulation models. Accordingly, these fictional fields are called synthetic or “artificial” thermostats because they play the requisite thermoregulating role in the absence of a model of real-world thermal processes.
Now, the use of synthetic thermostats has evolved over the past few decades, and it has become clear that their success is highly reliant on the conditions of the target system; in some cases, the specifics of the thermostat become irrelevant and, in others, these thermostats have led to erroneous or non-physical dynamics requiring the construction of other methods. The use of synthetic thermostats clearly will not do for every application. It is also clear, however, that these synthetic thermostats have been pivotal in allowing for the increase in predictive and interventional capabilities resulting from non-equilibrium molecular dynamics simulations. More so than anything else, it is their significant prevalence in a wide variety of simulations over the last two decades that makes it difficult to see how such an increase in predictive and interventional capabilities would have been possible without them (see Daivis, Benjamin, and Tetsuya 2012). Like artificial viscosity and vorticity confinement, synthetic thermostats are used to make simulations successful, yet they do not exist in the real world.
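To give a feel for how such a fictitious term enters the equations of motion, here is a minimal sketch in the Nosé-Hoover style – one standard family of thermostats, chosen for brevity; the schemes of Evans and Morriss are related synthetic-field methods but not identical to this one, and all parameter values below are illustrative.

```python
import numpy as np

# A minimal sketch of a synthetic thermostat in the Nose-Hoover style.
# The friction variable xi has no counterpart in the target system, yet
# it drags the kinetic temperature toward the prescribed value T_target.

def thermostatted_step(v, forces, m, xi, T_target, dt=1e-3, Q=1.0, kB=1.0):
    """One update of velocities v (an N x 3 array) and thermostat xi."""
    N = len(v)
    kinetic = 0.5 * m * np.sum(v**2)
    # Fictitious thermostat dynamics: xi grows while the system runs hot
    # and shrinks (or goes negative) while it runs cold.
    xi += dt * (2.0 * kinetic - 3.0 * N * kB * T_target) / Q
    # The artificial friction term -xi * v does the thermoregulating work.
    v = v + dt * (forces / m - xi * v)
    return v, xi

# Toy run: force-free particles started "hot" are regulated toward T = 1.
rng = np.random.default_rng(0)
v, xi, temps = rng.normal(scale=1.5, size=(100, 3)), 0.0, []
for _ in range(5000):
    v, xi = thermostatted_step(v, np.zeros_like(v), m=1.0, xi=xi, T_target=1.0)
    temps.append(np.mean(v**2))
print(np.mean(temps[2500:]))  # time-averaged kinetic temperature, near 1.0
```

The friction variable xi is the fiction: nothing in the target system answers to it, yet it is what holds the simulated kinetic temperature at the prescribed value.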

4 Some objections

There are two objections to this line of argument worth considering. The first objection has to do with approximate truth and where, exactly, we ought to be looking for it.3 It is, of course, part and parcel of the success of techniques like artificial viscosity, synthetic thermostats, and the like that

they can be used to build local, representative models that depict their target phenomena with a great deal of accuracy. In other words, if one is building a fluid flow model in astrophysics, say of an intergalactic gas jet, and one uses the von Neumann-Richtmyer method of artificial viscosity, one can use that method to create a highly realistic model of the gas jet. Shouldn’t it be pointed out, therefore, that the models in which artificial viscosity appears, virtually by definition of their being successful, are themselves approximately true – or at least exhibit some close enough cousin of approximate truth to please the scientific realist? (See also G. Schurz, “Truthlikeness and approximate truth”, and A. Levy, “Modeling and realism: strange bedfellows?”, chs. 11 and 19 of this volume.)
Here we think it is worth drawing a distinction between local models of phenomena, like a model that depicts the inner convective structure of a star or the inner flow dynamics of a gas jet, and the broader model-building principles that inform, motivate, or govern those local models. We certainly agree that it is the local models of phenomena – models that are put together using a variety of model-building tools, including not only well-confirmed theory but also bits of physical intuition and lots of calculational tools, including falsifications – that are the instances of local success. But we still insist that some of these falsifications, including those we have described already, have their own, much less local track records of success. And as model-building tools that can be applied across domains of whatever breadth, it is these bits of representational structure that should be compared apples to apples to scientific theories – the domain of concern of the scientific realist. The fact that false model-building principles play a genuinely central role in making specific, fine-grained, systematic, and projectable predictions and interventions then puts pressure, we believe, on the intuition that only true theories could possibly do this.
More recently, we have also frequently encountered the following sort of objection: we already know that artificial viscosity is artificial! We already know that the world does not contain fluids whose viscosity is proportional to the square of the divergence of the velocity field! We already know that there is no “confinement term” in fluid dynamics, and so on. So, these cases are nothing like the cases of phlogiston or bodily humors, which were part of widely accepted theories about how combustion and disease, respectively, worked. Jesper Jerkert has highlighted this worry rather clearly in response to a review of Winsberg (2010):

But are we really forced by notions such as artificial viscosity and silogen atoms in scientists’ models and simulations to reconsider the role of truth and realism in science? After all, are we not all aware that artificial viscosity is a fictitious entity? And don’t we all know that silogen atoms do not exist? In both cases, we should be extremely surprised if we found out that artificial viscosity and silogen atoms do exist. Since the springboard of Winsberg’s argument is the demonstration that there are counterexamples (such as artificial viscosity) to the success-to-truth rule, it should be important to him that the version of the success-to-truth rule he is using is the only reasonable one when discussing scientific realism. But as far as I can see, this is not so. We could simply add another extra parenthesis to the rule, stating the quite obvious clause that we do not believe something to be true in the face of strong arguments against it:

If (the right sort of) X (in its entirety) plays a (genuinely central) role in making (systematic) successful (specific and fine-grained) predictions and interventions (and if we have no other strong reasons for believing X not to be true), then X is (with some qualification) true.

(Jerkert 2012: 174)


There are a few different things to say in response to this line of argument. Recall that the point of the success-to-truth rule is that it was supposed to legitimize a certain kind of inference ticket. You give me some kind of representation of the world, I check to see whether it has certain qualities (specific and fine-grained success, centrality, etc.), and if it does, I get to infer that the representation is true or accurate or approximately true or whatever. I get to do this, so says a defender of the rule, no matter what other misgivings I might have had about the representation, because the possession of those other qualities by the representation would be a miracle if it were not (approximately) true. The question then, the thing that might be contested, the very thing that the examples from simulation are meant to shed light on, concerns the degree to which that inference ticket is defeasible. Given that, it is illegitimate in the present dialectic to include among the list of qualities that go on the input side of the inference ticket a proviso to the effect that the inference ticket is not known to be defeated. The very question was: if I’m worried about the veridicality of some representation, should its possession of those qualities necessarily put my worries to rest?
By way of analogy, suppose, for example, that you are a diamond dealer, and you would like to know whether a certain checklist of properties can assure you that a purported diamond is not fake. Maybe you think something like the following: if a stone is brilliant, can cut glass, and cannot be smashed with a hammer, then it’s a diamond. Now you want to know whether this rule is indefeasible, so you ask a friend to bring you some fake diamonds to make sure that none of them pass the test. Your friend brings over ten fake diamonds, and to your dismay, four of them pass the test. “Don’t worry,” your friend consoles you, “all we need to do is to amend your test by adding one more criterion. As long as the stone is brilliant, can cut glass, can’t be smashed with a hammer, and you don’t have any reason to believe it’s not a diamond, then you can be sure it’s a diamond. None of the stones I brought are counter-examples to the reliability of that test.” This suggestion seems illegitimate, given that the very reason you were trying to devise a test was to allay your suspicions about fake diamonds. Note, furthermore, that this test is logically insusceptible to counter-examples, since any stone you knew to be a non-diamond wouldn’t count.
The anti-realist should feel the very same way about Jerkert’s suggestion. The anti-realist is suspicious of assurances that all scientific representations (with various credentials) are genuinely veridical. The point of the success-to-truth rule was to convince her that she is being paranoid, that her suspicions are far-fetched. It would be a miracle, we are meant to be assuring her, if representations could do what they do without being veridical. But now, we are meant to think, the success-to-truth rule only applies in the absence of doubt to begin with.
Relatedly, given that the whole point of the success-to-truth rule is that it is supposed to ground a no miracles argument, the success-to-truth inference ticket needs to be virtually indefeasible to make any sense. It is supposed to be so indefeasible that finding a counter-example to it would be a miracle.
The very intuition behind the rule and the no miracles argument is that the only thing that could possibly play a (genuinely central) role in making (systematic) successful (specific and fine-grained) predictions and interventions is something true or at least approximately true. If there are any counter-examples to this, they should undermine that intuition, regardless of the provenance of the counter-examples. So even if we try to adapt Jerkert’s strategy and add a clause like “and the representation was not specifically concocted for such and such purpose,” the anti-realist should not find any comfort in this.
Thus, the move that Jerkert and others seem to want to make here misunderstands the dialectical situation. The examples from simulation like artificial viscosity are not meant to underwrite scientific anti-realism. The argument does not go: artificial viscosity is not real, so therefore neither are electrons. The argument goes like this: for whatever antecedent reasons, the anti-realist

wants to place limits on the epistemic scope of science. She doubts that all mature scientific representations should be trusted to the degree that the realist claims. Her interlocutor is trying to convince her that there is a useful and reliable inference ticket that will allay her suspicions. He proposes the rule. Her job then becomes to produce counter-examples to his purported rule, and his job is then to make plausible modifications to the rule to rule out her counter-examples. Having understood the dialectic this way, it should be clear that the modifications to the rule cannot include conditions on the antecedent degree of belief we have in the representation in question, since this is exactly what is at issue between the anti-realist and the realist.
In short, the point of the examples was not to provide motivation to be an anti-realist. The point of the examples was to rebut an argument against the anti-realist. It was to provide, as the title of previous work (“Models of success vs. the success of models”) suggests, an alternative possible model of predictive and interventional success that the anti-realist could hang her hat on – one according to which there are possible sources of systematic, fine-grained, and specific predictive and interventional success other than truth. Viewed in this proper context, Jerkert’s proposed modification does nothing to defuse that point.

5 Optimism saved? – a nod to Wimsatt

The earlier considerations lead to two conclusions: (1) that the target of our earlier argument is an argument against anti-realism based on the success-to-truth rule, rather than an argument for anti-realism, and (2) that defending realism from its critics by modifying the success-to-truth rule faces significant obstacles unlikely to be overcome. Indeed, insisting on defending realism via such a rule might prevent one from recognizing the myriad ways in which scientific work uses false principles in order to set the knowledge, predictions, and interventions embodied in our best models aright.
One philosopher of science who has done a great deal to advance the thesis that false model-building principles can lead to reliable local models of phenomena is William Wimsatt. For Wimsatt, the use of a false model is an aid to achieving truer representations of the world. In what follows, we argue that Wimsatt’s approach can be modified and extended (there is one significant source of disagreement to be discussed, however) to account for the use of artificial viscosity and our other examples. The result is a more promising form of epistemic optimism about science than the one defended by the proponents of the no miracles argument. It is one that makes sense of fictitious model-building principles being reliable for building local models in which we can be epistemically confident. Wimsatt (1987: 28–29) lists seven ways in which a model might be said to be false:

1) A model may be of only very local applicability. This is a way of being false only if it is more broadly applied.
2) A model may be an idealization whose conditions of applicability are never found in nature (e.g., point masses, the uses of continuous variables for population sizes, etc.), but which has a range of cases to which it may be more or less accurately applied as an approximation.
3) A model may be incomplete – leaving out 1 or more causally relevant variables.
4) The incompleteness of the model may lead to a misdescription of the interactions of the variables which are included, producing apparent interactions where there are none (“spurious” correlations), or apparent independence where there are interactions – as in the spurious “context independence” produced by biases in reductionist research strategies.
5) A model may give a totally wrong-headed picture of nature. Not only are the interactions wrong, but also a significant number of the entities and/or their properties do not exist.

257 Eric Winsberg and Ali Mirza

6) A closely related case is that in which a model is purely “phenomenological.” That is, it is derived solely to give descriptions and/or predictions of phenomena without making any claims as to whether the variables in the model exist. Examples of this include: the virial equation of state (a Taylor series expansion of the ideal gas law in terms of T or V); the automata theory (Turing machines) as a description of neural processing; and linear models as curve fitting predictors for extrapolating trends.
7) A model may simply fail to describe or predict the data correctly. This involves just the basic recognition that it is false, and is consistent with any of the preceding states of affairs. But sometimes this may be all that is known.

He adds, however, that “the productive uses of false models would seem to be limited to cases of types 1 thru 4 and 6”. He continues, “[i]t would seem that the only context in which case 5 could be useful is where case 6 also applies, and often models that are regarded as seriously incorrect are kept as heuristic curve fitting devices” (ibid.).

He adds, however, that “the productive uses of false models would seem to be limited to cases of types 1 thru 4 and 6”. He continues, “[i]t would seem that the only context in which case 5 could be useful is where case 6 also applies, and often models that are regarded as seriously incorrect are kept as heuristic curve fitting devices” (ibid.). How do our examples relate to these ways in which a model can be false? Note, first, that arti- ficial viscosity is not false because of its local applicability (1); in fact, it has broad applicability, but it is simply not a correct representation of the real-world systems in broad or local applications. Nor is it an idealization (2): while the value given to artificial viscosity is high around the shock front, there is nothing in the real world that even approaches it. It is neither incomplete (3) nor does it lead to a misdescription (4) between variables (it actually results in the correct descriptions of variables, though it is not so in and of itself ).4 Artificial viscosity, instead, gives an entirely wrong-headed description of reality. Indeed, artificial viscosity, and the other examples discussed earlier, seem to be nice examples of Wimsatt’s type 5 false model. Wimsatt, however, was suspi- cious of the idea that models that are false in way (5) could lead to reliable representations:

Cases 5 and 7 above represent models with little useful purchase . . . The most productive kinds of falsity for a model are cases 2 or 3 above, though cases of types 1 and 4 should sometimes produce useful insights.
(ibid., p. 30)

We agree about case 7. Our examples, however, present an alternative picture of the usefulness of false models (or really, what we would call model-building principles – see earlier) of type 5. Artificial viscosity, vorticity confinement, and, we think, synthetic thermostats show that a model that is false in way 5 may yet have tremendous predictive potential. It might seem intuitive that entirely wrong-headed models could not result in the fine-grained and specific predictions central to scientific work (indeed, this is the very intuition behind the success-to-truth rule!), but such intuitions are shown, by the case studies, to be as wrong-headed as the models themselves.
The disagreement here with Wimsatt is clear but subtle. Wimsatt shows that false models are of assistance to scientific work by helping to set aright our knowledge, interests, and predictions (see Wimsatt 1987). But he hesitates in allowing entirely wrong-headed models to play a central role in scientific predictions. Even Wimsatt felt the pull, it would seem, of the intuition behind the success-to-truth rule. The case studies here, however, require us to broaden what sorts of false models we take as conducive to useful scientific predictions and interventions. The last vestige of the success-to-truth rule must be given up: the truth or falsity of a model-building principle does not determine its predictive or interventional success. One cannot infer the degree of usefulness of a model from the nature of its falsity (at least not independently of the context of application).


This is not to say, of course, that nothing explains why some models are useful and others are not. Wimsatt (1987) says, for instance:

Will any false model provide a road to the truth? Here the answer is just as obviously an emphatic “no!” Some models are so wrong, or their flaws so difficult to analyze that we are better off looking elsewhere. (p. 30)

But whether a false model is of use has more to do with its context of application than it does with the seriousness of its falsity simpliciter. Artificial viscosity is an entirely wrong-headed representation of the area around shock fronts, but its context of application explains in toto why it works. There is no miracle here: because simulations in fluid dynamics cannot be fine-grained enough to allow modeling at the molecular level (leading to a false description), another false model (artificial viscosity) is inserted to counteract the erroneous results of the initial coarse-grained model (it is as if the wrong-headedness of each model cancels out that of the other). In the case of synthetic thermostats, a fictional force is used to imitate the dynamics that would occur if thermal dynamics could be efficiently inserted into the simulations. Following Wimsatt, one might subsume such a use of false models under a higher-level optimism about science: whatever false model-building principles are used in science, they are used in order to provide a better picture of reality at a higher level of analysis. So, while the area around the shockwave may not have the structure assumed by artificial viscosity, assuming such a false structure allows one to predict and describe what happens in fluid dynamics with more reliability and precision; a false model-building principle, artificial viscosity, is used to produce a more reliable (perhaps one could say, if one wanted: a truer) model of shockwave dynamics.
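A minimal sketch may make the mechanism concrete. The following Python fragment shows, under simplifying assumptions, how a von Neumann–Richtmyer-style artificial viscosity term is typically spliced into a coarse-grained one-dimensional flow update; the function name, the array layout, and the constant c_q are illustrative assumptions, not details of any particular solver discussed in this chapter.

```python
import numpy as np

def artificial_viscosity(rho, u, c_q=2.0):
    """Fictional 'viscous' pressure q, nonzero only where a cell is compressing.

    rho: densities at the n cell centres; u: velocities at the n + 1 cell
    edges; c_q: a tunable dimensionless constant (illustrative value).
    """
    rho = np.asarray(rho, dtype=float)
    du = np.diff(np.asarray(u, dtype=float))   # velocity jump across each cell
    q = np.zeros_like(rho)
    compressing = du < 0.0                     # shocks only: expansion gets q = 0
    q[compressing] = c_q * rho[compressing] * du[compressing] ** 2
    return q

# The solver then simply adds q to the physical pressure wherever it appears,
# smearing the shock over a few grid cells so the difference scheme stays stable:
#   p_effective = p + artificial_viscosity(rho, u)
```

The point of the sketch is the wrong-headedness just described: q is deliberately huge near the shock front, far beyond any real molecular viscosity, yet it is precisely this falsehood that keeps the coarse-grained model's predictions right.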

6 Conclusion
The success-to-truth rule, in its various guises, is underpinned by the intuition that truth and success (properly construed) are coextensive. The foregoing considerations from the philosophy of simulation provide us with the opportunity to test the viability of this intuition. The result, we think, is that this broadly realist intuition behind the success-to-truth rule should be abandoned. Even though models like artificial viscosity, vorticity confinement and particles, as well as synthetic thermostats, are outright fictions, their “falsity” excludes them neither from being part of successful models nor from having their own characteristic success in so far as predictions and interventions are concerned. Once success and truth part ways, there is nothing to drive the no miracles argument, and any realist account supported by the argument loses force. Modifications to the rule have traditionally been the preferred route. But they all, for the considerations provided earlier, fail to successfully account for our examples. Moreover, it is hard to see, once one accepts that “fictions” play a role in computational modeling, what the rule has to recommend it. As stressed, this need not mean that one abandon a higher-level optimism about science. Rather, the foundation for such an optimism should not stem from a conflation between truth and success. Indeed, we think that the ability of simulators to set their models aright, in spite of the impossibility of a direct model of the real-world systems, is a reason for a broad epistemic optimism about such scientific work. The fact that not only true models but also fictions can be successfully used in scientific modeling both enhances our understanding of the predictive and interventional power of scientific work and broadens our view of the toolkit that scientists, especially simulators, have at their disposal.


Notes
1 In earlier work (Winsberg 2006a, 2010) one of us has argued that considerations arising from a close look at the practice of computer simulation in the physical sciences provide counter-examples to the “no miracles” argument for scientific realism. This section mostly follows the presentation of the issue in those texts.
2 For a discussion of silogen atoms, see Winsberg (2006b, 2010). We do not discuss them in detail here in part because the methods they are involved in do not seem to have become as successful as they once looked likely to become.
3 This objection and its reply are discussed in Winsberg (2006a).
4 The models in which “artificial viscosity” is likely to be used are, of course, commonly incomplete and so are an instance of (3). However, “artificial viscosity” deserves to be treated as a model in and of itself owing to its history of success independent of any such individual model. Treated as a model in its own right, with its own qualities and properties, artificial viscosity is not incomplete but, rather, entirely erroneous.

References
Daivis, P. J., Benjamin, A. D. and Tetsuya, M. (2012) “Effect of Kinetic and Configurational Thermostats on Calculations of the First Normal Stress Coefficient in Nonequilibrium Molecular Dynamics Simulations,” Physical Review E 86(5), 056707.
Evans, D. J. (1982) “Homogeneous NEMD Algorithm for Thermal Conductivity – Application of Non-Canonical Linear Response Theory,” Physics Letters A 91(9), 457–460.
Evans, D. J. and Morriss, G. (2008) Statistical Mechanics of Nonequilibrium Liquids, Cambridge: Cambridge University Press.
Jerkert, J. (2012) “Science in the Age of Computer Simulation – by Eric Winsberg,” Theoria 78(2), 168–175.
Kitcher, P. (2002) “On the Explanatory Role of Correspondence Truth,” Philosophy and Phenomenological Research 64(2), 346–364.
Rondoni, L. and Mejía-Monasterio, C. (2007) “Fluctuations in Nonequilibrium Statistical Mechanics: Models, Mathematical Theory, Physical Mechanisms,” Nonlinearity 20(10), R1.
Selle, A., Rasmussen, N. and Fedkiw, R. (2005) “A Vortex Particle Method for Smoke, Water and Explosions,” ACM Transactions on Graphics (TOG) 24(3), 910–914.
Steinhoff, J. and Underhill, D. (1994) “Modification of the Euler Equations for ‘Vorticity Confinement’: Application to the Computation of Interacting Vortex Rings,” Physics of Fluids 6(8), 2738–2744.
Williams, S. R., Searles, D. J. and Evans, D. J. (2004) “Independence of the Transient Fluctuation Theorem to Thermostatting Details,” Physical Review E 70(6), 066113.
Wimsatt, W. C. (1987) “False Models as Means to Truer Theories,” in M. Nitecki and A. Hoffman (eds.), Neutral Models in Biology, Oxford: Oxford University Press, pp. 33–55.
Winsberg, E. (2006a) “Models of Success versus the Success of Models: Reliability without Truth,” Synthese 152(1), 1–19.
——— (2006b) “Handshaking Your Way to the Top: Simulation at the Nanoscale,” Philosophy of Science 73(5), 582–594.
——— (2010) Science in the Age of Computer Simulation, Chicago: University of Chicago Press.

21 SCIENTIFIC REALISM AND SOCIAL EPISTEMOLOGY

Martin Kusch

1 Introduction
Scientific realism (SR) is a view of scientific knowledge, and scientific knowledge obviously is the product of research groups, traditions, schools of thought, or paradigms. And yet, these social dimensions of scientific knowledge have not been at the forefront of SR theorising. Work on these dimensions has, however, been prominent in various forms of social epistemology. This chapter seeks to continue a conversation over the relationship between social epistemology and SR.
“Social epistemology” can be understood broadly or narrowly. On the broad understanding, it covers all systematic reflection on the social dimension or nature of cognitive achievements such as knowledge, true belief, justified belief, understanding, or wisdom. The sociology of knowledge, the social history of science, and the philosophy of the social sciences are among the key parts of social epistemology thus understood. On the narrow understanding, social epistemology is primarily a philosophical enterprise and has its roots in Anglo-American epistemology, in feminist theory, as well as in the philosophy of science (Kusch 2011: 873).
In this chapter I shall focus on one ingredient of the broad understanding, to wit, the “sociology of scientific knowledge”, or “SSK” for short. It is this ingredient that has stimulated most debate with, and amongst, philosophers interested in SR. Some philosophical commentators take SSK to be incompatible with SR, others take it to fit with SR. I shall concentrate here on the contributions of four authors who exemplify different possible stances: Jeff Kochan (2008, 2010), Tim Lewens (2005), David Papineau (1988), and Nick Tosh (2006, 2007, 2008). Kochan, Lewens, and Papineau take different conciliatory lines, while Tosh opts for irresolvable disagreement.
I shall follow these authors’ selection as to which strand within SSK most interestingly engages with scientific realism. This strand is the “Strong Programme” in SSK, developed first and foremost by Barry Barnes, David Bloor, and Harry Collins. (Bloor and Barnes will figure more prominently since their writings are philosophically richer than Collins’s works.) I will try to defend an irenic solution to the dispute over the relationship between SR and SSK thus understood. Against Kochan I shall argue that there is more SR in SSK than he allows for. Against Tosh I shall seek to establish that the realism of SSK is not in conflict with other elements of the programme. And finally, against Lewens and Papineau, I shall maintain that a reliabilist version of SR is unable to block the sociologists’ relativism.


2 The strong programme
Philosophers and sociologists disagree over how SSK is best defined. But no-one disputes that Bloor’s “four tenets” of the Strong Programme are central. They are:

1 It [i.e. the Strong Programme] would be causal, concerned with the conditions which bring about belief or states of knowledge. Naturally there will be other types of causes apart from social ones which will cooperate in bringing about belief.
2 It would be impartial with respect to truth and falsity, rationality or irrationality, success or failure. Both sides of these dichotomies will require explanation.
3 It would be symmetrical in its style of explanation. The same types of cause would explain, say, true and false beliefs.
4 It would be reflexive. In principle its patterns of explanation would have to be applicable to sociology itself. (Bloor 1991: 7)1

One famous historical study in the sociology of scientific knowledge that clearly follows at least the first three tenets is Steven Shapin’s paper on phrenology in early-nineteenth-century Edinburgh, “Homo Phrenologicus” (1979). Shapin begins by noting that anthropologists have identified three kinds of social interests that motivate preliterate societies to gather and sustain knowledge about the natural world: an interest in predicting and controlling events in the natural world, an interest in managing and controlling social forces and hierarchies, and an interest in making sense of one’s life situation. The first-mentioned interest hardly calls for further comment. But how does an interest in social control relate to knowledge about the natural world? The answer is that people everywhere use knowledge about the natural world to legitimate or challenge social order. It is almost invariably regarded as strong support for a given social arrangement if it can be made out to be “natural”, that is, in accord with the way the (natural) world is.
Shapin argues that the same three kinds of interests can also be found sustaining scientific knowledge – phrenological knowledge in early-nineteenth-century Scottish culture, for example. Phrenology was developed in late-eighteenth-century Paris by Franz Josef Gall and Caspar Spurzheim. In Edinburgh these ideas were taken up and championed by various members of the rising bourgeoisie. Phrenologists believed that the mind consists of twenty-seven to thirty-five distinct and innate mental faculties (e.g., amativeness and tune). Each faculty was assumed to be located in a distinct part, or “organ”, of the brain. Moreover, the degree of possession of a given faculty was thought to be correlated with the size of the respective organ. And since the contours of the skull were taken to follow the contours of the cerebral cortex, phrenologists believed that they could “read off” the skull of a person which faculties he or she possessed and to what degree. Phrenologists believed that the faculties were innate, but they allowed that the environment could have a stimulating or inhibiting effect upon the growth of the brain organs. They also held that social values and feelings were the outcome of an interaction between individuals’ innate faculties and the institutions of a particular society.
How then did this theory serve the aforementioned three interests? There can be no doubt that the phrenologists were genuinely curious about the brain as a natural object. They amassed an enormous amount of detailed knowledge about the convolutions of the cortex; they were the first to recognize that the grey and white matter of the brain have distinct functions; and they noticed that the main mass of the brain consists of fibres. They clearly collected as much information about the brain as they could – with their limited means – hoping eventually to be able

to explain more and more of the brain’s structure and functioning. Thus the interest in prediction and control was obviously important.
As far as the other two interests are concerned, it is important to note that the advocates of phrenology came from the bourgeois and petty bourgeois strata of society. At the time, these strata were moving up in society. Traditional hierarchies and forms of social control were breaking down as commercial interests became more dominant. The economy was rapidly undergoing a shift from a traditional agricultural to a modern industrial system. This shift weakened the old aristocracy and worked to the advantage of the middle classes. Phrenology was used as an argument in favour of the change. First, it considerably increased the number of mental faculties over the traditional six. An increased number of mental faculties provided a natural argument for a greater diversity of professions and division of labour. Second, the new faculty of “conscientiousness” explained the new social reality of competition and contest: this was the faculty that allowed people to compare their standing with that of others. And third, phrenology also made sense of the experience of collapsing hierarchies. Traditional philosophy had put a heavy emphasis on the boundary between spirit and body – metaphorically, “spirit” stood for the governing elite, “body” or “hand” for the workforce. Phrenologists stopped short of equating body and mind, but they made the brain the organ of the mind. In other words, phrenological theory was popular among the rising bourgeoisie since it allowed the latter both to feel at home in the changed socioeconomic situation and to argue against the dominance of the old aristocracy.
It is easy to see that the first three tenets of Bloor’s programme – causality, impartiality, and symmetry in style of explanation – are central to Shapin’s analysis. Shapin’s study proposes a causal explanation for the fact that the members of the Edinburgh bourgeoisie tended to favour phrenology over other theories of the mind. The relevant cause was their interest in making sense of their social situation in a changing society in a way that benefited them. Shapin does not say or imply that this social interest was the only cause of the belief in phrenology. Indeed, his reference to the role of the interest in prediction and control (of the natural world), and thus to the phrenologists’ detailed brain mapping, suggests that other causes, for instance the phrenologists’ observations about the brain, were also causes of their belief. Furthermore, Shapin’s analysis is impartial; he does not attempt to determine which parts of the phrenologists’ or the traditional philosophers’ theories were true or false, successes or failures. Shapin’s mode of investigation is simply blind to these differences. And thus Shapin’s style of explanation is also “symmetrical”: the same types of cause explain true and false beliefs. That is, the phrenologists’ various social interests explain (in part) why they opted for their theory, both for the parts we now regard as true and for the parts that we now regard as false.

3 Papineau’s conciliatory response
Having introduced the “Strong Programme” in general terms and with an example of one of its most celebrated case studies, I can now turn to the philosophical debate over its relationship to realism in general and SR in particular. I begin with Papineau’s contribution.
In his paper “Does the Sociology of Science Discredit Science?” (1988), Papineau defends a negative answer to his title question. Papineau wishes to determine what follows for SR from the fact that, according to the SSK theorists’ case studies, scientists often do not behave as traditional rationalist images of science would lead us to expect. That is to say, these studies often portray scientists as influenced by factors that the realist would not see as “good reasons” for the scientists’ beliefs (1988: 37).
Papineau’s central idea is borrowed from epistemology. He distinguishes between “Cartesian” and “naturalized” epistemology. Cartesian epistemology is a form of epistemic-internalist foundationalism. It holds that, to be appropriately epistemically justified, our beliefs must be based on

good reasons accessible to our consciousness (1988: 39). Naturalized epistemology is an externalist form of reliabilism about epistemic justification. For a belief to be justified it is sufficient that it is produced by a reliable process. It does not matter whether the holder of the belief is aware of this process. Arguments for beliefs are not without interest and importance, but they are not always necessary. Even a non-conscious belief can be justified (1988: 41). Naturalized epistemology has a prescriptive side, too: individuals and groups should seek to develop ever more reliable belief-forming techniques (1988: 43).
Papineau maintains that it is Cartesian epistemology – and Cartesian epistemology alone – that cannot but take SSK to be discrediting science. The first step of the argument supporting this conclusion is the idea that Cartesian epistemology is naturally thought of as a form of anti-realism. This is so because Cartesians conceive of reason as prior to truth: “‘Truth’ and ‘reality’ . . . are simply epithets attached to the picture of the world that reason leads us to” (1988: 46–47). Moreover, and this is the second step, “rationality is by definition the way that scientists think” (1988: 49). And, step three, there is the rub: if SSK is right, then the reasoning of even highly successful scientists contains elements that intuitively should not be there (such as social-political interests). This is a conclusion that the Cartesian is unable to accept. And hence she has to conclude that SSK discredits science (1988: 49).
Papineau thinks that naturalized epistemology can respond to SSK differently. To begin with, naturalized epistemology is a form of realism rather than of antirealism. This means, according to Papineau, that truth is prior to reason. Moreover, naturalized epistemology does not seek to justify standards of rationality with reference to how scientists think. Epistemic standards are justified if and only if they are in fact reliable techniques for reaching a high proportion of true over false beliefs. It follows from this, Papineau alleges, that naturalized epistemology is not forced to assume that SSK case studies discredit science (1988: 51). More precisely, Papineau holds that the overall structure of scientific practice would not be reliable for truth if the processes bringing about scientific beliefs included “only social factors”. But the results of SSK do not establish this conclusion (1988: 52).

4 Lewens’s ambivalent response
Lewens’s “Realism and the Strong Program” (2005) picks up the thread where Papineau left it seventeen years earlier. Lewens pushes the argument further by attending not just to SSK case studies but also to Barnes’s and Bloor’s theoretical pronouncements. Moreover, Lewens focuses on agreements as well as disagreements between SSK and SR. Whereas Papineau had merely insisted that the naturalized epistemologist is not forced to think of SSK as discrediting science and SR, Lewens even finds some statements of Barnes and Bloor congenial to SR. He applauds Bloor’s statement that “(non-social) nature plays a central role in the formation of belief” (Bloor 1999: 102) and Barnes’s pronouncement that “talk of ‘external reality’ is thoroughly justified and sensible” (Barnes 1992: 135; Lewens 2005: 560).
Indeed, Lewens even finds little to disagree with in the four tenets of the Strong Programme. The realist too seeks to give causal explanations for beliefs, and although social causes will often be distal rather than proximate,2 even the distal role “seems enough to ground empirical sociology of knowledge”. The requirements of impartiality and reflexivity are likewise realist common sense: the realist too thinks of all beliefs as caused, and he has no objections to the idea that the beliefs of sociologists require causal sociological explanations as well (2005: 562–563). Lewens spends more time analyzing the symmetry tenet, but his primary concern is to shield it from widespread misunderstandings. For instance, Bloor’s symmetry requirement is not that true and false beliefs have exactly the same explanations. It is the requirement that true and false beliefs are accounted for using “the same family of explanatory concepts” (2005: 563).


Lewens thinks that reliabilist externalism often fits with the symmetry tenet. Take two individuals with the same reliable system of vision, one of whom is looking into a normal cubic room while the other is looking into a trapezoid Ames room. The first will acquire justified and true beliefs, the second unjustified and false beliefs. And yet, as far as the neurological level is concerned, both beliefs receive the same causal explanation (2005: 565). Lewens also reminds his readers that reliability is context dependent and sometimes even community dependent. It is the latter when the reliability of one’s testimonial beliefs depends upon a sufficient number of truth tellers in one’s environment (2005: 566).
Turning from agreement to disagreement, Lewens finds fault with Bloor’s use of explanatory contrasts. The key passage in this context is one of Bloor’s methodological comments when discussing the dispute between Robert Millikan and Felix Ehrenhaft over the possibility of sub-electronic charges (cf. Holton 1998: 25–83; Franklin 1986: 140–164; Barnes, Bloor, and Henry 1996: 18–45). Bloor grants that today “we believe . . . that Millikan got it basically right” and that thus “electrons . . . did play a causal role in making him believe in . . . electrons”. So far, so good, as far as Lewens is concerned. The problem is with the way Bloor continues:

But then we have to remember that (on such a scenario) electrons will also have played their part in making sure that Millikan’s contemporary Felix Ehrenhaft didn’t believe in electrons. Once we realize this, then there is a sense in which the electron “itself” drops out of the story because it is a common factor behind two different responses, and it is the cause of the difference that interests us. (Bloor 1999: 93; Lewens 2005: 572)

This is the part of Strong Programme methodology Lewens finds unpalatable, at least if it is generalized to cover all SSK explanations. His counterexample involves “Bigfoot” hiding in a cave. Jim enters the cave and sees the creature; John stays outside to sleep. Lewens insists that if we are to explain why Jim believes that Bigfoot is in the cave and why John does not, Bigfoot cannot drop out of the story. Lewens thinks this case generalizes to science. Often the best explanation for a difference in belief between two disagreeing scientific communities is that one was exposed to a different part of the world than the other. Lewens alleges that Barnes and Bloor, in earlier work, had in fact “conceded” this very point when they wrote that “certainly any differences in the sampling of experience, and any differential exposure to reality must be allowed for” (Barnes and Bloor 1982: 35; Lewens 2005: 573).
Lewens also objects to Bloor’s writing that “there are no absolute proofs to be had that one scientific theory is superior to another: there are only locally credible reasons” (1999: 102; Lewens 2005: 574). He detects here the Cartesian internalist epistemology that we saw Papineau contrasting with externalist reliabilism. It is true that we cannot prove to others that our theories or standards are superior. But from this it does not follow that there are only locally credible reasons. As Lewens has it, we need not be able to show that our rational standards are reliable for them to be reliable (2005: 576).

5 Tosh’s uncompromising response
Tosh agrees with Lewens’s critical part but not with the latter’s conciliatory comments. Tosh’s main goal is to argue that it “is impossible coherently to espouse the claims of the Strong Program while recognising the existence of scientific knowledge” (2006: 686). The argument in essence is this. A true belief that p can sometimes be explained by the fact that p. But a false belief that q cannot, in any way or form, be explained by the fact that q. After all, there

is no such fact. Applied to scientific knowledge: if advocates of SSK recognize the existence of scientific knowledge, then they must allow that scientific knowledge that p is causally connected to the fact that p, and that false beliefs that q are not so connected.3 And this breaks the symmetry between the explanation of true and false beliefs.
Put differently, Tosh makes two points, one trivial, the other substantive. The trivial point is that if there is no fact that q, then a fortiori q cannot be used to explain the belief that q. The substantive claim is that the trivial observation leaves open the possibility that the fact that p might relevantly be cited to explain someone’s belief that p. For instance:

. . . if we believe both that electrons have (and always have had) a charge-to-mass ratio of 1.76 × 10¹¹ C kg⁻¹, and that J. J. Thomson believed that electrons have a charge-to-mass ratio of about 10¹¹ C kg⁻¹, then we are very likely to want to tell a causal story relating the latter to the former. (2007: 687)

Bloor grants that “we believe . . . that Millikan got it basically right” and that thus “electrons . . . did play a causal role in making him believe in . . . electrons”. Tosh reads this as a commitment to SR. But Tosh finds this commitment in contradiction with the symmetry principle. As we saw, Bloor thinks that Ehrenhaft also interacted with electrons and that “the electron itself ‘drops out’ of the story because it is a common factor behind two responses”. As Tosh has it, Millikan’s true belief that there are electrons is at least in part explained by the existence of electrons, whereas Ehrenhaft’s false belief that there are subelectronic charges is not explained by the existence of either electrons or subelectronic charges (Tosh 2006: 687). More precisely, it is the difference in how Millikan and Ehrenhaft set up their experiments that explains why Millikan came to believe in the existence of electrons and why Ehrenhaft did not. But this difference is explanatory only on the assumption that electrons exist and have a charge of about 1.6 × 10⁻¹⁹ C (2007: 190). Tosh is happy to concede that the true charge of the electron is not the “complete” causal explanation for Millikan’s belief in electrons. But he deems it likely that a proper causal account of how Millikan and Ehrenhaft arrived at their respective views will end up invoking (what we take to be) the correct charge of the electron. And this use of electron physics will not be symmetrical (2007: 191).
Tosh considers a number of possible objections to his argument. The most important of these objections builds on Ian Hacking’s claim that “we should not explain why some people believe p by saying that p is true”. Hacking asks us to consider how we explain why some scientist came to believe in the existence of a “Big Bang” in cosmology. We might provide a long list of reasons, Hacking assumes, but the actual truth of the Big Bang theory will not be one of them. Tosh is not convinced. He accuses Hacking of conflating explanation with justification. The truth of the Big Bang theory cannot be one of the actor’s reasons for believing it, but it might still explain why the actor came to believe it (Hacking 1999: 81; Tosh 2006: 691).

6 Kochan’s defence of the strong programme
Kochan seeks to defend the Strong Programme against Lewens and Tosh. Like Lewens, he is concerned to find common ground between SSK and SR; and unlike Tosh, Kochan denies that Strong Programmers are or should be committed to SR.
Kochan repeatedly emphasizes that SSK is happy with at least a weak form of realism, that is, the view that there exists a mind-independent world (2008: 25). But this conception is not as ambitious as SR. What then is the problem with the latter? As Kochan sees it, SR treats scientific knowledge as a “resource” rather than as a “topic” for historical explanations

of scientific beliefs. This means that scientific realists “explain the credibility of scientific beliefs on the basis of their correspondence to an inherently structured reality” (2010: 131). In so doing, scientific realists assume that correspondence is a causal relation and that there is “a special form of perception” that puts us in touch with “absolute feature[s] of the world” (2010: 137). By contrast, SSK assumes nothing of the sort.
To make this plausible Kochan follows Tosh in focusing on Bloor’s statement that “we take for granted that trees and rocks, as well as electrons and bacilli, have long been stable items amongst the furniture of the world” (Bloor 1999: 86; Kochan 2010: 130). As Kochan has it, this is not a commitment to SR: Bloor merely states what “we” in ordinary life take for granted. Our local tradition does “compel us to judge in favour of Millikan’s theory” (2010: 137). But Bloor does not thereby commit himself to thinking that, in positing that electrons and bacilli exist, science has hit upon the one inherent structure of the world. For Bloor the world has no such structure. In fact, nature does not determine the one correct theory about it, and it allows for a multitude of descriptions and classifications (2010: 131). This does not mean that Bloor is sceptical about scientific knowledge. On the contrary, SSK even uses scientific knowledge “as a resource in sociological explanation” (2010: 132).
Kochan suggests that the ideas of the last paragraph must be seen as operative when Bloor speaks of electrons as playing a causal role in Millikan’s and Ehrenhaft’s experiments. Here Bloor is not using the term “electron” in the sense in which it is used by Millikan. Nor does “electron” stand for something that a scientific belief can “track” (2010: 131). Instead the term stands for “the natural causes, or ‘states-of-affairs’ in the world, which produced the experimental data of both Millikan and Ehrenhaft” and which can be interpreted in different ways (2010: 132). Put differently, “the natural attitude” which takes the existence of electrons for granted is “inappropriate for sociologists and historians” who seek to explain how our belief in electrons came about. The same applies also to Lewens’s Bigfoot scenario: if we want to explain why the person entering the cave took himself to be seeing Bigfoot, we cannot use Bigfoot himself as a cause. To do so would be to use Bigfoot as a resource and as a topic in one and the same explanation. And this Kochan regards as unacceptable (2010: 134–135).

7 An irenic resolution
The viewpoints of Papineau, Lewens, Tosh, and Kochan are difficult to reconcile. There is no alternative to deciding on the correctness of the respective readings of SSK. A useful place to start is the distinction between three different kinds of realism:

A minimal realism: the view that there exists a mind-independent world;
B the unreflective realist talk of everyday life, that is, the “natural attitude” of talking about rocks and trees, electrons and bacilli as things;
C scientific realism vis-à-vis the natural and/or social sciences. This involves three claims:
a the metaphysical view that “the world has a definite and mind-independent natural-kind structure”;
b the semantic view that scientific theories in the mature sciences are approximately true, and that the relevant theory of truth is the correspondence theory; and
c the epistemic view that the predictively successful scientific theories of the mature sciences are well confirmed. (Psillos 1999: xix)


It should be uncontentious that SSK theorists are committed to (A). They are not Berkeleyan or Hegelian idealists. To jump from (A) to (C), note that Stathis Psillos’s definition and book-length defence of SR does not involve two of the features Kochan attributes to SR, to wit, that SR makes correspondence a causal notion and that it involves a special kind of perception. The first claim is moreover explicitly denied by Lewens (2005: 570). Insofar as SR is not committed to these claims, the distance between SR and SSK is reduced.
Turning to (B), there seem to be good grounds for attributing to SSK theorists central ingredients of an SR about the social sciences: practitioners of SSK have no scruples about making explanatory use of the theoretical and unobservable posits of a wide range of social theories. Examples are classes and their interests, groups, or social structures. Of course, social kinds differ from natural kinds in not being mind independent.4 And practitioners of SSK do not believe that the theories of SSK are predictively successful – at least not over and above the general prediction that all scientific knowledge has social variables. But these two provisos to one side, at least when it comes to basic categories of social life, such as “group”, “interest”, “common knowledge”, and the like, SSK theorists never use the idea that reality has no definite structure or that it allows for numerous and equally acceptable alternative conceptualizations. Furthermore, the theorists insist that SSK case studies can be and often are true to the historical facts.
SSK’s straightforward and bold realism about the social world has occasionally been challenged from outside the field, that is, by practitioners of ethnomethodology (e.g. Lynch 1992). But this challenge has not been able to weaken the social-SR of authors like Barnes, Bloor, Harry Collins, or Shapin.
The key question in the present context is of course how SSK stands vis-à-vis SR about the natural sciences. This topic is complicated. Let me begin with a couple of comments on how SSK theorists use natural-scientific knowledge. When SSK scholars investigate a specific scientific claim p, they rely on the first three tenets of the Strong Programme in order to see the credibility of p as being in principle as problematic as that of imaginable or real alternatives. But note that this method of making social processes salient is only ever applied to one specific claim or theory at a time. And while this one claim or theory is turned into a topic of research, the rest of science remains in the position of a taken-for-granted resource. For instance, when studying how Millikan’s claims about electrons became credible in physics, the SSK scholar freely speaks about atoms, electric currents, gravitation, and much else (unobservable) besides (Bloor 1991: 177; cf. Collins 2004: 758, 793–794).
There is a further way, too, in which the SSK theorist relies on scientific knowledge as a resource. Theorists like Barnes or Bloor have always been keen to be “naturalists” about the social (Bloor 1999: 87). That is to say, against philosophers like Peter Winch or fellow sociologists like Harry Collins, they have tried to integrate their sociological theses with other scientific fields focused on human capacities. Particularly important in this respect has been the psychology of perception and its philosophical interpretation at the hands of Jerry Fodor and Paul Churchland.
Thus in their joint book (written together with John Henry), Barnes and Bloor discuss a range of theories of perception and side with Fodor’s modularity thesis against Churchland’s insistence on plasticity (Barnes, Bloor, and Henry 1996: 1–17). Here too SSK theorists are not troubled by their own talk about unobservable entities and structures.
In order to penetrate more deeply into SSK theorists’ attitude towards SR, we need to take notice of some of their more general views regarding the nature of natural knowledge, in particular Mary Hesse’s “network model”. After a brief summary of Bloor’s rendering of the model (Bloor 1982; cf. Barnes 1981) I will discuss its compatibility with SR.
The basis of the model is an idealized account of naming. Language learners are taught to associate specific words with conventionally discriminated things or features of the environment.


Call the latter “exemplars”. Humans have the ability to generalize. They apply their terms in new circumstances on the basis of similarity judgements. New cases may or may not be bracketed with the exemplars. Bloor suggests that the same basic ability to recognize similarities is still operative in science. He mentions the use of models and analogies as clear cases of this phenomenon (1982: 270).
Our primitive sense of similarity often allows for more than one way of developing our classificatory scheme. Some of these developments are acceptable to other speakers of the language, and some are unacceptable. In other words, our similarity judgements are frequently overruled. Bloor puts much emphasis on the fact that the model, as outlined so far, points to the importance of both a psychological and a sociological factor; the former comprises our perceptual capacities and primitive sense of similarity, while the latter concerns the interaction between speakers and the role of convention (1982: 271).
Within a system of classification different kinds of entities are connected by “elementary laws”, for instance “fire is hot”. These laws involve probability estimates of the form: the occurrence of stimulus A makes the occurrence of stimulus B probable. Bloor emphasizes a sociological perspective on such laws, suggesting that many of them have the “status of conventional typifications” and are learnt from accepted authorities. This makes them “collective representations” (as Durkheim would have called them). Bloor notes three features of such laws. First, they extend the area in which a classification can be confidently applied. Second, the laws need not be true for technologies informed by them to be successful. After all, the steam engine was a technology initially based on the caloric theory of heat. And third, laws always form networks (of two or more laws). An example of such a system of laws is our knowledge concerning mammals (including how they differ from fish; 1982: 272–273).
Networks of (empirical) laws often face problems with new cases. Consider what happened to our knowledge of mammals and fish when we discovered whales. These odd creatures are like fish in spending their whole life in the oceans and like mammals in suckling their young. Are whales fish that suckle their young, or are they mammals that spend their life in the oceans? The answer is underdetermined by perceptual similarities. This brings home the point that our verbal rendering of experience is a matter of both our sense perception and our responsiveness to networks of laws. Moreover, every element of the network is in principle open to negotiation. Each and every element can be given up, as long as the appropriate changes are made elsewhere in the network (1982: 274).
At the same time, the network cannot be changed arbitrarily: classificatory decisions must be made in light of experience. Hesse spoke of this feature as the “correspondence postulate”. Bloor suggests “adaptation postulate” instead, since he regards the allusion to the correspondence theory of truth as misleading. As Bloor has it, the correspondence theory of truth implies “structural identity” of a fact and a belief, or “the perfect reflection of reality in knowledge”. But this is not what Hesse was after – or should have been after. The network model comes with the assumption that “reality is indefinitely complex” and that all networks of laws simplify the experience they are rendering intelligible.
No one network can hence be the whole truth (1982: 278).
The adaptation postulate is not the only factor which explains the relative stability of our networks of laws and concepts. At least equally important are the efforts by their users to protect certain parts of the network from change, using the rest of the network in doing so. In so doing they assume the protected parts to be true or self-evident, “but this will be a justification for the special treatment rather than the cause of it” (1982: 283). The sociologist will seek the causes amongst the beliefs and interests of the users, not amongst the properties of laws. Parts of networks that attract efforts to protect them are of two main kinds: models, metaphors, and analogies

on the one hand and boundaries or distinctions on the other hand. Hesse sought to capture these protective strategies with the concept of “coherence conditions”: these are factors which govern a whole network of laws. Hesse suggested that “culturally conditional metaphysical principles” might qualify. Bloor prefers to take the idea in a different direction. He follows the anthropologist Mary Douglas’s proposal according to which metaphysical principles are part and parcel of our attempts to control others around us. We so construct our knowledge of nature that we can use it to justify our preferred social arrangements (1982: 283).
All this is not to suggest that what we call “knowledge” is just a fairy tale invented for political purposes. It is to maintain instead that “knowledge” is the resultant of two vectors: the vector of experience and the vector of convention and social interests (cf. Bloor 1991: 32). The social dimension can never be filtered out. Moreover, and to repeat, nature is indefinitely complex, and every network is a simplification. It follows that no network can ever cut nature at its joints. There thus is no one unique set of natural kinds.
Elsewhere Bloor and Barnes claim that the model also suggests three further ideas. To begin with, “all cultures are equally near to nature” insofar as all networks “engage with nature according to the same general principles” (Bloor 1999: 88; cf. Barnes 1982: 316). Furthermore, the model “has no place for the myth, much beloved by many realists, that science progresses by converging on the truth” (Barnes 1992: 143). And finally, it is a mistake to think that if a theory is predictively successful, then “its terms must stand in a one-to-one link to the things mentioned”: the predictive success of a theory is always the predictive success of the theory as a whole. This makes it illegitimate to attribute this success atomistically to the alleged reference of some terms of the theory (1999: 94).
Let us take stock. On a first reading the “network model” might seem incompatible with SR: no convergence on the truth, no inference from predictive success to truth or reference, no correspondence theory of truth, no unique set of natural kinds, reality indefinitely complex, and all cultures equally near to nature. Can you get further from SR? On a second, closer look things are a little less clear-cut. First, the scientific realist too can accept that reality is indefinitely complex and that all classifications and theories of natural processes are therefore simplifications. This claim seems to be just realist common sense. Of course the realist insists that some simplifications are better than others given certain purposes. But it is hard to see how the SSK theorists could disagree. Second, reconciliation might look less likely concerning the issue of natural kinds. If SR insists on one unique set of natural kinds, then SSK and SR are incompatible. Note however that this assumption of unique natural kinds is not accepted by some authors who call themselves realists (e.g. Dupré 1993; Hacking 1991). Third, the claim that all cultures are equally near to nature is innocuous if it merely means – as it seems to mean – that all cultures develop their classifications on the basis of the same basic psychological and social mechanisms and that long-living cultures have successfully adapted their beliefs and belief-forming techniques to nature.
Fourth, Bloor’s opposition to truth as correspondence seems to be an opposition to a rather specific version of this theory: to wit, the view that a belief or theory could be “a perfect reflection of reality” – without simplification or idealization – and that it is in principle possible for us to arrive at the whole truth about the world. These are claims that most sensible scientific realists will reject, too. Fifth, when SSK theorists attack “naïve realism”, their targets are philosophers, not scientists. I suspect Barnes and Bloor would agree with Collins, who writes, “I endorse realism as an attitude both for scientists at their work and for sociologists at theirs” (2004: 15). There is no suggestion in the context of this endorsement that would limit the realism to observables. Barnes and Bloor attack what they regard as the philosophers’ “naïve realism” because they see the latter as focused on unification and inevitability. The SSK theorists insist that there is no reason to assume that science

will converge on a single unified theory of everything. They see this as a central implication of the network model. There may well have been a time when SSK and SR differed on this score. Today the situation is less clear. After all, one of SR’s most prominent defenders, Howard Sankey, has recently written explicitly against saddling SR with claims concerning the inevitability of scientific progress (Sankey 2008; cf. Kinzel 2015).
I have argued that SSK theorists are scientific realists of sorts regarding some areas of the social and cognitive sciences. And I have suggested that some of their opposition to SR has become obsolete in light of recent developments in SR. But I do not want to downplay those passages in which Bloor and Barnes directly challenge SR’s core assumptions. For instance, Barnes insists that the SSK theorists’ interpretation of Hesse’s network is “uncompromisingly ‘instrumentalist’” (1981: 307). And Bloor attacks the “naïve” assumption of a “one-to-one link” between terms of a predictively successful theory and natural kinds in the world. (“If the talk is about electrons or microbes, then there must be electrons or microbes”; 1999: 94.) Of course, SR is not committed to simple-minded “one-to-one links” between all theoretical terms and natural kinds, and today structural SR is the preferred option in many quarters anyway. But I doubt that Bloor would withdraw his criticism in light of these corrections and modifications. There is some hope of a rapprochement, however, in Bloor’s further comment in the same context:

Obviously individual terms in the theory will have individual occasions of use. We talk about these electrons, these microbes, these lines of force, and so on. On those occasions particular experiential episodes will prompt the application of our terms, but that doesn’t mean some uniquely direct or successful reference has been achieved. The entire system of classification is implicated and, before long, this may change. (1999: 94)

Thus talk of electrons and microbes is all right as long as we recognize that the referential links to features in the world are indirect, partial, and mediated by the entire fallible system of classification. This is not quite what a hard-core SR has in mind, but neither is it scepticism about referential links tout court.
Note also that Bloor does not present anything like a pessimistic metainduction. It is true that, at one point, Bloor uses the idea that all scientific theories sooner or later face competitors or alternatives (1999: 106). But he does not conclude from this that we should suspend judgement with respect to our current theories – or at least their theoretical entities. Bloor’s goal is rather to deal with a difficulty that emerges when three ideas meet: that an SSK analysis of scientific knowledge seeks to identify its conventional character; that we can speak of conventions only where it makes sense to speak of alternative conventions; and that for our current best science it is often difficult to identify such alternatives. Bloor’s response to this conundrum is an “optimistic induction” based on SSK case studies concerning past science: SSK scholarship has always found, in the historical record, competitors to past scientific theories; hence such competitors are likely to emerge for our current best science as well. It follows that no scientific theory is in principle beyond sociological analysis.
We saw that the relationship between the unreflective realism of everyday life (=B) and SR (=C) is one of the contentious issues between Tosh and Kochan. Tosh sees SSK as committed to SR on the grounds that Bloor speaks of electrons and bacilli as things that “we” accept. Kochan is right to insist that things are not that simple. And yet it is not easy to accept Kochan’s interpretation either. He says both that as members of our culture we are “compelled” to believe in electrons and that sociologists can escape this compulsion when they resist the impulse to explain

its source in terms of mind-independent features of reality. But that raises more questions than it answers. On the one hand, scientists and philosophers of instrumentalist or constructive-empiricist persuasions have also been sceptical about (some) theoretical entities that have made it into the standard curriculum. On the other hand, we might wonder whether SSK is meant to free the rest of us from this compulsion or whether we are free to return to our naïve talk after we have taken on board the role of tradition and training in the dispute between Millikan and Ehrenhaft. These comments are not meant as a criticism of Kochan. He seems to me to correctly pick up on one strand in the texts of Bloor and Barnes. But I also see value in Tosh’s insistence that our two SSK theorists waver in their response to SR: sometimes they reject it outright; sometimes they come surprisingly close.
Over the past few paragraphs I have provided grist to Kochan’s and Lewens’s mill by suggesting that the SSK theorist and the scientific realist need not be total enemies. But this leaves me with three loose ends: to answer Tosh’s claim according to which SR and the symmetry principle are incompatible; to situate SSK vis-à-vis reliabilism; and to explain what SSK’s relativism boils down to. I shall address these three issues in that order.
A fair discussion of Bloor’s claim that “the electron ‘itself’ drops out of the story” should begin by acknowledging Barnes and Bloor’s statement that “certainly any differences in the sampling of experience, and any differential exposure to reality must be allowed for” (Barnes and Bloor 1982: 35). Lewens is fair enough to quote this passage but then goes on to treat it as a “concession”. This rendering seems a little unkind. A more charitable reading would be to say that when differential exposure to reality is a key cause of opposed beliefs, then reality does not drop out of SSK stories. The Bigfoot case, in Lewens’s rendering, is thus not fit to cause trouble for Bloor’s general position.
We might still ask: how would we have to modify the Bigfoot scenario for Bigfoot to drop out of the story? Assume that around 1900 Jim and John disagreed over the existence of Bigfoot. Allow further that we today have excellent evidence for Bigfoot’s existence. Jim and John have both travelled far and wide in the region where Bigfoot was supposed to dwell. They have never seen Bigfoot, but Jim has found droppings that he judged to originate with Bigfoot. John has inspected these droppings as well but has a different theory as to their source. If we want to explain why Jim believed in Bigfoot and John did not, then it is unclear what causal role Bigfoot himself can play. Of course Bigfoot is part of the wider causal story; but he does not help to answer our contrastive question.
The same is true of the electron case. Millikan and Ehrenhaft had the same evidence but interpreted it differently in light of their respective background beliefs and research traditions. It is not the case that Millikan had a device that trapped electrons, whereas Ehrenhaft lacked such an experimental set-up. If that were the case, then the electron could not drop out. But Millikan and Ehrenhaft shared their experimental data; they even recalculated each other’s results. But they disagreed over how these data were to be interpreted. And at this decision node the existence of electrons is not a cause.
We can all agree that there are electrons but still use the symmetry principle to home in on the social dimensions of the controversy.
What is the relationship between SSK and externalist reliabilism? To begin with Papineau, he is entirely right to stress that SSK case studies do not automatically discredit science. Science may well be a reliable way of finding out about the world, even though, or precisely because, social interests, negotiations, and conventions play a central role. Lewens’s use of reliabilist externalism is more contentious. He uses it to block Bloor’s move from “there are no absolute proofs to be had that one scientific theory is superior to another” to “there are only locally credible reasons”. Lewens objects that although there may be no absolute proofs for the greater reliability of theory t1 over theory t2, it may still be a non-relative, natural fact


that t1 is more reliable than t2. I disagree. What Lewens overlooks is that the reliability of some belief-forming mechanism is not a natural fact but dependent on the choice of reference class (Brandom 2000: 97–122). Let Jones form the belief that he stands in front of a barn even though he stands in front of a barn-façade. Is his belief-forming method – taking a good look from a distance – reliable? It depends. Let there be ten barns and one barn-façade in the county where Jones lives. Then his belief-forming method is reliable. Assume the county is part of a province and that in the province there are ten barns and ninety barn-façades. Now Jones’s method is unreliable. And then consider the state with the ratios again reversed. The upshot is clear: reliability is not a natural fact; it is a measure that needs a human calibration. It isn’t there anyway.5
Throughout this chapter I have said little explicitly about the relativism of SSK. At its heart are the methodological ideas of impartiality and symmetry. I agree with Lewens that these do not threaten an enlightened SR. Against Lewens I have insisted that externalist reliabilism does not block the route to the relativist insight that there are only locally credible reasons. But my objection to Lewens’s use of externalism is not an argument against SR. The scientific realist too can accept that for X to be a reason, there needs to be a context that gives X its meaning and point. This does not mean that X cannot travel or become enshrined in traditions of research. It could even become universally accepted (Bloor 2011: 435).
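The arithmetic behind the barn example above can be made explicit with a toy calculation. The following sketch is merely illustrative (the function name, and the idealizing assumption that Jones judges every structure he views to be a barn, are not in the text):

```python
def reliability(barns, facades):
    """Fraction of Jones's 'that's a barn' judgements that come out true,
    idealizing so that he judges every structure he views to be a barn."""
    return barns / (barns + facades)

print(reliability(10, 1))    # county:   10 barns,  1 facade   -> ~0.91 (reliable)
print(reliability(10, 90))   # province: 10 barns, 90 facades  ->  0.10 (unreliable)
print(reliability(90, 10))   # state, ratios reversed again    ->  0.90 (reliable)
```

The method and its outputs never change; only the choice of reference class does. That is the sense in which the reliability figure is a humanly calibrated measure rather than a natural fact.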

8 Conclusion
I have explored the relationship between one strand of social epistemology, namely SSK, and SR, broadly construed. I have sought to bring out that this relationship is not as clear-cut as either defenders or opponents of SSK or SR have assumed. SSK theorists are not just committed to a minimal realism about the existence of a mind-independent world. They go beyond this minimal position in their uses of both social-scientific and natural-scientific theories. At the same time we have seen that SSK can neither be saddled with being anti-scientific in its mode of explanation nor shown to be vulnerable to externalist considerations.

Acknowledgements

For extensive comments on a first draft, I am indebted to David Bloor, Katherina Kinzel, Jeff Kochan, Tim Lewens, David Papineau, Juha Saatsi, and Nick Tosh. Work on this paper was made possible by ERC Advanced Grant #339382.

Notes

1 It would be a mistake to think that these four tenets sum up all of SSK. On the surface, they say nothing, for example, about SSK relativism, nominalism, or “meaning finitism”. I shall not try to explain and sum up these complex commitments at this point. Central elements of these doctrines will surface in the discussion that follows.
2 By “distal cause” here is meant a cause “upstream” from the belief, that is, further away in the causal chain leading to the belief. A proximate cause is close.
3 I am writing “causally connected” rather than “caused” to avoid the impression that for Tosh knowledge must always be causally downstream from the fact. Jim might know that it’s going to rain tomorrow. Tomorrow’s rain does not cause his belief; rather, the present atmospheric conditions cause both his present belief and tomorrow’s rain (Tosh, in correspondence).
4 Here it is important to recall a commonsense distinction between two kinds of mind independence. Psychological and social kinds are mind dependent insofar as their existence depends on the existence of minds: they are kinds of properties of minds and kinds of relations between minds. But psychological and
social kinds need not be mind dependent in the following sense: their existence need not owe anything to the specific thinking mind that uses them for purposes of psychological and sociological explanation and prediction. (A) speaks to the second, not the first sense. (At least if we treat social kinds as one kind of natural kind.)
5 Kochan (2008: 34) argues for the possibility of a “sociologistic form of reliabilism”, using Brandom as well as Kusch (2002: 109) amongst his starting points. I am not entirely sure how close our positions are. We both emphasize the importance of social conventions for understanding judgements of reliability. But his insistence that there are no “natural facts”, that is, that “all facts” are “the outcome of a combination of natural and social causes”, seems to me a form of sociological idealism (2008: 26). It seems to me to run counter to SSK theorists’ insistence on the distinction between “the world” and “our knowledge of it” (Kochan 2010: 130). A plausible social-epistemological form of reliabilism should honour this distinction. Put differently, Kochan and I are agreed that facts about reliability are not natural facts. But we disagree over the question whether there are natural facts at all.

References

Barnes, B. (1982) “On the Extensions of Concepts and the Growth of Knowledge,” Sociological Review 30, 23–44.
——— (1992) “Realism, Relativism and Finitism,” in D. Raven, L. van Vucht Tijssen and J. de Wolf (eds.), Cognitive Relativism and Social Science, New Brunswick, NJ: Transaction Publishers, pp. 131–147.
Barnes, B. and Bloor, D. (1982) “Relativism, Rationalism and the Sociology of Knowledge,” in M. Hollis and S. Lukes (eds.), Rationality and Relativism, Cambridge, MA: MIT Press, pp. 21–47.
Barnes, B., Bloor, D. and Henry, J. (1996) Scientific Knowledge: A Sociological Analysis, London: Athlone Press.
Bloor, D. (1982) “Durkheim and Mauss Revisited: Classification and the Sociology of Knowledge,” Studies in History and Philosophy of Science 13, 267–297.
——— (1991) Knowledge and Social Imagery (2nd ed.), Chicago, IL: University of Chicago Press.
——— (1999) “Anti-Latour,” Studies in History and Philosophy of Science 30, 81–112.
——— (2011) The Enigma of the Aerofoil: Rival Theories in Aerodynamics, 1909–1930, Chicago, IL: University of Chicago Press.
Brandom, R. (2000) Articulating Reasons: An Introduction to Inferentialism, Cambridge, MA: Harvard University Press.
Collins, H. (2004) Gravity’s Shadow: The Search for Gravitational Waves, Chicago, IL: University of Chicago Press.
Dupré, J. (1993) The Disorder of Things: Metaphysical Foundations of the Disunity of Science, Cambridge, MA: Harvard University Press.
Franklin, A. (1986) The Neglect of Experiment, Cambridge: Cambridge University Press.
Hacking, I. (1991) “A Tradition of Natural Kinds,” Philosophical Studies 61, 109–126.
——— (1999) The Social Construction of What? Cambridge, MA: Harvard University Press.
Holton, G. (1998) The Scientific Imagination, Cambridge, MA: Harvard University Press.
Kinzel, K. (2015) “State of the Field: Are the Results of Science Contingent or Inevitable?” Studies in History and Philosophy of Science 52, 55–66.
Kochan, J. (2008) “Realism, Reliabilism, and the ‘Strong Programme’ in the Sociology of Scientific Knowledge,” International Studies in the Philosophy of Science 22(1), 21–38.
——— (2010) “Contrastive Explanation and the ‘Strong Programme’ in the Sociology of Scientific Knowledge,” Social Studies of Science 40(1), 127–144.
Kusch, M. (2002) Knowledge by Agreement, Oxford: Oxford University Press.
——— (2011) “Social Epistemology,” in S. Bernecker and D. Pritchard (eds.), The Routledge Companion to Epistemology, London: Routledge, pp. 873–884.
Lewens, T. (2005) “Realism and the Strong Program,” British Journal for the Philosophy of Science 56, 559–577.
Lynch, M. (1992) “Extending Wittgenstein: The Pivotal Move from Epistemology to the Sociology of Science,” in A. Pickering (ed.), Science as Practice and Culture, Chicago, IL: University of Chicago Press, pp. 215–265.
Papineau, D. (1988) “Does the Sociology of Science Discredit Science?” in R. Nola (ed.), Relativism and Realism in Science, Dordrecht: Kluwer, pp. 37–57.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London and New York: Routledge.
Sankey, H. (2008) “Scientific Realism and the Inevitability of Science,” Studies in History and Philosophy of Science 39, 259–264.
Shapin, S. (1979) “Homo Phrenologicus: Anthropological Perspectives on an Historical Problem,” in B. Barnes and S. Shapin (eds.), Natural Order: Historical Studies in Scientific Culture, Beverly Hills, CA: Sage, pp. 41–71.
Tosh, N. (2006) “Science, Truth and History, Part I: Historiography, Relativism and the Sociology of Scientific Knowledge,” Studies in History and Philosophy of Science 37, 675–701.
——— (2007) “Science, Truth and History, Part II: Metaphysical Bolt-Holes for the Sociology of Scientific Knowledge?” Studies in History and Philosophy of Science 38, 185–209.
——— (2008) “Relativism about Reasons,” Philosophia 36, 465–482.


PART IV
The realism debate in disciplinary context

22 SCIENTIFIC REALISM AND HIGH-ENERGY PHYSICS

Richard Dawid

1 Introduction

The present article discusses the implications of high-energy physics (HEP) for the scientific realism debate. High-energy physics is an interesting and multifaceted context for discussing scientific realism. While philosophical discussions of the implications of high-energy physics for the realism debate are still scarce, there are more of them than could reasonably be discussed in a survey of this kind. In the following, I therefore focus on a few important contexts. A number of interesting recent contributions (see e.g., Egg 2014; Williams 2015) had to be omitted due to lack of space.

The analysis covers developments in fundamental microphysics roughly from the 1950s onwards. In order to specify those characteristics of high-energy physics that set it apart from earlier microphysics, let us have a brief look at the state of microphysics around 1950. Conceptually, microphysics at the time was based on quantum field theory, the special relativistic extension of quantum mechanics. The technique of renormalization was turning quantum field theory into a workable method of calculating scattering amplitudes at high energies. The particle spectrum had just started to transcend the set of stable particles that made up macrophysical material objects.

Two fundamental developments characterize the evolution of high-energy physics from that time onwards. The first is experimental, the second theoretical. Experimentally, the search for new unstable types of particles in experiments that produced highly energetic collisions of electrons or protons turned out to be the most dynamic generator of new developments in microphysics. The production of new particles in collider experiments is based on a basic property of special relativity: rest mass is understood as a form of energy, which provides a conceptual basis for turning kinetic energy into mass energy. Collision energy on that basis can be used for producing new particles. The higher the collision energy, the more massive the particles produced in the collision can be. Quantum mechanics then adds another important ingredient: the stochastic nature of quantum mechanics implies that any process that is allowed by the relevant conservation laws happens with a given probability. Therefore, generating a certain collision energy is sufficient for creating all particles that can be generated from the initial state according to the relevant conservation laws. The buildup of increasingly powerful particle colliders thus allowed for the discovery of a series of new physical particles that did not exist under everyday circumstances because (i) their generation required higher kinetic energy
than what was available under standard conditions on earth and (ii) once generated, they quickly decayed back into the stable constituents of matter.

The theoretical shift of perspective that complemented the described experimental shift happened a little later, from the 1960s onwards. The wealth of new particles at first seemed confusing and conceptually arbitrary. In the 1960s, a new perspective emerged that related the spectrum of particles directly to fundamental structural characteristics of microphysics. Nuclear interactions were now understood as implications of a specific form of symmetry: local gauge symmetry. Local gauge symmetries can be understood as a reparametrization invariance of a physical theory. One can define an internal space of particle degrees of freedom by attributing an additional characteristic property, let us call it color, with n possible values, to each fermionic particle of a given type. At first, no physical implication is attached to the property of having a certain color. The theory then is called locally gauge invariant under unitary rotations in the n-dimensional color-space if it is possible to define the “color-directions” in internal color-space anew at each point in spacetime without changing the theory’s physical implications. As it turns out, most quantum field theories that could be constructed in principle aren’t locally gauge invariant. The only way to achieve gauge invariance is to introduce vector bosons that couple to the fermions in a specific way and in effect play the role of interaction particles (called gauge-bosons). We thus have the peculiar situation that the requirement of turning a symmetry under the variation of a physically empty property into a local gauge symmetry enforces the introduction of a physically very relevant characteristic of the theory, namely a specific form of interaction that is based on particle exchange.1 (A minimal sketch of this mechanism in the simplest abelian case is given at the end of this section.)

Gauge symmetries2 assumed a crucial role in high-energy physics for a technical reason. Straightforward calculations of cross-sections in quantum field theory lead to infinite values. Those infinities can be treated in a controlled way that allows for extracting predictions if the theory is renormalizable.3 It turns out that the only renormalizable interacting theories that include fermions are gauge theories. Gauge symmetry thus plays a pivotal role for making high-energy physics predictive. As it turns out, gauge symmetry in some cases is spontaneously broken, which means that the gauge symmetry transforms between physically distinguishable particle types. (This is the case for the gauge group of weak interaction and for all possible gauge symmetry structures that reach beyond the standard model.) The precise role of the property of renormalizability in high-energy physics became clearer in the early 1970s within the framework of renormalization group methods (Wilson 1974) and was fully understood in Polchinski (1984). Gauge invariance has also been the first topic in high-energy physics that was addressed in greater depth by philosophers of science (Teller 2000; Healey 2001, 2007; Earman 2002). Since the present chapter focuses on the question of scientific realism, we have to leave those discussions aside.

Gauge field theory in conjunction with the discovery that nucleons had constituents (the quarks) that were bound together by a specific gauge interaction called the strong force generated an entirely different view on high-energy physics.
Theory became far more predictive than before. The gauge structure to a large degree enforced both the particle content and the interaction structure of the world. The vast spectrum of seemingly arbitrary elementary objects was replaced by a tightly knit theory-based system of elementary particles and interactions that came to be known as the standard model of particle physics. Based on a set of empirical data that specified the general structure of the standard model, a wide range of predictions could be extracted from the theory and was then, step by step, confirmed in collider experiments. Gauge theory inverted the hierarchy between theory and experiment. While up to the early 1970s theory was busy finding conceptual answers to the phenomena and empirical anomalies discovered by experiment, no serious anomaly has been produced up to this point that contradicted
the standard model.4 From the early 1970s onwards, experiment thus followed theory, aiming at testing theory’s predictions.

Theorizing in high-energy physics from the mid-1970s onwards was characterized by attempts to develop theories beyond the standard model. One important reason for venturing beyond the standard model was the search for a coherent theory of gravity and nuclear forces. This step was deemed necessary for developing a coherent understanding of the very early phases of the universe. The theory physicists came up with was string theory. Another reason was to explain conspicuous coincidences of measured parameter values. This led to grand unified theories and, in a different context, to cosmic inflation. Yet another theory, supersymmetry, turned out to be contained in string theory and to be related to grand unification. It also offered promises for explaining a number of other conspicuous aspects of high-energy physics. Those theories, despite remaining empirically unconfirmed or, in the case of inflationary cosmology, inconclusively confirmed, to a high degree determine today’s perspective on cosmology and the fundamental characteristics of matter.
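To make the gauge mechanism described above slightly more concrete, here is a minimal sketch of the simplest abelian case (cf. note 1), in standard textbook notation rather than anything specific to this chapter. Demanding invariance under a spacetime-dependent phase rotation of a fermion field forces the introduction of a vector field with a compensating transformation law:

$$
\psi(x) \;\to\; e^{i\alpha(x)}\,\psi(x), \qquad
A_\mu(x) \;\to\; A_\mu(x) - \frac{1}{e}\,\partial_\mu \alpha(x),
$$
$$
D_\mu \psi \;=\; \big(\partial_\mu + i e A_\mu\big)\psi
\;\to\; e^{i\alpha(x)}\, D_\mu \psi .
$$

The ordinary derivative of the fermion field does not transform covariantly under a local phase rotation; replacing it by the covariant derivative restores invariance, but only at the price of introducing the vector boson, which then couples to the fermions as an interaction particle. This is the sense in which a “physically empty” reparametrization enforces a physically very relevant interaction structure.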

2 A first take on the general relevance of HEP for scientific realism

In a number of ways, high-energy physics has further eroded what had survived of the intuitive notion of an ontological object that was badly damaged already by quantum physics and quantum field theory. To begin with, many unstable particles generated in collider experiments can never be identified individually but can only be attributed to an individual scattering event measured in a detector with an (often fairly small) probability. At a more conceptual level, mass loses its status as a fundamental property of objects and is understood in terms of the way fields couple to the vacuum. Hadrons (such as protons and neutrons) consist of constituents (the quarks) that cannot be isolated. The characteristics of hadrons are to a large degree determined by field theoretical effects that cannot be expressed in terms of the dynamics of their real constituents. (For a philosophical discussion of the last points, see Falkenburg [2007]. Falkenburg argues that only a mereological understanding of the term “particle” survives in a particle physics context.) Finally, developments in quantum gravity suggest the dissolution of space-time structure at the most fundamental levels of description.

All those points seem to disfavor a realist view on high-energy physics by eliminating or reducing longstanding intuitive elements of physics that had provided conceptual anchoring places for scientific realism. But other developments in high-energy physics may actually be taken to support scientific realism. The standard model of particle physics is a prime example of novel predictive success and therefore strengthens the basis for no-miracles kinds of reasoning. On a more pragmatic note, high-energy physics today is a field without any perspectives of technical utilization. Experimental data extracted from collider experiments in high-energy physics have no relevance beyond their role in testing theories. If research in high-energy physics were not about finding the truth about the world in some sense, little reason would remain for being interested in its results.

3 High-energy physics versus entity realism

Let us first have a look at a form of realism that may be expected to be at variance with high-energy physics. Entity realism offers a realist perspective that avoids the subtleties of theoretical physics and aims at grounding realism in the intuitive understanding of experimental procedures (Hacking 1983; Cartwright 1983; see also M. Egg, “Entity realism,” ch. 10 of this volume). More specifically, Hacking’s entity realism relies on the observation that physicists treat specific objects

as tools for probing other aspects of physics. This utilization of physical objects, Hacking argues, presupposes the existence of those objects and therefore justifies realism with respect to them. Hacking makes clear that his argument for realism does not justify realism about all empirically well-confirmed existence claims in microphysics. It is one thing to achieve empirical confirmation and another thing to use the corresponding objects as tools for probing new physics. In this light, the question arises whether general conceptual arguments might enforce fundamental limits to the reach of Hacking’s approach.

The suspicion that fundamental limits of that kind may exist can be related to one of the core criticisms of entity realism. It has been argued (see, e.g., Psillos 1999) that no clear-cut distinction between the experimentalist perspective and the corresponding theory can be made. Any experimentalist causal story about the use of physical objects as tools must be based on theoretical knowledge in order to specify how this tool can be deployed. It is plausible to expect that the use of theory in specifying the object within the experimental story increases with the conceptual complexity of the theory to be tested. High-energy physics in this light appears as a prime candidate for a research context in which Hacking’s ideas make no sense anymore because the theory’s empirical implications can’t be formulated in terms of an intuitive experimental story.

A first confrontation of Hacking’s entity realism with high-energy physics was carried out by Hones (1991), who discusses meson and baryon spectroscopy in the late 1960s. Hones comes to the conclusion that, while the use of π-mesons in resonance physics can be viewed as a nice example of the use of a particle as a tool, Hacking’s approach only provides a very rough view of what is going on in the experiment.

Massimi (2004) looks at the testing of the quark hypothesis in the 1970s and comes to a more critical conclusion. Massimi chooses a peculiar point of departure. Quarks are probed by other particles like electrons, but they are not themselves used as instruments in Hacking’s sense. Massimi assumes that the described situation may be sufficient for a weakened version of entity realism that relies on being probed rather than on being used as an instrument. This idea may seem like a stretch, given that it destroys Hacking’s key idea that manipulation of objects provides a significantly stronger basis for realism than mere testing of the object’s properties. Still, let us accept Massimi’s softened criterion for entity realism and look at her argument.

Massimi observes that probing the constituents of nucleons in the 1970s had one important goal: deciding whether the quark model, which assumed effects of gluon exchange based on a gauge field theoretical understanding of strong interaction, or the parton model, which assumed freely moving constituents inside the nucleon, was empirically adequate. Eventually, the probing of the constituents of nucleons showed that the quark model was viable and the parton model was empirically inadequate. Massimi now points out that making this distinction was only possible by taking into account the theoretical characteristics of gauge field theory and the theory behind the experimental signatures collected.
Already the specification of the property of experimental signatures that indicates the absence of interaction between constituents of the nucleon, the so-called Bjorken scaling, is a theoretically difficult concept that can’t be viewed in terms of a simple and straightforwardly intuitive experimental story. Matching violations of Bjorken scaling with predictions of gauge field theory then requires the full body of gauge theory. A simple experimental story cannot describe what is going on in the experiment. As Massimi puts it, “at a lower (experimental realist) level (i.e., experiments plus phenomenological laws) partons and quarks are empirically equivalent. At a higher (scientific realist) level (i.e., experiments plus QCD theory), partons and quarks are no longer empirically equivalent” (Massimi 2004: 54). Therefore, only the latter level allows us to understand what the experiments that probe the constituents of nucleons are all about. In other words, even based on the weakened notion of entity realism Massimi is ready to accept, entity realism is incapable of providing a basis for a realist understanding of quarks.
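To give a sense of why Bjorken scaling resists a “simple experimental story”, here is a standard textbook gloss, not Massimi’s own formulation. The deep-inelastic structure functions of the nucleon a priori depend on two kinematic variables, the Bjorken variable x and the momentum transfer Q²; scaling is the statement that the Q²-dependence drops out at large momentum transfer, while QCD predicts slow, logarithmic violations of that behaviour:

$$
F_2(x, Q^2) \;\approx\; F_2(x) \quad \text{for large } Q^2 \quad \text{(Bjorken scaling)},
$$
$$
\frac{\partial F_2(x, Q^2)}{\partial \ln Q^2} \;=\; O\!\big(\alpha_s(Q^2)\big) \;\neq\; 0 \quad \text{(QCD scaling violations)}.
$$

Both the definition of the structure function and the prediction of its logarithmic Q²-dependence presuppose the full apparatus of gauge theory, which is exactly Massimi’s point.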


4 Group structural realism

Contrary to entity realism, structural realism focuses on the specifics of physical theory and is advertised by its exponents as accounting for the decay of intuitive notions of ontology in fundamental physics. In recent years, a number of structural realists have put emphasis on the role of internal continuous symmetries and on gauge symmetries in high-energy physics in particular (see also I. Votsis, “Structural realism and its variants,” ch. 9 of this volume). The idea that group structure is the most adequate place to anchor structural realism about fundamental physical theories has been named “group structural realism” by Bryan Roberts (2011).

In the following, I will focus on internal gauge symmetries, where the case for a structural realist understanding arguably is most plausible. As described in section 1, particle spectra and interaction structures in high-energy physics are determined by the requirements of gauge invariance. Interactions are based on gauge boson exchange between those matter (spin 1/2) particles that form representations of the corresponding gauge symmetry group.

The crucial role of gauge symmetry lends support to a structural understanding of the theory’s core tenets in a fairly straightforward way. While classical theories and quantum mechanics are based on the specification of the fundamental building blocks of the world, be they particles or fields, to which properties are attributed that determine their dynamics, theory construction in the case of gauge field theory may be taken to start at a structural level with the specification of the theory’s gauge structure. The particle content then is given by a representation of the gauge group and can be extracted in a second step.5 Particle ontology from that point of view looks secondary to structure. Two authors, Holger Lyre (2004) and Aharon Kantorovich (2003, 2009), have tried to make that idea more specific. Lyre (2004) gives three reasons for a primacy of structure.

1 Viewed in terms of representations of gauge symmetries, particles are not individual objects in the sense of observable, identifiable entities that correspond to specific points in internal symmetry space. Objects are only defined, as Lyre puts it, “as members of equivalence classes under symmetry transformations” (Lyre 2004: 663). This suggests, according to Lyre, that objects are strictly secondary to the symmetry structure within which they are embedded.
2 The nature of ontological commitments in quantum field theory is underdetermined. Ontology may be based on field strengths, on potentials, or even on holonomies6 (closed curves characterizing the connection of a manifold). The symmetry statement, however, is unaffected by that choice and therefore seems to allow a unique expression of ontic commitment. (Lyre makes the additional point that, if the holonomy interpretation eventually turned out to be preferable, this would also support a structural perspective on realism because the nonlocal character of holonomies is at variance with an interpretation in terms of localized fundamental objects.)
3 Lyre thinks that, in high-energy physics, statements on symmetry structure have proved more stable than claims about ontological objects. This, of course, would play squarely into Worrall’s classic argument that choosing the structural level for realist claims avoids the pitfalls of the pessimistic meta-induction.

Lyre argues that all three arguments favor gauge symmetry as the natural candidate for a structurally realist commitment. Kantorovich chooses a substantially different perspective on the issue. He challenges the mindset behind the ontological realist’s focus on objects. This focus, in Kantorovich’s view, is rooted in the following line of reasoning: while it is doubtful at best whether relations can be
specified in a meaningful way without relata, physics can be fully understood in terms of actual micro-objects, the properties of which determine their dynamics; therefore, it seems plausible to attribute ontological primacy to objects.

This understanding, Kantorovich points out, is no longer applicable in the context of quantum field theory. One core implication of quantum field theory is the generation of new types of particles from radiation or kinetic energy. The particles that can be generated are determined by the corresponding theory’s gauge structure and the representations that are physically realized. As described in section 1, the possible outcomes of a particle collision in a high-energy physics collider experiment are not fully determined by the incoming particles but depend on the internal symmetry structure and the conservation laws that characterize “the world.” Those characteristics of “the world” are not attributable to any objects existing at the initial stage of the collision process. Gauge structure is not a characteristic of existing objects but a feature “globally” attributable to the world. Even if not a single particle of a given type existed at a given point in time, the gauge structure would still imply that its creation was possible. In this sense, Kantorovich claims, structure is primary and ontological objects are secondary.

Group structural realism has been criticized on two accounts. It has been pointed out (McKenzie 2013; see also Nounou 2015) that mathematical group structure in itself does not amount to any physical claim. The parameters characterizing particles that sit in the representations of a gauge group need to be interpreted physically, that is, in terms of their role in the dynamics of individual objects, in order to acquire physical meaning. Only based on those specifications, which correspond to specifying a particle in terms of its position in a specific representation of the (spontaneously broken) symmetry group, can gauge field theory play the role of a physical theory that has the rich spectrum of empirical implications we know. In this light, the primacy of group structure, though plausible in terms of important mathematical features of the theory, looks less natural once one starts viewing the theory in terms of its physical import. (One might add that a theory’s empirical import plays a crucial role in the no miracles argument, a main argument in favor of scientific realism.) Kantorovich’s position is less affected by this line of criticism. His main argument, stating that the theory describes more than actual objects, plays out once the physics behind the group structure has been fully taken into account.

A different line of criticism has been put forward in Roberts (2011). Roberts points out that it is highly non-trivial to specify what is meant by real structure in the context of group structural realism. The problem is that symmetry groups themselves can be characterized in terms of their symmetry structure. Technically this is done based on what is called the automorphism group of a given symmetry group. Repeating the step from a group to its automorphism group can lead to an infinite tower of meta-structural characterizations. The group structural realist thus faces the task of specifying which level(s) of structural description she wants to understand realistically and for what reason. This, in Roberts’s view, makes the enterprise of specifying real structure uncomfortably arbitrary.
Roberts adds that the argument from higher stability of structure (Lyre’s third argument) might actually exert pressure toward shifting the realist understanding to higher-level structures, since different groups can have the same automorphism group, which renders the latter potentially more stable under theory change than the former.

There is also a more general problem with Lyre’s third argument. Lyre argues that symmetry statements are less prone to being superseded at future stages of theory building than other claims that are more closely bound to a given ontology. The problem is that Lyre’s claim is itself bound to a given state of high-energy physics theory building. It is difficult to predict whether the fundamental role that has been attributed to gauge symmetry during the last half a century will be the final verdict in high-energy physics. In fact, Joseph Polchinski (2017) has speculated that
gauge symmetry might in the end turn out to be an effective phenomenon that does not exist at the most fundamental level.

To conclude, whether full-fledged ontic structural realism follows from high-energy physics remains contentious. Lyre and Kantorovich do deserve credit, however, for having highlighted a number of aspects of high-energy physics that significantly reduce the role of ontological objects.

5 Realism and nonempirical confirmation

The previous section focused on analyzing the standard model of high-energy physics, which is an empirically well-confirmed theory. However, high-energy physics today is characterized by a particularly important role of theories that have not found empirical confirmation. In fact, no fundamental theory in high-energy physics that has been developed since 1974 has found empirical confirmation up to this point. Despite the lack of empirical confirmation, theories like grand unified theories, supersymmetry, supergravity, and string theory have played a pre-eminent role in the field for many years. In this light, if one wants to discuss philosophical implications of recent concepts in the field, one needs to discuss empirically unconfirmed theories.

But does it make sense to take those theories seriously with regard to their implications for the scientific realism debate? Interestingly, important empirically unconfirmed theories are quite strongly believed to be viable by their exponents despite the lack of empirical confirmation. The degree of trust many physicists have in their theories is at variance with the canonical understanding of theory assessment and confirmation. I have argued elsewhere (Dawid 2013) that, for a number of reasons, it seems advisable to modify that canonical understanding in a way that accounts for the actual status of today’s empirically unconfirmed theories in the eyes of their exponents. Others (see, e.g., Ellis and Silk 2014) have spoken out against taking empirically unconfirmed theories overly seriously.

Even if one does not endorse an extension of the concept of theory confirmation along the lines I suggested in Dawid (2013), it arguably makes sense to take well-accepted but empirically unconfirmed theories seriously from a realist perspective. According to the canonical understanding of scientific theory confirmation, confirmation must be based on empirical data predicted by the theory in question. Arguing for epistemic scientific realism, however, requires going beyond this set of empirical evidence. It relies on the understanding that something in the record of doing science, be it predictive success, a continuity of structural characteristics through successive theories, or something else, indicates that a theory latches onto reality. Arguing for realism with respect to a scientific theory therefore crucially relies on the understanding that considerations reaching out beyond empirical confirmation do have epistemic value. But that very understanding also constitutes the basis for the trust physicists have in empirically unconfirmed theories. Scientific realism thus naturally opens up to the question as to how it is affected by theories that are empirically unconfirmed but well trusted by considerable parts of the physics community.

6 String dualities

Among the empirically unconfirmed theories that have been developed in high-energy physics during the last four decades, string theory is the most influential one. It aims at providing a unified description of all fundamental interactions based on the basic idea that elementary objects are not point-like but extended in one dimension. The length of those strings is assumed to be far too small to be observable by current experimental methods.


Two characteristics of string theory arguably constitute high-energy physics’ most substantial novel contributions to the realism debate. The first of those characteristics is the abundance of duality relations in the theory. In order to understand the point, we need to say a few words about the basics of string theory.

String theory contains no free parameters. This means that the theory does not allow for parameter values that can be chosen freely at a fundamental level. However, according to the best current understanding, the theory does have a very large number of ground states that correspond to specific parameter values that characterize the actual form of the string theoretical structure on which the world we observe is based. Parameters of that kind are, for example, the string coupling or the radii of string theory’s compact dimensions. (Superstring theory, the kind of string theory that describes bosons and fermions, must have 10 spacetime dimensions to be consistent. Six of them are assumed to be compact, that is, they run back into themselves like a cylinder surface with a very small radius.) Which of those ground states is actually selected is a matter of the dynamics of the system. Since string theory is a quantum theory, the choice of its ground state is driven by quantum statistics.

At a fundamental level, it was initially believed that one could construct five different types of superstring theory. In the mid-1990s, it turned out that those five types of string theory were in fact only different descriptions of the same theory that were related to each other by so-called duality relations. A duality connects two seemingly very different theories that, though being constructed based on different kinds of elementary objects, describe identical observable phenomena. In the case of the five seemingly different types of string theory, these theories are based on different internal symmetry structures and different spacetime structures. They also involve higher-dimensional objects (so-called D-branes) of different dimensions. If two theories are dual, one theory with a specific parameter value (corresponding to a specific ground state of the fundamental theory) is empirically fully equivalent to the dual theory with the inverted parameter value. Parameter values can for example be the string coupling constant (in the case of S-duality) or radii of compact dimensions measured in units of the string length (in the case of T-duality). S- and T-duality connect all five types of string theory plus an additional 11-dimensional theory called M-theory. This means that, via a series of duality transformations, one can get from any type of string theory to any other.

The inversion of parameter values from one theory to its dual implies that there will often be one “natural” formulation, while the dual one will be less handy. For example, if one theory contains a compact dimension much larger than the string length, this dimension behaves in the way we expect a spatial dimension to behave. The extra dimension of the dual theory, being much smaller than the string length, is in some sense too small to have the known characteristics of an extended dimension in which particles propagate. This means that it makes sense to discuss the first theory if we want to make contact with the dynamics of our observed world. It may also happen, however, that the parameter values of the theory and its dual are both close to 1, which means that neither theory looks more natural than the other.
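The inversion of parameter values can be stated compactly in a standard textbook illustration, which is not specific to any one of the five theories. For a closed string on a compact dimension of radius R, the mass spectrum contains momentum modes n and winding modes w and is invariant under exchanging the two while inverting the radius; S-duality acts analogously on the string coupling:

$$
M^2 \;=\; \frac{n^2}{R^2} + \frac{w^2 R^2}{\alpha'^2} + \cdots, \qquad
(n,\, w,\, R) \;\longleftrightarrow\; \Big(w,\, n,\, \frac{\alpha'}{R}\Big) \quad (T\text{-duality}),
$$
$$
g_s \;\longleftrightarrow\; \frac{1}{g_s} \quad (S\text{-duality}),
$$

where $\alpha' = \ell_s^2$ sets the string length $\ell_s$. The self-dual radius $R = \ell_s$ marks the minimal length scale that reappears in the final theory claims of section 7.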
Maldacena (1998) showed that duality relations even reach beyond string theory proper. String theory with a specific spacetime structure (asymptotic anti de Sitter space) is conjectured to be empirically equivalent to a specific form of gauge field theory (a conformal field theory) that is based on point-like objects and does not contain a gravitational interaction.

Duality relations create serious problems for the ontological realist. Ontological realism is based on the understanding that there exists a set of real objects the world consists of. As has been discussed in a number of papers (Dawid 2007, 2013; Rickles 2011; Matsubara 2013), this seems irreconcilable with the phenomenon of string dualities. A theory that contains duality relations which connect substantially different ontologies cannot be committed to one real set of ontological objects.


A metaphysical realist might hope to declare one of the dual ontologies the real one, even in the absence of an empirical way of identifying the true ontology. But this move is incompatible with the character of string physics for a number of reasons. First and foremost, string theory cannot be fully understood, let alone calculated, within the framework of one of the dual theories. The entire web of dual theories is needed in order to understand the theory’s overall structure. And the theory’s overall structure, in turn, is necessary for understanding the dynamics that lead towards the selection of a specific string theory ground state. Focusing on just one type of string theory as the real one thus makes no conceptual sense. Second, as has been pointed out for example in Polchinski (2017), the limits in which one theory is more “natural” than its dual correspond to classical limits: close to those limits, quantum effects are small. Those parts of parameter space where none of the dual theories is preferred correspond to physical situations that are dominated by quantum effects.

The implication of this understanding is the following. The adequacy of a specific ontology must be understood as a property of a specific classical limit of the theory rather than of the theory itself. The existence of several types of string theory that are related to each other by duality relations demonstrates that string theory has several classical limits. Away from the classical limits of the theory, there is no way of making sense of any ontological formulation of the theory. It may well be the case that the actual string theory ground state that corresponds to the world we live in is close to one of the classical limits. The full theory, however, and thereby the dynamics that has led to the selection of that ground state, cannot be understood just by discussing the corresponding limit. Therefore, ontological scientific realism is no adequate basis for characterizing string theory at a fundamental level. If string theory is viable, ontological scientific realism as a fundamental perspective on physics is dead.

How structural realism fares in the face of string and string/gauge dualities is a more complicated question. String dualities imply that there is no fundamental structure that can be attributed to a given spacetime region. Dualities connect very different fundamental structures that can be used to characterize the situation; and spacetime structure itself changes from a theory to its dual. In this light, structural realism only seems compatible with string dualities if understood in a way that is fully decoupled from the notion of embedding structure in spacetime. It must allow for taking the overall structure of string physics, including the body of gauge theoretical structures that is connected to string theory by gauge/gravity dualities, as the true structure without referring to any spacetime background.

7 Final theory claims

Vague forms of final theory claims have appeared in physics at various stages. Throughout most of the 18th and 19th centuries, Newtonian physics was, except for the issue of its most convenient formulation, taken to be the last word on the physical phenomena described on its basis. In the late 19th century, a number of physicists thought that the fundamental pillars of physics had been provided by Newton and by Maxwell’s theory of electromagnetism.

The advent of quantum mechanics and general relativity changed all this. It was understood that even the most successful theories could eventually turn out to be deeply inadequate at a fundamental level. Moreover, the understanding emerged that the conceptual incompatibility between the two grand physical frameworks of quantum physics and general relativity most likely would have to be overcome by further deep conceptual changes. The fact that even the most cherished classical theories have been superseded by the conceptually very different successor theories quantum mechanics and general relativity seemed to suggest a perspective of potentially never-ending theory succession, which has fuelled antirealist perspectives among many
philosophers of science once the scientific realism debate took center stage in the philosophy of science in the 1970s and 1980s.

String theory is considered a promising candidate for a theory that unifies nuclear interactions and gravity. It would therefore be a universal theory of all interactions, which makes it a candidate for a final theory. In this sense, it reconnects to the question of finality as it was raised in the 19th century. But in two respects string theory provides a qualitatively much stronger final theory claim than its classical forebears ever could. First, in the case of Newtonian gravitation, it was possible to combine the theory in a coherent way with a theory like electromagnetism that described phenomena that were not described by the former. Nothing of this kind is possible in the case of string theory. If a phenomenon that is not covered by string theory were discovered at energy scales below the string scale, the entire theory would have to be discarded.7 Second, and even more significantly, string theory introduces a minimal length scale based on its duality structure. It turns out that every formulation of string theory that introduces length scales below the string length can also be expressed in terms of the inverse length scale (in units of the string length). Therefore, once the entire phenomenology of string theory down to the string length has been specified, all has been said about the system (see, e.g., Witten 1996). On that basis, string theory suggests that no new physics should occur at all that goes beyond string theory. These two implications of string theory for the first time in the history of physics generate an explicit final theory claim based on a theory’s structure: if string theory is viable, nothing can be added to it and no new physics can arise beyond its own characteristic scale.

Theory-based final theory claims, if naively understood, seem to beg the question. In effect, they amount to the claim that the theory is final if all its implications are true. But whether all implications of the theory are true is exactly what is at issue when a final theory claim is raised. So what can be the significance of string theory’s final theory claim? Dawid (2013, 2013a) argues that a theory-based final theory claim can be relevant in connection with other strategies that aim at understanding the possible alternatives to a given theory. Since those other strategies also play an important role in assessing the current status of string theory, it is argued that string theory’s final theory claim is a substantial though not a conclusive statement.

A substantial final theory claim obviously is of high significance for the scientific realism debate. Note that the way the final theory claim is argued for in string physics only asserts the full empirical adequacy of the theory. It does not explicitly address the question of truth. However, we have argued in the previous section that string dualities suggest a conflation of the question of empirical equivalence and the question of truth. On that basis, string theory’s final theory claim can be understood in terms of the theory’s absolute truth. It must be emphasized, though, that string theory has not yet found a complete formulation. It rather resembles a number of statements that are deduced from a set of fundamental posits. The final theory claim should be read as the claim that there is a way to turn that set of statements into a complete theory based on consistency arguments.
The resulting theory, whatever it is, is conjectured to be a true description of the world.

In this sense, string theory in conjunction with its final theory claim implies a realist interpretation. However, due to the conflation of truth and empirical adequacy and the lack of any ontological interpretation of the theory’s claims, the emerging kind of realism remains fairly weak. On what grounds does it deserve to be called realism at all? Dawid (2013: chapter 7) proposes a formulation of scientific realism that characterizes the status of string theory (if confirmed) in conjunction with a final theory claim. This position, presented under the name of consistent structure realism, identifies as the substance of the realist conjecture in the given context the claim that there is a level of description of empirical phenomena that lies below mere bookkeeping of

empirical data and that reduces the number of possible alternatives, based on a given set of empirical data, to one. The realist commitment then amounts to the understanding that, due to the lack of possible alternatives, empirical data will always agree with that “true” theory of the world.

8 Conclusion

If we try to distill a common message from the lines of reasoning presented in this chapter, it may be the following. High-energy physics continues and intensifies the development, already discernible in quantum physics and general relativity, towards a decay of the intuitive foundations of ontological realism and related concepts like entity realism. Developments in high-energy physics can only be grasped from a theoretical point of view, which renders entity realism inadequate for dealing with them. They are based on abstract mathematical concepts that become increasingly difficult to frame in terms of a realist ontology of objects. Some crucial concepts seem to have no convincing interpretation at a token level at all.

The character of contemporary high-energy physics in conjunction with the general mindset behind a realist view on science strongly suggests treating those approaches in high-energy physics that strongly influence the physical world view despite the lack of empirical confirmation as a serious source of arguments in the realism debate. If one follows that strategy, arguments against ontological realism become even more cogent. String dualities look like the final nail in the coffin of ontological realism. On the other hand, final theory claims do suggest a weak form of scientific realism that focuses on the issues of truth and the lack of scientific alternatives without adhering to traditional views about ontological objects or an external world.

Notes

1 In the simplest “abelian” case, the gauge transformation is a mere U(1) phase transformation (with n = 1). The standard model of particle physics does contain a U(1) gauge symmetry that roughly corresponds to the photon.
2 For an instructive collection of philosophical papers on the role of symmetries in physics, see Brading and Castellani (2010).
3 Nonrenormalizable theories are only predictive far below the characteristic energy scale of the nonrenormalizable sectors.
4 The only discovery so far that, strictly speaking, transcends the original standard model is the neutrino mass. Massive neutrinos, however, always seemed natural in a standard model context and were only left out because they had not been found in experiment. At the time of writing, experimentalists at the LHC are searching for deviations from the standard model.
5 In the standard model, elementary particles are all attributed to the fundamental representations of the simple gauge groups and therefore seem to be implied immediately by the group choice. In grand unified theories, however, this is not the case, which in a sense emancipates the choice of the particle spectrum from the choice of the gauge group. This may be seen as a first worry about group structural realism.
6 See Healey (2007).
7 Due to the minimal length scale implied by T-duality, this statement covers all energy scales that can be conceptualized in a string theoretical framework.

References

Brading, K. and Castellani, E. (2010) Symmetries in Physics: Philosophical Reflections, Cambridge: Cambridge University Press.
Cartwright, N. (1983) How the Laws of Physics Lie, Oxford: Clarendon Press.
Dawid, R. (2007) “Scientific Realism in the Age of String Theory,” Physics and Philosophy 11, 1–32.
——— (2013) String Theory and the Scientific Method, Cambridge: Cambridge University Press.
——— (2013a) “Theory Assessment and Final Theory Claim in String Theory,” Foundations of Physics 43(1), 81–100.
Earman, J. (2002) “Gauge Matters,” Proceedings of the Philosophy of Science Association 69(3), 209–220.
Egg, M. (2014) Scientific Realism in Particle Physics, Berlin: De Gruyter.
Ellis, G. and Silk, J. (2014) “Defend the Integrity of Physics,” Nature 516, 321–323.
Falkenburg, B. (2007) Particle Metaphysics: A Critical Account of Subatomic Reality, New York: Springer.
Hacking, I. (1983) Representing and Intervening, Cambridge: Cambridge University Press.
Healey, R. (2001) “On the Reality of Gauge Potentials,” Philosophy of Science 68(4), 432–455.
——— (2007) Gauging What’s Real: The Conceptual Foundations of Contemporary Gauge Theories, Oxford: Oxford University Press.
Hones, M. J. (1991) “Scientific Realism and Experimental Practice,” Synthese 86, 29–76.
Kantorovich, A. (2003) “The Priority of Internal Symmetries in Particle Physics,” Studies in the History and Philosophy of Modern Physics 34(4), 651–675.
——— (2009) “Ontic Structuralism and the Symmetries of Particle Physics,” Journal for the General Philosophy of Science 40(1), 73–84.
Lyre, H. (2004) “Holism and Structuralism in U(1) Gauge Theory,” Studies in the History and Philosophy of Modern Physics 35(4), 643–670.
Maldacena, J. (1998) “The Large N Limits of Superconformal Field Theories and Supergravity,” Advances in Theoretical and Mathematical Physics 2, 231.
Massimi, M. (2004) “Non Defensible Middle Ground for Experimental Realism: Why We Are Justified to Believe in Coloured Quarks,” Philosophy of Science 71(1), 36–60.
Matsubara, K. (2013) “Realism, Underdetermination and String Theory Dualities,” Synthese 190(3), 471–489.
McKenzie, K. (2013) “Priority and Particle Physics: OSR as a Fundamentality Thesis,” British Journal for the Philosophy of Science 65, 353–380.
Nounou, A. (2015) “For or against Structural Realism? A Verdict from HEP,” Studies in the History and Philosophy of Modern Physics 49, 84–101.
Polchinski, J. (1984) “Renormalization and Effective Lagrangians,” Nuclear Physics B 231, 269–295.
——— (2017) “Dualities of Fields and Strings,” Studies in History and Philosophy of Modern Physics 59, 6–20.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
Rickles, D. (2011) “A Philosopher Looks at String Dualities,” Studies in the History and Philosophy of Modern Physics 42(1), 54–67.
Roberts, B. (2011) “Group Structural Realism,” British Journal for the Philosophy of Science 62, 47–69.
Teller, P. (2000) “The Gauge Argument,” Philosophy of Science 67(3), 481.
Williams, P. (2015) “Naturalness, the Autonomy of Scales, and the 125 GeV Higgs,” Studies in History and Philosophy of Modern Physics 51, 82–96.
Wilson, K. G. (1974) “The Renormalization Group and the Epsilon Expansion,” Physics Reports 12, 75–200.
Witten, E. (1996/2001) “Reflections on the Fate of Spacetime,” reprinted in C. Callender and N. Huggett (eds.), Physics Meets Philosophy at the Planck Scale, Cambridge: Cambridge University Press.

23 GETTING REAL ABOUT QUANTUM MECHANICS

Laura Ruetsche

1 Introduction

The intended moral of this contribution is that satellite-level debates in the general philosophy of science stand to gain in depth, illumination, and interest when brought into contact with considerations rooted in particular ground-level scientific theories. Although I think the moral has many instances, I aim here to motivate only one of them: the satellite-level scientific realism debate and its enrichment by contact with the ground-level project of interpreting quantum mechanics (QM). Section 2 rehearses some standard – and apparently plausible – satellite-level moves. Section 3 offers a minimally technical narrative of the decades-long effort to make sense of QM. Episodes in that narrative cast satellite-level moves in a different light – usually, I will suggest, one unbecoming to realism. A brief conclusion offers some thoughts about what to make of this.

2 Satellite level

For present purposes, I will take the scope of scientific realism to be a unit (call it T) of successful science and take the content of the position to be the claim that T is (approximately) true, where T is construed literally and “true” is construed non-epistemically: not pragmatically or instrumentally. The most venerable argument in favor of scientific realism is the Miracles argument. Its force is well conveyed by a rhetorical question. Consider a theory T that enjoys extraordinary empirical successes. Wouldn’t it be a miracle if T worked as well as it did despite getting things wrong?

The usual way to understand the Miracles argument is as an abductive argument premised on T’s astonishing success and offering realism about T as the best explanation of that success. Standard criticisms of the argument cast its use of abduction as question-begging (Fine 1996) or offer an explanation of the astonishing success of accepted science that makes no appeal to its truth:

I claim that the success of current scientific theories is no miracle. It is not even surprising to the scientific (Darwinist) mind. For any scientific theory is born into a life of fierce competition, a jungle red in tooth and claw. Only the successful theories survive – the ones which in fact latched on to actual regularities in nature.
(van Fraassen 1980: 40)


There are standard rejoinders, typically ones entangled with the issue of when explanations are genuine, to these standard criticisms. Regarding the Darwinian account of T’s success, realists ask: what explains why T rather than some other theory has survived? They request an analog of evolutionary fitness, a commodity that (i) distinguishes theories that survive from those that perish and (ii) explains their differential survival. Behind the query lies the suspicion that truth (as the realist understands it) is the only sensible analog to fitness. Adjudicating these matters requires probing the nature of scientific explanation – another project that might unfold at both the ground and the satellite level. (See also K. Brad Wray, “Success of science as a motivation for realism,” ch. 3 of this volume.)

One stock argument against realism is quasi-a priori; another historical. The quasi-a priori argument is the argument from underdetermination. Like the Miracles argument, it can also be conveyed by a rhetorical question. Let E be the observable evidence realists would cite in support of belief in T. There are (untold, unthought-of, innumerable) theories T’ that agree with T concerning E (that is, they’re observationally equivalent to T) but contradict T’s extra-empirical commitments. The rhetorical question: Why take E as evidence for T and against this host of T’s observationally equivalent rivals? (See also D. Tulodziecki, “Underdetermination,” ch. 5 of this volume.)

One standard realist response is to express skepticism about the very notion of observational equivalence and/or about the claim that successful theories even have “non-parasitic” observationally equivalent rivals. Putative examples are lamented as results of “logico-semantic trickery” (Laudan and Leplin 1991: 463), and the prospect of future concrete examples moves some realists to bravado: “give us a rival explanation, and we’ll consider whether it is sufficiently serious to threaten our confidence” (Kitcher 1993: 154). Another standard response grants that T has observationally equivalent rivals but denies that these rivals are equivalent in the sense that matters, the sense of sharing with T significant additional virtues, virtues such as simplicity, parsimony, consistency (both internal and with successful environing theories), grasp of causal structure, and so forth. By presenting these additional super-empirical virtues as truth conducive, realists would break the apparent tie in evidential support between T and its rivals.

This debate hinges on a non-trivial question about what virtues, if any, are aptly regarded as evidence for truth. All hands appear ready to accept that true theories are empirically adequate, so that a theory’s empirical adequacy counts as evidence for its truth.1 (They just don’t agree about how strong the evidence is!) But when it comes to other virtues, opinions diverge. “Scientists have indeed long tended to look upon the simpler of the two hypotheses as not merely the more likable, but the more likely” (1957: 7), Quine writes. Hempel demurs: “the desideratum of simplicity . . . does play a considerable role in the critical appraisal of theories, but its satisfaction clearly has no bearing on the question of their truth” (Hempel and Jeffrey 2000: 81). Recalling the jungle red in tooth and claw, note that antirealists have a Darwinian explanation of the prevalence of super-empirical virtues among the theories that survive – that’s how we like theories to be, and we picked them!
The historical argument against realism is the notorious pessimistic meta-induction. It too can be couched as a rhetorical question. The history of science is littered with theories that, however profoundly successful and sincerely admired they were in their heyday, are now discredited, their fundamental entities demoted to useful fictions, their fundamental laws recast as tolerable approximations in limited regimes. The rhetorical question: why suppose our present best theories will enjoy a cheerier fate? (See P. Vickers, "Historical challenges to realism," ch. 4 of this volume.)

Worrall (1989) launched an armada of positions known as Structural Realisms. The core idea is to take the points of both the Miracles argument and the Pessimistic Meta-induction to heart by recognizing what is preserved across theoretical change: "structure," for instance, the structure shared by the equations to which Fresnel subjected the luminiferous ether and Maxwell the electromagnetic field. That this preserved structure gets things fundamentally right explains the remarkable success of theories in a structure-linked sequence; that each such theory supplements this structure, with a flawed account of what is structured, eventuates in its demise. The structural realist urges us to be realists about the preserved structure, not the detailed account of what's structured (see I. Votsis, "Structural realism and its variants," ch. 9 of this volume). The enticing advice leaves undecided what, exactly, a structural realist about T believes – yet another question that could be addressed at both satellite and ground levels.2
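For concreteness, the structure in question can be displayed. Fresnel's amplitude relations for light polarized perpendicular and parallel to the plane of incidence (presented here in standard modern notation, as an illustrative sketch rather than a quotation from the chapter; θ_i and θ_t are the angles of incidence and refraction) are:

    \[ r_{\perp} = -\frac{\sin(\theta_i - \theta_t)}{\sin(\theta_i + \theta_t)}, \qquad r_{\parallel} = \frac{\tan(\theta_i - \theta_t)}{\tan(\theta_i + \theta_t)} \]

Fresnel derived these relations by treating light as vibrations of an elastic ether; Maxwell's theory recovers exactly the same equations with the electromagnetic field in the ether's place. The structural realist's "preserved structure" is, in the first instance, this shared equational form.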

3 Quantum mechanics

A. Interpreting QM
Question: What does a realist believe? Answer: If T is appropriately successful, she believes that T is (approximately) true.

Presented so schematically, the answer is not very informative. So let's de-schematize. Let's consider the question of realism from the standpoint of a specific scientific theory. And let's let that theory be QM.

Why QM? An architectonic reason is that QM is what this chapter is about. A more substantial reason is that QM looks like ideal fodder for realism. The factual premise of the main argument for realism about T asserts T to be empirically successful. And the history of science contains no better example of an empirically successful theory than QM. QM ranges in application from quarks to black holes; its predictions have been verified to many significant figures; it's fostered and been vindicated by technologies from the iPod to the Large Hadron Collider; it unifies the non-gravitational phenomena recognized by physics at the cusp of the 21st century. If the Miracles argument has the force realists attribute to it, that force should apply especially to QM.

The de-schematized Miracles argument concludes: QM is (approximately) true. The confirmed realist believes this conclusion. The central question of this section is: what does the confirmed realist thereby believe? For van Fraassen, this is "the question of interpretation: Under what conditions is the theory true? What does it say the world is like?" (1991: 242). To meaningfully believe a theory, we must understand it, and "to understand a scientific theory, we need to see how the world could possibly be the way that the theory says it is. An interpretation tells us that" (ibid., 336–337). An interpretation of QM tells the realist about QM what she believes when she believes QM.

Two notes about the interpretive project, so understood. First, it has a modal dimension. In order to say what the world is like according to QM, we need to say something about how QM organizes, characterizes, and discriminates between the possibilities it recognizes. What it takes for QM to be true is for the actual world to be among those possibilities. Articulate belief in QM's truth requires a grasp of what those possibilities look like and how they fit together.

The second note is a complication. Soon we'll encounter positions prominent in the debate over quantum foundations that don't fit directly into the template for interpretation just given. Bohmian mechanics is an example. Bohmian mechanics is a theory distinct from standard QM: it's set in a different state space and features equations of motion that have no parallel in standard QM. Bohmian mechanics describes particles skittering hither and yon in such a way that standard QM's predictions about (position) measurement statistics are upheld. Bohmian mechanics thereby answers a quasi-interpretive question both very near to and very far from van Fraassen's official one. That question is:

How could the world possibly be, that the phenomena are the way QM says they are?


Whether to count the Bohmian answer as an interpretation of QM is an idle bookkeeping matter. What's important is that Bohmians offer a way of making sense of the provocative datum that QM succeeds as well as it does. Their way counts QM as what I'll call a merely effective theory. The contrast is with a fundamental theory. A merely effective theory is one whose scope is limited and whose empirical adequacy within that scope results from its capacity to mimic the implications, for in-scope phenomena, of more underlying theories. It is fashionable to regard present-day quantum field theories as exemplars of effective theories. For example, quantum electrodynamics (QED) applies at low (relative to infinity) energies/long (relative to the Planck scale) length scales. Although the details of adequate high-energy theories are unknown, subtle renormalization group arguments are taken to establish that, no matter what those details are, they imply that QED is highly accurate for low-energy phenomena.3 As an effective theory, QED is not the final word. Whatever the final word is, physicists believe that it vouches for QED's efficacy within the low-energy regime.

A fundamental theory, by contrast, vouches for itself. No additional physics need be mustered to explicate a fundamental theory's empirical success. Such a theory, taken literally, purports to describe physical reality completely and directly. An interpretation of the theory aspires to articulate this description, and it is the truth of this description the Miracles argument invokes to explain the theory's success. Bohmians interpret standard QM as a merely effective rather than fundamental theory insofar as they anchor QM's empirical adequacy in theoretical considerations extrinsic to standard QM. Unlike present-day field theorists, Bohmians produce the additional theory in some detail. The important analogy – that in each case the effective theory's success is explained in terms of its capacity to approximate what more basic theory says about phenomena in its scope – remains.

The distinction between interpreting a theory as fundamental and interpreting a theory as merely effective matters for the realism debate. An interpretation of T as fundamental is unproblematic fodder for realism. Believing that interpretation, the realist believes that T is true; the fundamental interpretation equips that belief with content. But an interpretation of T as merely effective does not necessarily mesh so neatly with realism. Simpler to handle are stances – such as that of Bohmian mechanics – which supply the underlying theory securing T's effectiveness. The realist can believe the underlying theory (provided she finds a suitable interpretation of it) and appeal to that belief to adjust her attitude toward T. (Some options are: true but incomplete, false but approximate . . .) Harder to handle are interpretations of T as merely effective which – as with the stance that takes QED to be an effective low-energy approximation to unspecified "Planck scale physics" – decline to supply the details of the underlying theory. They leave open the possibility that fundamental physical reality differs radically from the picture suggested by a naïve reading of the successful effective theories. Contrast, for example, the naïve ontologies for an effective theory of particles interacting in 3 + 1 spacetime dimensions via a variety of fields and for an underlying theory of strings in a 26-dimensional manifold.
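The renormalization group point can be given a schematic face. In the Wilsonian picture (a textbook sketch supplied here for concreteness, not this chapter's own formalism; Λ is the high-energy scale at which unknown physics takes over), whatever happens at and above Λ leaves its low-energy trace as a series of correction terms suppressed by powers of Λ:

    \[ \mathcal{L}_{\text{eff}} = \mathcal{L}_{\text{QED}} + \sum_i \frac{c_i}{\Lambda^{\,d_i - 4}}\,\mathcal{O}_i \]

Here the \mathcal{O}_i are operators of mass dimension d_i > 4 and the c_i are dimensionless coefficients. At energies E ≪ Λ, the corrections scale as (E/Λ)^{d_i−4}: this is why QED's low-energy accuracy is insensitive to the details of the underlying theory.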
An interpretation of QM as effective could stop short of saying what the world is fundamentally like while saying more about why QM is empirically adequate than the absurdly minimalist position that the world is such that QM is empirically adequate. How apt such an attitude is for realism, structural or otherwise, will depend on its details.4

B. QM: a crash course
Here's a rudimentary guide (consigning many technical details to footnotes) to QM, one that highlights its offences against our classical expectations, offences which pose interpretive challenges. Take the simplest case of a single particle of mass m confined to a line. Classical mechanics assigns the particle a state by equipping it with precise values for its position on the line and its momentum along the line. All of the particle's other properties are determined by its position and momentum – for instance, the particle's kinetic energy is its momentum squared, divided by twice its mass. Thus, given the particle's classical state, we can predict with certainty the values of all its other physical properties. The laws, symmetries, and dynamics of the theory are expressed by how these properties are inter-related, inter-relations which are captured by a mathematical object called the Poisson bracket, which also affords a compact expression of the system's dynamics.

By contrast, the quantum theory of our particle attributes it a state which is a vector in a vector space and associates position, momentum, and other properties (aka observables) with mathematical objects called operators on that vector space.5 Typically, the state vector does not fix the values of these observables but instead offers a probability distribution over possible values.6 Given a pair of quantum observables, there is usually a trade-off in the informativeness of the probability distributions the state vector defines over their possible values: the more accurately the state vector predicts the value of one, the less accurately it predicts the value of the other. A mathematical object called the commutator bracket sets the terms of this trade-off and also structures the collection of quantum observables in a way that expresses the quantum theory's laws and symmetries. Among these laws is the Schrödinger equation, which describes how the quantum system changes over time.

QM's primary foundational problems are simply stated but fiendishly difficult to resolve. One problem is non-locality. Quantum states impose instantaneous correlations between distant systems. Einstein, whose special theory of relativity (STR) is popularly understood to prohibit superluminal signal propagation, called this "spooky action at a distance." In 1964, John Bell showed that any local hidden variable theory – that is, any theory attributing the correlations to common causes propagating non-superluminally – is committed to a set of inequalities not predicted by standard QM. Subsequent experiments reveal nature to violate these Bell inequalities and uphold the quantum predictions. Distant quantum correlations can't be understood in terms of local common causes.7

It does not follow that QM and STR are inconsistent. The latter is not a theory of causation but an account of the symmetries of Minkowski spacetime. Consistency with STR demands only that theories set in Minkowski spacetime respect its symmetries – a demand inapplicable to ordinary non-relativistic QM and met by the Lorentz-invariant quantum theories of the standard model.

The measurement problem is prompted by a natural line of retreat from the Bell Inequalities. In QM, a bivalent atom typically occupies a state which is a superposition of its excited and unexcited states:

|psi> = a|excited> + b|unexcited> (SUPER)

The coefficients a and b determine measurement probabilities via the Born Rule. Naively, we might expect these probabilities to reflect contingent ignorance of which condition – excited or unexcited – the atom in fact occupies. But extending the expectation of value-definiteness to large enough sets of observables generates contradictions (e.g., the Bell inequalities) with the quantum statistical algorithm.

The natural, and traditional, line of retreat is to exercise extreme caution in attributing determinate observable values to quantum systems. Caution is embodied by the semantic rule:

(CAUT) When and only when a quantum state predicts with certainty the value of an observable does that observable have a determinate value.

According to (CAUT), an atom described by (SUPER), superposed between its excited and unexcited states, is neither determinately excited nor determinately unexcited. It's in some unsettling and unsettled nether condition marked by an absence of fact about its energy level.
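For concreteness: applied to (SUPER), the Born Rule (stated more fully in note 6) assigns

    \[ \Pr(\text{excited}) = |a|^2, \qquad \Pr(\text{unexcited}) = |b|^2, \qquad |a|^2 + |b|^2 = 1 \]

so (CAUT) counts the atom's energy determinate just in case |a|² is 0 or 1. For any intermediate value of |a|², there is, on this rule, no fact of the matter about the atom's energy level.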


This is eerie, but maybe atoms are just odd. To generate the measurement problem, imagine you are intent on measuring the atom’s energy. You might devise a coupling between the atom and an unfortunate cat, a coupling so engineered that if the atom is unexcited, the cat is unharmed, but if the atom is excited, the cat dies. Schematically, with the arrow denoting the measurement evolution, the final state of the cat system records the initial state of the atom system:

|excited>|alive> → |excited>|dead>
|unexcited>|alive> → |unexcited>|alive> (MEAS)

One problem with this experimental protocol is that if the atom–cat interaction obeys the Schrödinger equation, then any measurement which is good insofar as it transcribes values of atom observables (excited or not) to values of cat observables (dead or not) is an epic failure insofar as, if the atom’s initial state is (SUPER), the cat winds up in a superposition of biostates:

(a|excited> + b|unexcited>)|alive> → a|excited>|dead> + b|unexcited>|alive> (FAIL)

To avoid empirical contradiction, we have embraced the interpretive rule that superposition signals absence of fact. The rule forces us to regard the cat superposed between life and death as neither alive nor dead. But for one thing, cats in such predicaments are unprecedented in our experience. And for another, if the cat is neither alive nor dead, our measurement has no outcome. Schrödinger's cat dramatizes the quantum measurement problem: if measurements unfold according to quantum mechanical law, then they do not eventuate in outcomes. Put baldly: if quantum mechanics were true, then we'd never be able to gather data confirming it.
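A minimal numerical sketch of the point (an illustration assuming only numpy; the basis labels are hypothetical conveniences, not anything in the chapter's text): because Schrödinger evolution is linear, any unitary map implementing (MEAS) on the definite states must carry the superposed input to the entangled output (FAIL).

    # Minimal sketch: a linear (unitary) extension of (MEAS) forces (FAIL).
    import numpy as np

    excited, unexcited = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # atom basis
    dead, alive = np.array([1.0, 0.0]), np.array([0.0, 1.0])         # cat basis

    # Joint basis order via np.kron: |excited,dead>, |excited,alive>,
    # |unexcited,dead>, |unexcited,alive>. U implements (MEAS) -- an excited
    # atom flips the cat's biostate, an unexcited atom leaves it alone --
    # extended unitarily to the whole four-dimensional space.
    U = np.array([[0., 1., 0., 0.],
                  [1., 0., 0., 0.],
                  [0., 0., 1., 0.],
                  [0., 0., 0., 1.]])

    a, b = 0.6, 0.8  # any amplitudes with |a|^2 + |b|^2 = 1 will do
    psi_in = np.kron(a * excited + b * unexcited, alive)
    psi_out = U @ psi_in

    # Linearity alone dictates the (FAIL) state:
    fail = a * np.kron(excited, dead) + b * np.kron(unexcited, alive)
    assert np.allclose(psi_out, fail)

    # And (FAIL) is entangled: reshaped as a 2x2 matrix it has Schmidt rank 2,
    # so it is not a product of any atom state with any cat state.
    assert np.linalg.matrix_rank(psi_out.reshape(2, 2)) == 2

No choice of a unitary U obeying (MEAS) can avoid this: the entanglement is a consequence of linearity, not of any further modeling assumption.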

C. Some interpretations
Making sense of all this is the problem of interpreting QM. Already we have encountered several positions that seem like non-starters. "Naïve realism" casts QM as an effective theory offering a statistical summary of phenomena more completely described by a local hidden variable theory. Naïve realism looks like a non-starter because heroic contortions are required to insulate it from falsification via (e.g.) experimental violation of the Bell Inequalities. Retreats from naïve realism that couple the semantic rule (CAUT) with Schrödinger dynamics fare no better: they imply that measurements typically lack outcomes.

This predicament inspires a third interpretive stance, commonly encountered in QM textbooks: Copenhagen with collapse. This view retains the (CAUT) rule but jettisons universal Schrödinger evolution, maintaining instead that during measurement processes a radically different kind of time development seizes the system, delivering it to a state (CAUT) can interpret as corresponding to a determinate measurement outcome. This measurement collapse is discontinuous and stochastic, with collapse probabilities mirroring Born Rule probabilities. (Indeed, on this view, collapse processes explain the Born Rule!) Most parties take Copenhagen with collapse to be a non-starter because it entertains two radically incompatible kinds of state evolution without articulating precise criteria distinguishing contexts when one applies from contexts when the other applies. In the stock presentation, the distinction hinges on the linchpin notion of measurement; variations seek to define that notion in terms of consciousnesses or the "size" of the measuring apparatus or gravitational distaste for superposition. None of these variations has many adherents.

Other positions have more life in them. Dynamical collapse schemes, such as the GRW model, operate within the scope of (something like) (CAUT): only observables whose values can be predicted with (near) certainty are determinate. They rescue Schrödinger's cat from limbo by demoting the Schrödinger equation from the status of fundamental law. In its place, they offer a stochastic dynamics tuned to approximate Schrödinger evolution for isolated systems and to mimic wave function collapse (including its reproduction of Born Rule statistics) for large systems. Thus dynamical collapse schemes interpret QM as an effective theory, one that's strictly false (because Schrödinger evolution is not universal) but one that works well enough in its usual domain of application. These schemes can be made relativistic, at least for non-interacting particles.

Bohmian approaches have already been mentioned. According to Bohmian mechanics, every system has at every time a determinate position and a determinate velocity. Dynamical equations are imposed that guarantee that if it's ever the case that the distribution of positions among systems in an ensemble satisfies the Born Rule statistics prescribed by some quantum state |psi>, then at all other times that distribution will satisfy the Born Rule statistics prescribed by the appropriate Schrödinger evolute of |psi>. In the Bohmian picture, QM is an effective theory that isn't so much false as incomplete. Although the underlying theory has a reassuringly classical particle ontology, in other respects, Bohmian mechanics stands in tension with expectations honed by non-quantum physics, notably STR's demand for Lorentz invariance. Without a way to make Bohmian mechanics Lorentz invariant, the consistent believer in Bohmian mechanics must regard STR as merely empirically adequate.

Dynamical collapse schemes and Bohmian mechanics interpret QM as an effective theory. Each goes some way toward supplying the underlying theory whose deliverances QM approximates. Thus each proffers the realist something to believe – just not QM (on the GRW picture) and just not just QM (on the Bohmian one). A third family of interpretations – relative state or Everett interpretations – promises to interpret QM directly. What they add to the standard quantum formalism is not additional underlying physics but rather additional overlying metaphysics. On these interpretations, quantum states always Schrödinger evolve, and what rescues Schrödinger's cat from limbo is the fact that the problematic post-measurement superposition (FAIL) describes not one world (in which the cat lacks a biostate) but (at least) two: one in which the cat is determinately alive, another in which the cat is determinately dead.8 On the Everett picture, every time a measurement is performed, the world splits into (at least) as many offspring worlds as the measurement has distinct outcomes. The Schrödinger-evolved superposition or universal wave function describes the collection of worlds; each world in the collection exhibits determinate outcomes for each measured observable. Making this picture acceptable requires solving two problems. One is the counterpart of a problem facing Copenhagen with collapse: what determines when worlds multiply and when they remain single? Another is how to understand quantum probabilities: if every outcome happens in some world, what could it possibly mean to assign outcomes probabilities different from 1? Everettians have devised a variety of responses to these challenges. Wallace's (2012) – which, having appealed to "emergence" to soften the problems, appeals to decoherence to meet the first and to decision theory to meet the second – is the current state of the art.
Supposing the challenges to be met, this family of approaches shares the feature that their metaphysics is profligate.
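Returning briefly to Bohmian mechanics, for definiteness: in the simplest one-particle, non-relativistic case, the standard form of the dynamics mentioned above (a sketch in textbook notation, not a quotation from the literature just cited) has the particle's position Q(t) obey the guidance equation

    \[ \frac{dQ}{dt} = \frac{\hbar}{m}\,\mathrm{Im}\!\left[\frac{\nabla \psi}{\psi}\right]\!(Q(t), t) \]

with ψ evolving by the Schrödinger equation. This dynamics is "equivariant": if the distribution of positions in an ensemble is ever given by |ψ|², it remains so at all later times – which is the guarantee invoked above.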

D . . . and realism
Although much more could be said, we have enough on board to revisit the realism debate. Let's start with a glaring problem for realists who have de-schematized the Miracles argument by setting T = QM. Propelled by the irresistible truth of that argument's first premise, they conclude that QM is true. The problem is: what do they believe when they believe this conclusion? The answer is an interpretation of QM, but for many realists, the interpretations currently on offer fail to deliver satisfactory explanations of QM's success and/or predicate their explanation on denials of other theories (prominently STR) these realists would much rather believe than QM. Moreover, this embarrassment of choice certainly looks like a variety of underdetermination – the underdetermination of interpretation by theory – both relevant to the realism debate and resistant to the skepticism some satellite-level realists express about the very idea of underdetermination. The alternative interpretations are not creatures of "logico-semantic trickery," nor are they to be brushed aside as future contingencies whose merits we can assess if and when they arrive. They are at hand, the considered efforts of people trying in full earnest to understand the theory. Dynamical collapse, Bohm, and Everett in particular each have sincere and serious advocates.

Viewed from ground level, the Miracles argument looks weaker than it looks at satellite level, because at the ground level lurk thorny questions about the content of realism. The Underdetermination argument, by contrast, looks stronger at ground than at satellite level, because the anti-realist can produce meaningful examples of underdetermined alternatives. Viewed from the satellite, the Underdetermination argument against realism and the Miracles argument for realism sit side by side, issuing conflicting conclusions but not interacting substantially beyond that. Viewed from the ground level, the underdetermination of interpretation by theory affords some insight into unobvious ways the Miracles argument might go wrong. Some of these possibilities also attenuate the realist's appeal to super-empirical virtues to break underdetermination ties.

The most perverse possibility is illustrated by the measurement problem: there are interpretations under which realism's claim to explain QM's success fails, because under those interpretations, if QM is true, measurements generally don't have outcomes! Realists might well regard this as a highly fantastic cautionary tale, illustrating only the obvious point that realism-undermining interpretations aren't viable vehicles for realism. The argument form of the Miracles argument is, after all, inference to the best explanation, not inference to any old account. Just as our job as scientists is to identify, from among the collection of theories consistent with phenomena, the one that best explains the phenomena, our job as realists is to choose, from among the smorgasbord of interpretations of a successful theory, the one that best explains its success.

So let's focus on potentially winning interpretations. Even if – maybe particularly because – a variety of contenders are available, there remain several Miracles argument–undermining possibilities. Their generic form echoes a satellite-level dialectic: the criteria by which the realist hopes to select the winning interpretation fail to single any such out. The tie-breaking super-empirical virtues include simplicity, internal and external consistency, parsimony, unifying power, and so on. An ethnographic observation: lending impulse to the debate over how to interpret QM are two sorts of disagreements between aspirants to believe in the theory. First, different interpreters fail to agree about which virtues are on the list and/or about how to prioritize them. Second, in spite of agreeing about the virtues and their relative potency, different interpreters disagree about which interpretations instantiate them.
Some brief illustrations:

(i) Disagreement about what virtues matter
Consistency with other theories is often cited as an extra-empirical virtue, and sophisticated defences of realism emphasize the role background theories play in the juggernaut of the mature sciences. Consistency with STR is a candidate super-empirical virtue for an interpretation of QM. But interpreters of QM disagree about how important that virtue is. Wallace is ready to dismiss Bohmian and dynamical collapse approaches for failing to exhibit the virtue: "the most important observation about these theories is that they only really exist in the non-relativistic domain" (2012: 33). But for advocates of the approaches Wallace would thereby dismiss, "the cost exacted by those theories which retain Lorentz invariance is so high that one might rationally prefer to reject Relativity as the ultimate account of spacetime structure" (Maudlin 2002: 220), for instance by demoting STR to the status of merely instrumentally adequate. Goldstein describes a virtue that, for many Bohmians, trumps Lorentz invariance:

arguably any physical theory with any pretense to precision, requires as part of its formulation a specification of the "local beables," of "what exists out there," of what the theory is fundamentally about which I would prefer to call the primitive ontology of the theory.
(1998: 9)

Maudlin elaborates:

We begin by thinking there are localized objects inhabiting a low-dimensional space, whose behavior we wish to explain. The obvious way for a physical theory to accomplish this task is to postulate that there are localized objects in a low-dimensional spacetime [. . .] that constitute macroscopic objects, and to provide those objects with a dynamics that yields the sort of behavior we believe occurs.
(2010: 133–134)

Both agree that any view that isn’t in the business of describing “primitive ontology” isn’t doing what an interpretation of QM ought. The Everett interpretation isn’t in that business. And Wallace doubts whether it is a virtue of a quantum interpretation to rest on a primitive ontology: “Science is interested with interesting structural properties of systems, and does not hesitate at all in studying those properties just because they are instantiated ‘in the wrong way’” (2012: 58).

(ii) Disagreement about which interpretations manifest the virtues that matter
Would-be realists about QM are in broad agreement about the desirability of a super-empirical virtue variously called simplicity or naturalness. They are, however, in broad disagreement about which approaches to the quantum realm are distinguished by possessing that virtue. Compare Goldstein and Wallace:

Bohmian mechanics is, it seems to me, by far the simplest and clearest version of quantum theory.
(Goldstein 1998: 14)

Really, the "Everett interpretation" is just quantum mechanics itself, read literally, straightforwardly – naively, if you will – as a direct description of the physical world, just like any other microphysical theory.
(Wallace 2012: 2)

The literature teems with similar examples. At the satellite level, anti-realists questioned whether super-empirical virtues tracked truth rather than baser commodities keyed to accidents of human interest. Collectively, the disagreement even among those inclined to be realists about QM, about which super-empirical virtues matter and which interpretations manifest them, suggests that the question persists at the ground level.


A structural realist might react to the underdetermination of interpretation by quantum theory with the judo-like hopes that (i) there is some structure all these interpretations share and that (ii) identifying that structure constitutes a specification of the content of structural realism about QM. Point (ii) would assuage the satellite-level concern about structural realism that its content is hopelessly indeterminate. Alas, even confining attention to the contender interpretations, it is not at all clear they have in common any structure of interest for realism. Contender interpretations attribute QM different types of state spaces (for Everett, it's a Hilbert space; for Bohm, a space of particle configurations) and different types of dynamics (deterministic Schrödinger evolution, stochastic collapse, deterministic guidance equation). If anything, the content problem for structural realism seems even more acute when posed at the ground level and with respect to QM than it did at the satellite level.

E. Interpreting QM∞
The stalwart realist can remain resolute in the face of the foregoing reservations. The existence of disputes, over which super-empirical virtues matter and which interpretations of QM best manifest them, hardly entails that such questions lack informative answers singling out some interpretation of QM as best. That interpretation supplies content to warranted realism about QM. For philosophers, this view has the added attraction of leaving philosophy some work to do in developing the scientific image: the work of determining what true picture of the world, illuminated by which virtues, science affords.

Unfortunately, I think it is difficult to square this philosophy-affirming account of realism and quantum interpretation with the practice and interpretation of some of our most important quantum theories, including quantum field theories and other quantum theories that deal with "infinite" systems. Labelling these theories QM∞, I'll sketch the special interpretive problems they pose, then indicate why I believe theories of this sort often lack a single "winning" interpretation that makes good enough sense of enough of their successes for belief in that interpretation to be abductively warranted.

Section 3.B's crash course in QM suggests that, different as quantum and classical theories are, they are also similar. At their hearts lies a structuring of physical magnitudes afforded in the classical case by the Poisson bracket and in the quantum case by the commutator bracket. This inspires a recipe for generating, from a classical theory, a quantum theory that is its quantization. To follow this Hamiltonian quantization recipe, start with the Poisson bracket between the classical position and momentum magnitudes, and try to find a vector space on which act a pair of operators satisfying a commutator bracket that mirrors the classical Poisson bracket. What you are looking for is a vector space representation of the canonical commutation relations (or CCRs) defining the quantum theory you seek. Once you find a representation of the CCRs, use the operators bearing the representation to generate further quantum magnitudes standing to one another in functional and nomic relationships; having thus assembled your collection of quantum magnitudes, define a family of quantum states as those which assign well-behaved probabilities to possible values of those magnitudes. The upshot: a quantum theory, regarded as the quantization of the classical theory you started with.

Recipes are only as good as their results are consistent. About this Hamiltonian quantization recipe, we might worry: is it possible to follow it starting from the same classical theory and obtain different quantum theories? The Stone-von Neumann theorem assures us that it is not – provided that the classical theory we start from concerns systems with finitely many degrees of freedom (finitely many particles moving in finitely many dimensions, say). No matter how different a pair of representations of the CCRs quantizing such a theory might seem, those representations will always prove to be notational variants on one another. They will agree about what's physically possible, as well as about what structures of properties physical possibilities instantiate. If a classical theory is suitably finite, its quantization is essentially unique.

Classical field theories are not suitably finite. The systems they address are fields, specified (in the simplest case) by assigning a number (the field's strength) to each point of space. Because there are infinitely many points of space, a field enjoys infinitely many degrees of freedom. We can still follow the Hamiltonian quantization recipe to quantize a classical field theory. The result is a quantum field theory (QFT) – but not a unique one. Having moved outside the scope of the Stone-von Neumann theorem, we confront infinitely many apparently physically distinct ways to quantize a given classical field theory. Different quantizations can differ on such physically basic questions as whether there are particles at all, and if there are, whether it's possible to have only finitely many of them. For each QFT, there are infinitely many rival vector space structures, keyed to infinitely many distinct representations of the CCRs constituting the theory, that seem equally qualified to serve as that QFT.

Two broad strategies of response to the non-uniqueness suggest themselves immediately. The privileging strategy is to identify the theory with a unique physically significant vector space representation of the CCRs and consign rival representations to the dustbin of mathematical artefacts. Ascending a level of abstraction, the abstraction strategy identifies the theory with features all representations of the CCRs share – thereby consigning features parochial to particular representations to the dustbin of physically superfluous structure.
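In symbols (a sketch supplying standard notation for what the text states verbally): the Hamiltonian recipe replaces the classical Poisson bracket with a commutator bracket,

    \[ \{q, p\} = 1 \quad\leadsto\quad [Q, P] := QP - PQ = i\hbar\,\mathbf{1} \]

where Q and P act on some Hilbert space. The Stone-von Neumann theorem says that, for finitely many degrees of freedom, all irreducible representations of these relations (taken in their bounded, Weyl form) are unitarily equivalent – whence the essential uniqueness just described. For infinitely many degrees of freedom, the theorem's hypotheses fail, and unitarily inequivalent representations proliferate, as recounted above for quantum field theories.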

Ruetsche (2011) examines the uses to which theories of QM∞ are put, in the hopes that a winning interpretive strategy, a strategy that makes the most sense of the most uses, will emerge.

Theories of QM∞ are used in many contexts – particle physics, cosmology, black hole thermodynamics, solid state physics, homely statistical physics – and with many aims – to model, explain, predict, and serve as launching pads for the development of future physics. An interpretive strategy that secures one aim in one context may frustrate another aim in another – or even in the same – context.

The privileging strategy has worked capitally for standard particle physics, which privileges a representation by requiring obedience to the symmetries of a particularly simple (Minkowski) spacetime; the representation privileged anchors a fundamental particle notion. Still, there are aspects of standard particle physics – for instance, the "soft photons" involved in certain scattering experiments – that cannot be modeled in the privileged representation but can be modeled by discarded representations. And some explanatory agendas involving particles exceed the confines of a single privileged representation: accounts of cosmological particle creation, for instance, appeal to different (and rival) representations, privileged at different epochs in the history of the cosmos. What's more, QM∞ abounds in other explanatory agendas – symmetry breaking, ferromagnetism, superconductivity, the dynamics of an expanding universe – that invest a variety of representations with physical significance. These explanations would be hamstrung by the privileging strategy. The abstraction strategy lends aid and comfort to some of these agendas. But not all of them. For instance, among the "surplus" properties the abstraction strategy consigns to physical irrelevance are the order properties that distinguish between the distinct phases in a phase transition, as well as the properties that enable us to make sense of the dynamics of mean field models.

There are worthwhile physical projects promoted by each strategy, worthwhile projects frustrated by each strategy, and worthwhile physical projects frustrated by both strategies. My conclusion: in general there is no winning interpretive strategy. Rather, different and rival interpretations make sense of different aspects and elements of theoretical success.


4 Another virtue?

QM/QM∞ dramatizes the pedestrian but important point that a successful theory does not automatically admit any interpretation that explains any of its successes, much less a single interpretation that explains (almost) all of them. Of course, even if there's a winning interpretation, it's contentious whether a realist attitude toward that interpretation is permitted or mandated. At the satellite level, permission and obligation are the focus of dispute. At the ground level, the prior question – realism about what? – comes to the fore. There are many reasons to be interested in the prior question even if you're inclined to side against realists in the satellite-level debate. I will close by gesturing at some of them.

A possible lesson of the last section is that the standard lists of super-empirical virtues are missing an important virtue, one possessed in abundance by QM. This virtue is ambiguity, in the sense of the capacity to admit a variety of interpretations, none of which dominates to the extent of stifling all others. One might react that, without a dominant interpretation, we don't understand QM. I would respond that keeping tabs on rival interpretations, their accomplishments and demerits, is understanding QM. Recalling the scientific jungle red in tooth and claw, one can view ambiguity as a virtue akin to genetic diversity in a breeding population – a resource of constrained adaptability that enables QM to meet the demands and expectations, many and varied, a living scientific theory faces. A guess that can be falsified by the future of science is that because ambiguity is adaptive, successor theories will share with QM the feature that no single interpretation emerges as the best. If the guess is correct, ambiguity isn't a temporary frailty of current theory but an abiding strength of science as humans practice it.

Notes
1 But see part 3.D!
2 "Entity realism," an important family of positions I don't have space to discuss properly, rallies around Hacking's slogan, "it's real if we can spray it." (See M. Egg, "Entity realism," ch. 10 of this volume.) A connected omission is the troubled status of the notion of causation in the quantum realm. I suspect that QM destabilizes our grip on causation to such an extent that entity realisms founder when they venture upon quantum waters.
3 See Wallace (2006) for more.
4 A suspicion I lack space to develop here is that Wallace's (2012) Everettian interpretation of QM is a halfway house not wholly suited to realism.
5 Somewhat more precisely: the quantum account of a system is set in a complex Hilbert space appropriate to that system; possible pure states of the system correspond to unit vectors in the Hilbert space; physical properties of the system (aka observables) correspond to self-adjoint operators on that Hilbert space. According to the quantization algorithm, possible values of the observable corresponding to the self-adjoint operator V are given by V's eigenvalues vi.
6 Via the Born rule: the probability that a V measurement, performed on a system whose state is given by the unit vector |psi>, yields the outcome vi is |<vi|psi>|², where |vi> is the eigenvector associated with the eigenvalue vi and the bracket is the Hilbert space inner product. I am assuming that states are pure and observables non-degenerate. For a generalization, see Redhead (1987).
7 Redhead (1987) elaborates.
8 As Barrett (1999) and Wallace (2012) elaborate, variations and subtleties abound.

References
Barrett, J. (1999) The Quantum Mechanics of Minds and Worlds, Oxford: Oxford University Press.
Fine, A. (1996) The Shaky Game: Einstein, Realism, and the Quantum Theory (2nd ed.), Chicago: University of Chicago Press.


van Fraassen, B. C. (1980) The Scientific Image, Oxford: Oxford University Press.
van Fraassen, B. C. (1991) Quantum Mechanics: An Empiricist View, Oxford: Oxford University Press.
Goldstein, S. (1998) "Quantum Theory without Observers," Physics Today 51(3), 42–47.
Hempel, C. G. and Jeffrey, R. (2000) Selected Philosophical Essays, Cambridge: Cambridge University Press.
Kitcher, P. (1993) The Advancement of Science: Science without Legend, Objectivity without Illusions, Oxford: Oxford University Press.
Laudan, L. and Leplin, J. (1991) "Empirical Equivalence and Underdetermination," The Journal of Philosophy 88(9), 449–472.
Maudlin, T. (2002) Quantum Non-locality and Relativity, New York: Wiley.
Maudlin, T. (2010) "Can the World Be Only Wave Function?" in S. Saunders, J. Barrett, A. Kent, and D. Wallace (eds.), Many Worlds? Everett, Quantum Theory, and Reality, Oxford: Oxford University Press, pp. 121–143.
Quine, W.V.O. (1957) "The Scope and Language of Science," British Journal for the Philosophy of Science 8, 1–17.
Redhead, M. (1987) Incompleteness, Nonlocality, and Realism: A Prolegomenon to the Philosophy of Quantum Mechanics, Oxford: Oxford University Press.
Ruetsche, L. (2011) Interpreting Quantum Mechanics: The Art of the Possible, Oxford: Oxford University Press.
Wallace, D. (2006) "In Defence of Naiveté: The Conceptual Status of Lagrangian Quantum Field Theory," Synthese 151(1), 33–80.
——— (2012) The Emergent Multiverse: Quantum Theory according to the Everett Interpretation, Oxford: Oxford University Press.
Worrall, J. (1989) "Structural Realism: The Best of Both Worlds?" Dialectica 43(1–2), 99–124.

24
SCIENTIFIC REALISM AND PRIMORDIAL COSMOLOGY

Feraz Azhar and Jeremy Butterfield

1 Introduction

1.1 Scientific realism about the claims of modern cosmology
We will discuss scientific realism in relation to cosmology, especially primordial cosmology. Here "primordial" means for us "so early that the energies are higher than those which our established theories successfully describe". This will turn out to mean: times earlier than about 10⁻¹¹ seconds after the Big Bang (!).

We begin by stating our allegiance to scientific realism. We take scientific realism to be the doctrine that most of the statements of the mature scientific theories that scientists accept are true, or approximately true, whether the statement is about observable or unobservable states of affairs. Here, "true" is to be understood in a straightforward correspondence sense, as given by classical referential semantics. And, accordingly, scientific realism holds our acceptance of these theories to involve believing (most of) these statements – that is, believing them to be true in a straightforward correspondence sense. This characterization goes back, of course, to van Fraassen (1980: 7–9). It is not the only characterization: judicious discussions of alternatives include Stanford (2006: 3–25, 141f.) and Chakravartty (2011: section 1). But it is a widely adopted characterization – and will do for our purposes.

We are scientific realists in this sense. We concede that to defend this position in general requires precision about the vague words "mature", "accept", "observable", "true" and (perhaps especially) "approximately true". But we will leave this general defense to more competent philosophers (see e.g. Psillos (1999, 2009), and other chapters of this volume). We will also duck out of a general defense of our sense of scientific realism for today's cosmological theories. It would be an ambitious task, for it would involve two major projects. One would have to first define what parts of these theories are indeed "mature and accepted". Then one would have to argue that most of these parts are "true, or approximately true" in a correspondence sense, whether they are about observable or unobservable states of affairs. These projects outstrip both our knowledge of cosmology and (a rather different matter) our knowledge of what is the state of play in cosmology: that is, what the community of cosmologists regards as accepted.

What we propose to do instead is to describe how one theme central to the debate about scientific realism, namely the under-determination of theory by data, plays out in the context of primordial cosmology. To motivate and clarify this proposed project, we should immediately make four comments. Our first three comments discuss under-determination of theory by data and scientific realism in relation to all of modern cosmology. The fourth focuses on primordial cosmology.

(a) Undermining scientific realism? No: The under-determination of theory by data is of course usually considered a threat to scientific realism. (See D. Tulodziecki, "Underdetermination", ch. 5 of this volume.) Does this mean that by illustrating it, we will undermine scientific realism – and, being scientific realists, make life harder for ourselves? No: rather, the illustrations clarify – and agreed: limit – what a scientific realist should take as definitively established by modern cosmology. We can already state the main reason why not (and its details will be filled out in what follows). We have taken scientific realism to be a claim along the lines "we can know, indeed do know, about the unobservable". But that does not imply that – and it is simply no part of scientific realism to claim that – "all the unobservable is known"; or even "all the unobservable is knowable".

(b) Care about formulating the under-determination of theory by data: Our discussion will show that cosmology provides novel perspectives on the idea of under-determination of theory by data. For example, in sections 2 and 3, our discussion will show how primordial cosmology threatens the usual philosophical distinction between (i) under-determination by all data one could in principle obtain and (ii) under-determination by all data obtainable in practice or up to a certain stage of enquiry. (Following Sklar [1975: 380f.], (ii) is often called "transient under-determination".) Agreed: the philosophical literature has already made this distinction precise in various ways, corresponding to precise meanings of "principle" and "practice". But primordial cosmology threatens the basic contrast between (i) and (ii): for data about the early universe is so hard to get that what is not obtainable in practice looks very much unobtainable in principle! To sum up: one needs to be careful about how one formulates the under-determination of theory by data; and accordingly, one should be wary of very general arguments about how under-determination of theory by data threatens scientific realism.

(c) Other projects: This also suggests a more general moral about the debate over scientific realism. Maybe one should be wary of general claims about other topics, including topics that seem to favor scientific realism, such as novel predictions, explanatory power, and agreement of results obtained from independent lines of evidence: as against claims about how these topics play out in a specific scientific context. (See e.g. K. Brad Wray, "Success of science as a motivation for realism", ch. 3 of this volume; D. Tulodziecki, "Underdetermination"; L. Henderson, "Global versus local arguments for realism", ch. 12 of this volume.) Certainly, one could readily support this general moral with details of the ambitious task we ducked out of: a general defense of scientific realism for today's cosmological theories. Modern cosmology is a vast science, with many sub-fields. So, to take just one philosophical topic as an example: surely no single distinction between theory and observation will be enlightening for cosmology. Indeed, this is so, even if we focus on one area of enquiry, such as establishing astronomical and cosmological distances. The sophisticated methods and instruments used for this (cf. e.g. Pasachoff et al. [1994]; Rowan-Robinson [2004: chapter 3]; and Longair [2003: chapter 18, 478–498, 2006: chapters 7, 11, 13]) surely defy any simple and uniform theory-observation distinction.


(d) Why primordial cosmology?: Our fourth comment concerns why we choose to focus on primordial cosmology, that is, on issues about under-determination of theory by data that arise for times so soon after the Big Bang that the energies are higher than those which our established theories successfully describe. We have two reasons for this choice. First: as we announced in (b), primordial cosmology illustrates the under-determination of theory by data in interestingly specific forms. Second: being scientific realists, we believe that much of the history of the observable universe for times later than about 10⁻⁴ seconds after the Big Bang (!) is now, indeed, established – so that under-determination of theory by data does not "bite".1

1.2 Prospectus
We thus turn to yet earlier times – to primordial cosmology – and focus on two main issues illuminating the under-determination of theory by data.2 But as we said in section 1.1, the issues do not threaten scientific realism: they simply limit what a scientific realist should take as definitively established by modern cosmology.

The first issue (section 2) concerns the difficulty of observationally probing the very early universe. Thus expressed, this is hardly news: we would expect it to be difficult! But the issue is more specific. In the last thirty years, it has become widely accepted that at times much earlier (logarithmically) than one second after the Big Bang, there was an epoch of accelerating expansion (dubbed inflation) in which the universe grew by many orders of magnitude. The conjectured mechanism for this expansion was a physical field, φ, the inflaton field, subject to an appropriate potential V(φ) (or maybe a set of such fields, but many models use a single field). Nowadays, the evidence for this inflationary framework lies principally in its explanation of some main features of the cosmic microwave background radiation (CMB), discovered in 1964 – and nowadays cosmology's main probe of the early universe.3 However, this evidence leaves very underdetermined the detailed physics of the inflationary epoch: in particular, it allows many choices for the shape of the potential V(φ), and there are nowadays many different models of inflation.

The second issue (section 3) concerns the difficulty of confirming a cosmological theory that postulates a multiverse, that is, a set of domains (universes) each of whose inhabitants (if any) cannot directly observe or otherwise causally interact with other domains. This issue arises because many models of inflation amount to such a theory. That is: according to many such models, in an epoch even earlier than that addressed in our first issue – and even harder to access – a vast number of different domains came into being, of which just one gave rise to the universe we see around us, while the others "branched off" and are forever inaccessible to us. That is, of course, roughly speaking: more details in section 3. For now, we emphasize that this picture is very speculative: the physics describing this earlier epoch is not known. The energies and temperatures involved are far above those that are so successfully described by the standard model of elementary particle physics and far above those probed by the observations described in relation to our first issue (as in section 2).

Nevertheless, cosmologists have addressed the methodological question how we can possibly confirm a multiverse theory, often by using toy models of such a multiverse. The difficulty is not just that it is hard to get evidence. We also need to allow for the fact that what evidence we get may be greatly influenced, indeed distorted, by our means of observation: that is, by our location in the multiverse. Thus one main and well-known aspect concerns the legitimacy of anthropic explanations. That is, in broad terms, we need to ask: "how satisfactory is it to explain an observed fact by appealing to its absence being incompatible with the existence of an observer?"


For all the issues we discuss, it will be clear that much remains unsettled as regards both physics and philosophy. But, as we said, we maintain that these remaining controversies do not threaten scientific realism, essentially because scientific realism is a claim along the lines “we can know, indeed do know, about the unobservable”, and is not at all committed to claiming that “all the unobservable is knowable”.

2 The very early universe: inflation?
So we turn to questions about the very early universe: roughly speaking, about times earlier than 10⁻¹¹ seconds after the Big Bang. We will confine ourselves to the most widely accepted framework for understanding this regime: inflation.4 Thus section 2.1 begins by introducing inflation and so functions as a prospectus for both this section and section 3. Section 2.2 discusses the conjectured mechanisms for it, and section 2.3 presents this section's main point about scientific realism: that the details of the mechanism are seriously under-determined by the data.

2.1 The idea of an inflationary epoch
As mentioned in section 1.2: from about 1980 onwards, cosmologists have proposed that there was a very early (and so brief!) epoch of very rapid, indeed accelerating, expansion. They have made three main claims about this epoch, claims which will dominate this section and the next, so that we give them mnemonic labels.

(Three): Such an epoch solves three problems faced by the existing Big Bang model, which by 1980 was well established (it had already been dubbed "standard" by Weinberg [1972: 469] and Misner, Thorne, and Wheeler [1973: 763]). These problems are the "flatness", "horizon", and "monopole" problems.

(Inflaton): If this epoch was appropriately caused – namely by a conjectured inflaton field – it would lead to characteristic features of the CMB: namely, characteristic probabilities for the amplitudes and frequencies of the slight wrinkles (unevennesses) in the CMB's temperature distribution.

(Branch): This mechanism for inflation would naturally involve a branching structure in which, during the epoch, countless spacetime regions branch off and themselves expand to yield other universes so that the whole structure is a "multiverse" whose component universes cannot now directly observe/interact with each other, since they are causally connected only through their common origin in the very early universe.

The evidence for, and status of, these three claims varies. Most cosmologists regard claim (Three) as established: that is, there was an epoch of accelerating expansion (however it may have been caused), and the occurrence of this epoch solves the flatness, horizon, and monopole problems. But cosmologists agree that the cause of this epoch – its mechanism: the dynamical factors that started it and then played out so as to end it – is much more conjectural, and therefore, so also is the third claim, (Branch).5 The early papers proposed that this mechanism was a scalar field, φ, the inflaton field, evolving subject to a certain potential V(φ); and that this mechanism led to characteristic features of the CMB. It is testimony to the strength of this proposal that it remains the most popular mechanism, and there remain versions of it which are confirmed by the CMB data. But this mechanism, and so the claim (Inflaton), is undoubtedly more speculative than the mere occurrence of the epoch of accelerating expansion. So we must beware of the ambiguity of the word "inflation": it can refer either to the epoch of accelerating expansion (also called the "inflationary epoch"), however it was caused, or to the (more speculative) mechanism for its occurrence. Finally, the idea that the mechanism for inflation spawns many universes, that is claim (Branch), is even more speculative.


We will set aside the least controversial claim, (Three): that is, the claim that there was an inflationary epoch (however it was caused) and that such an epoch solves the three problems – flatness, horizon, and monopole – faced by previous general relativistic cosmological models. Agreed: this claim is a natural topic for philosophers, since two of the three problems are problems, not so much of the empirical adequacy of general relativistic cosmology, as of explanation. But for that very reason, the claim has already attracted philosophical discussion (e.g. Earman [1995: chapter 5, especially 142–159]; Earman and Mosterín [1999: sections 4–7, 14–26]; Smeenk [2013: section 6, 633–638]; Butterfield [2014: section 4.1–4.2, 65–67]; McCoy [2015]; and Azhar and Butterfield [2016: section 4.1.1]); and so we set it aside. We just report the consensus that the solutions of all three problems come out quantitatively correct, if we postulate that, for example, the inflationary epoch began at about 5 × 10⁻³⁵ seconds after the Big Bang and ended at about 10⁻³⁴ seconds after the Big Bang, during which the scale factor of the universe grew by a factor of about e⁵⁰ ≈ 10²²! (For details, cf. e.g. Liddle [2003: 106–107]; Serjeant [2010: 55].) The solutions provided by such numbers are resilient, in that they are independent of what might have caused the inflationary epoch . . . but of course, one still asks: what caused that epoch?

2.2 What caused the accelerating expansion?

In answering this question, the first thing to stress is that we are here in the realms of speculation: recall Weinberg’s warning in endnote 5! We shall confine ourselves to reporting a “minimal” answer: which postulates a scalar field, φ, the inflaton field, evolving subject to a certain potential V(φ) – cf. claim (Inflaton) at the start of section 2.1. As we will see, a few assumptions about this mechanism yield some characteristic predictions for subtle features of the CMB. And since these features have been observed by a sequence of increasingly refined instruments (such as the satellites COBE, WMAP, and Planck), these confirmed predictions are nowadays regarded as a more important confirmation of inflation than its solution of section 2.1’s three problems. There is, however, a very considerable under-determination of the mechanism of inflation by our data – both today’s data, and perhaps, all the data we will ever have. This (unfortunate!) predicament will be the topic of section 2.3.

2.2.1 The inflaton field

We begin with a simple classical picture of inflation, postponing quantum considerations to section 2.2.2. We begin by assuming that at approximately $10^{-35}$ seconds after the Big Bang, the stress-energy of the universe was dominated by that associated with some (yet-to-be-discovered) scalar field $\phi(t, \mathbf{x})$, known as the inflaton, and that the evolution of this scalar field is determined by a potential energy density V(φ) and by its coupling to the gravitational field.

We also assume that the dynamics of the scalar field is very simple, as follows. The scalar field is homogeneous: that is, the same throughout space, so that the field is then only a function of time, $\phi(t, \mathbf{x}) \equiv \phi_0(t)$.6 And the potential has a single minimum towards which the field “rolls”, much as a classical particle would roll towards the minimum of a potential, subject to a force in the direction of the negative of the potential’s gradient.

It turns out that accelerating expansion of the underlying spacetime can occur when the potential energy density dominates over the kinetic energy density of the inflaton field: that is, $V(\phi_0(t)) \gg \frac{1}{2}\dot{\phi}_0^2(t)$: a regime called slow-roll inflation. In a bit more detail: what one requires for accelerating expansion is a fluid with a pressure P that is negative; more specifically,


$P < -\frac{1}{3}\rho$, where ρ is the energy density of the fluid. For the scalar field (which can be thought of as a fluid in this sense), where $V(\phi_0(t)) \gg \frac{1}{2}\dot{\phi}_0^2(t)$, the pressure P comfortably satisfies this constraint.

Inflation comes to an end when the inflaton finds its way to the minimum of the potential; typically, the slope of the potential gets larger, and the kinetic energy density of the inflaton increases so that accelerating expansion can no longer occur. The inflaton oscillates (with a decaying amplitude) around the minimum of the potential, and the stored energy is released into particles of the standard model of particle physics through a process known as reheating. Although this is just one means through which the universe can expand by the amounts needed to solve the three problems mentioned in section 2.1, taking this slow-roll scenario seriously – that is, analyzing single-field slow-roll inflationary models and deducing observational parameters – has hitherto been cosmologists’ predominant way of exploring and assessing the idea of inflation.
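To make the slow-roll story concrete, here is a minimal numerical sketch (ours, not the chapter’s) of the classical dynamics just described, in reduced Planck units and with an illustrative quadratic potential $V(\phi) = \frac{1}{2}m^2\phi^2$; as section 2.3 will stress, the actual inflaton potential is under-determined, so this choice is purely pedagogical:

```python
# Minimal sketch of single-field slow-roll inflation (our illustration; the
# quadratic potential and all numbers are assumptions, not the chapter's).
# Units: reduced Planck units, M_Pl = 1.
import numpy as np
from scipy.integrate import solve_ivp

m = 1e-6                                  # illustrative inflaton mass
V = lambda phi: 0.5 * m**2 * phi**2       # toy potential with a single minimum
dV = lambda phi: m**2 * phi

def rhs(t, y):
    phi, phidot = y
    # Friedmann equation: H^2 = (kinetic + potential energy density) / 3
    H = np.sqrt((0.5 * phidot**2 + V(phi)) / 3.0)
    # Klein-Gordon equation in the expanding background: Hubble friction + force
    return [phidot, -3.0 * H * phidot - dV(phi)]

# Start high up the potential ("large-field" initial condition, phi = 16 M_Pl)
sol = solve_ivp(rhs, [0.0, 2.5e7], [16.0, 0.0], max_step=2e4, rtol=1e-8)
phi, phidot = sol.y

# Slow roll holds while V(phi) >> (1/2) phidot^2; once the kinetic energy
# catches up, accelerating expansion stops and the field oscillates about 0.
slow_roll = V(phi) > 0.5 * phidot**2
print(f"slow roll for the first {slow_roll.argmin()} of {len(phi)} steps")
print(f"final field value: {phi[-1]:.3f} (oscillating about the minimum)")
```

The run exhibits the two regimes in the text: a long stretch in which the field creeps down the potential with the slow-roll condition satisfied, followed by decaying oscillations about the minimum (where, in the full story, reheating would occur).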

2.2.2 Connecting the inflaton to the CMB

If this was all there was to the theory of inflation, it would probably not have the following amongst cosmologists that it does today. Arguably the most impressive success of the inflationary paradigm is its providing a mechanism for understanding the origin of subtle features of the CMB. The setting for this success is a quantum treatment of the (putatively classical) story sketched in section 2.2.1, and it is to this connection between the inflaton and the CMB that we now briefly turn.

The basic observable associated with the CMB is the intensity of radiation as a function of frequency and direction in the sky (Hu and Dodelson 2002). To a very good approximation, the CMB has a black-body spectrum with an average temperature of $T \approx 2.73$ K (= −270.42 degrees C). But the temperature is not exactly uniform across the sky; there are small fluctuations about this mean on the order of $10^{-5}$ in size. That is: given the temperature T(ϑ, ϕ) for a given direction (ϑ, ϕ) in the sky, we subtract the mean temperature $\bar{T}$ and define a dimensionless temperature anisotropy

$$\frac{\Delta T}{T}(\vartheta, \varphi) := \frac{T(\vartheta, \varphi) - \bar{T}}{\bar{T}} \qquad (2.1)$$

It is this quantity that is about $\pm 10^{-5}$ for any direction (ϑ, ϕ). Inflationary cosmology proposes that these anisotropies come from small perturbations in the inflaton field, arising from its quantum fluctuations – perturbations which can then be connected (via the appropriate transfer function) to observables, such as the angular power spectrum of temperature fluctuations of the CMB.

The proposed mechanism is that as the (homogeneous) inflaton $\phi_0(t)$ rolls down the inflaton potential, it acquires spatially dependent quantum fluctuations $\delta\phi(t, \mathbf{x})$. These fluctuations mean that inflation will end at different points of space $\mathbf{x}$ at different times, and this leads to fluctuations in energy density after the end of inflation, ultimately giving rise to fluctuations in CMB temperature as a function of position in the sky. One can connect quantities associated with quantum fluctuations during inflation (viz. scalar and tensor power spectra) with angular power spectra for temperature in the CMB (and for polarization, though we won’t focus on polarization anisotropies here – see Baumann and Peiris [2009] for further details). Conversely, measured power spectra in the CMB can be used to infer primordial power spectra.

As reported by the Planck Collaboration, inferred primordial power spectra are indeed consistent with single-field slow-roll models (see Ade et al. [Planck Collaboration], 2016, for more

details). But how many such single-field slow-roll models are consistent with CMB measurements? It is to this question, and to the under-determination that its answer reveals, that we now turn. (First, though, a brief numerical aside on Eq. (2.1).)
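The following toy sketch (ours, not the chapter’s; the mock sky map and its fluctuation level are put in by hand) illustrates the order of magnitude in Eq. (2.1):

```python
# Toy check of Eq. (2.1): build a mock temperature map with fractional
# fluctuations of order 1e-5 about the CMB mean, form the dimensionless
# anisotropy, and confirm its rms. (Illustrative only; real analyses work
# with spherical-harmonic power spectra on properly pixelized skies.)
import numpy as np

rng = np.random.default_rng(0)
T_mean = 2.725                     # mean CMB temperature in kelvin
T_map = T_mean * (1.0 + 1e-5 * rng.standard_normal((180, 360)))  # mock sky

delta = (T_map - T_map.mean()) / T_map.mean()    # Eq. (2.1)
print(f"rms of Delta T / T: {delta.std():.2e}")  # ~1e-5, as observed
```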

2.3 A plethora of models

The fact that inflation provides a mechanism for understanding both (i) very large-scale homogeneous features of our universe (which are the topic of section 2.1’s three problems) and (ii) much smaller-scale inhomogeneities apparent in the CMB (section 2.2) suggests that inflationary models should be highly constrained. Indeed, they are. But there remains a wide variety of possible inflationary models which yield these impressive successes. And this variety is not a mere matter of (a) margins of error or (b) simplicity: like a case where a physical field, for example a potential function, can be ascertained either (a) only within certain bounds or (b) only by assuming it is simple in some precise sense (e.g. being a polynomial of degree at most four). Instead, inflationary models that differ by substantially more than matters (a) and (b) yield predictions that are the same – at least, the same so far as we can confirm them.

Agreed: one might hope that this under-determination is not a problem with the theory of inflation per se but only a reflection of the effective nature of the theory. That is: although primordial inflation operates at very high energies ($\sim 10^{16}$ GeV), it is expected to be some low-energy limit of a fundamental theory whose details are yet to be worked out and which will, one hopes, overcome the under-determination. Be that as it may, we now give some details about the wide variety of possible inflationary models. Broadly speaking, inflationary model building is pursued along three main avenues in terms of:

(i) a single scalar field (as outlined in section 2.2.1); or
(ii) multiple scalar fields (known as “multi-field inflation”); or
(iii) non-scalar degrees of freedom.

As in section 2.2, we shall focus on (i) and on models in which the scalar field is minimally coupled to gravity, which is, presumably, the simplest means of realizing inflation.

First, the good news. Such models predict negligible non-Gaussianities in the CMB and are thus promising candidates for providing a description of the state of the very early universe. And some of them disagree with each other as regards some observations we may indeed be able to make. For example: one can categorize such models according to the range of field values Δφ through which the inflaton rolls (down the potential), between the time when the largest scales now observed in the CMB were created and the end of inflation. When this range Δφ is smaller than the Planck scale, $\Delta\phi < M_{\mathrm{Pl}}$, we say there is “small-field inflation”, whereas large-field inflation occurs when $\Delta\phi > M_{\mathrm{Pl}}$. One observational difference between these two categories is the size of the primordial gravitational waves they predict: large-field inflation (but not small-field inflation) predicts gravitational waves that we may indeed be able to see in the near future.

But there is also bad news: that is, a worrying plethora of models. In a bid to understand the various possibilities for inflationary models, Martin, Ringeval, and Vennin (2014) have catalogued and analyzed a total of 74 (!) distinct inflaton potentials that have been proposed in the literature, all of them corresponding to a minimally coupled, slowly rolling, single scalar field driving the inflationary expansion. And a more detailed Bayesian study (Martin, Ringeval, Trotta, and Vennin, 2014), expressly comparing such models with the Planck satellite’s 2013 data about the CMB, shows that of a total of 193 (!) possible models – where a “model” now includes not just an

inflaton potential but also a choice of a prior over parameters defining the potential – about 26% of the models (corresponding to 15 different underlying potentials) are favoured by the Planck data. A more restrictive analysis (appealing to complexity measures in the comparison of different models) reduces the total number of favoured models to about 9% of the models, corresponding to nine different underlying potentials (though all of the “plateau” variety).

To sum up: although this is an impressive reduction of the total number of possibilities, there remains, nevertheless, an under-determination of the potential by the data, an under-determination that is not a mere matter of (a) margins of error, or (b) simplicity of the kind noted at the start of this section. And of course, we have not surveyed the other avenues, (ii) and (iii), listed earlier, through which inflation might be implemented: let alone the rival frameworks mentioned in endnote 4. Agreed, and as we said at the start of this section: one can hope that this under-determination will be tamed as theories are developed that describe the universe at energy levels higher than those at which inflation putatively operates. But – as we shall discuss in section 3 – this regime of yet higher energies may yield another, perhaps more serious, problem of under-determination: a problem relating to distance scales that far outstrip those we have observed and, indeed, those that we will ever observe.

3 Confirming a multiverse theory

3.1 Eternal inflation begets a multiverse

One of the more startling predictions of a variety of inflationary models is the existence of domains outside our observable horizon, where the fundamental constants of nature, and perhaps the effective laws of physics more generally, vary (Guth 1981; Guth and Weinberg 1983; Steinhardt 1983; Vilenkin 1983; Linde 1983, 1986a, 1986b; Linde [2017] is a history). More specifically: such a “multiverse” arises through eternal inflation, in which once inflation begins, it never ends. Eternal inflation is predominantly studied in two broad settings: (1) false-vacuum eternal inflation and (2) slow-roll eternal inflation.

(1) False-vacuum eternal inflation can occur when the inflaton field φ, which is initially trapped in a local minimum of the inflaton potential V(φ) (a state that is classically stable but quantum mechanically metastable – that is, a “false” vacuum), either: (a) tunnels out of the local minimum to a lower minimum of energy density, in particular to the “true vacuum”, that is, the true ground state (as described by Coleman and De Luccia [1980]); or (b) climbs, thanks to thermal fluctuations, over some barrier in V(φ), to the true vacuum.

The result is generically a bubble (i.e. a domain) where the field value inside approaches the value of the field at the true vacuum of the potential. If (i) the rate of tunneling is significantly less than the expansion rate of the background spacetime and/or (ii) the temperature is low enough, then the “channels” (a) and/or (b) by which the field might reach the true vacuum are frustrated. That is: inflation never ends, and the background inflating spacetime becomes populated with an unbounded number of bubbles (cf. Guth and Weinberg [1983] and Sekino, Shenker, and Susskind [2010]).7

(2) In slow-roll eternal inflation, quantum fluctuations of the inflaton field overwhelm the classical evolution of the inflaton in such a way as to prolong the inflationary phase in some regions of space. When this is sufficiently probable, eternal inflation can again ensue


(Vilenkin [1983]; Linde [1986a], [1986b]; see Guth [2007] for a lucid summary and Creminelli et al. [2008] for a recent discussion). It is striking that the self-same mechanism that gives rise to subtle features of the CMB (as discussed in section 2.2.2) can, under appropriate circumstances, give rise to a multiverse.

Though it is not clear how generic the phenomenon of eternal inflation is (Aguirre 2007a; Smeenk 2014), it remains a prediction of a wide class of inflationary models. So in this section, we turn to difficulties about confirming cosmological theories that postulate a multiverse. There is a growing literature about these difficulties. It ranges from whether there could, after all, be direct experimental evidence for the other universes (via imprints of bubble collisions in our domain; see Feeney et al. [2011] and references therein), to methodological debates about how we can possibly confirm a multiverse theory without ever having such direct evidence. We focus here on the methodological debates.8 Thus for us, the main theme will be – not that it is hard to get evidence – but that what evidence we get may be greatly influenced by our location in the multiverse. In the jargon: we must beware of selection effects. So one main and well-known theme is about the legitimacy of so-called anthropic explanations. In broad terms, the question is: how satisfactory is it to explain an observed fact by appealing to its absence being incompatible with the existence of an observer?

We of course cannot take on the large literature about selection effects and anthropic explanations (cf. e.g. Davies [1982]; Earman [1987]; Barrow and Tipler [1988]; McMullin [1993]; Rees [1997]; Bostrom [2002]; Mosterín [2005]; and Carr [2007]). We will simply adopt a scheme for thinking about these issues which imposes a helpful “divide and rule” strategy on the various (often cross-cutting and confusing!) considerations (section 3.2). This scheme is not original to us: it combines proposals by others – mainly Aguirre, Tegmark, Hartle, and Srednicki. Then in section 3.3, we will report results obtained by one of us (Azhar 2014, 2015, 2016, 2017), applying this general scheme to toy models of a multiverse (in particular, models about the proportions of various species of dark matter). The overall effect will be to show that there can be a severe problem of under-determination of theory by data.

3.2 A proposed scheme

In this section, we will adopt a scheme that combines two proposals due to others:

(i) a proposal distinguishing three different problems one faces in extracting from a multiverse theory definite predictions about what we should expect to observe: namely, the measure problem, the conditionalization problem, and the typicality problem (Aguirre and Tegmark [2005]; Aguirre [2007b]) (section 3.2.1);
(ii) a proposal due to Srednicki and Hartle (see Srednicki and Hartle 2010; Hartle and Srednicki 2007) to consider the confirmation of a multiverse theory in Bayesian terms (section 3.2.2).

3.2.1 Distinguishing three problems: measure, conditionalization, and typicality

Our over-arching question is: given some theory of the multiverse, how should we extract predictions about what we should expect to observe in our domain? A natural means to address this question is to somehow define the probability $P(p)$, assuming a physical theory T, of a given value, p say, of a set of observables. But calculational complexities and selection effects make this

difficult. Aguirre (2007b) proposes a helpful “divide and rule” strategy, which systematizes the various considerations one has to face. In effect, there are three problems, which we now describe.

3.2.1.A: THE MEASURE PROBLEM.

To define a probability distribution P, we first need to specify the sample space: the type of objects M that are its elements – traditionally called “outcomes”, with sets of them being called “events”. Each observable will then be a random variable on the sample space, so that for a set p of observables, $P(p)$ is well defined. One might take each M to be a domain in the sense mentioned, so that $P(p)$ represents the probability of p occurring in a randomly chosen domain: where P may or may not be uniform on the sample space.9 But it is not clear a priori that domains should be taken as the outcomes, that is, the basic alternatives. For suppose, for example, that T says domains in which our observed values occur are much smaller than domains where they do not occur. So if we were to split each of these latter domains into domains with the size of our domain, the probabilities would radically change. In short, the problem is that there seems no a priori best way of selecting the basic outcomes.

Besides, this problem is made worse by various infinities that arise in eternal inflation. Mathematically natural measures over reasonable candidates for the outcomes often take infinite values, and so probabilities often become ill defined (including when they are taken as ratios of measures). Various regularization schemes have been introduced to address such issues, but the probabilities obtained are not independent of the scheme used. This predicament – the need, for eternally inflating spacetimes, to specify outcomes and measures, and to regularize the measures so as to unambiguously define probabilities – is known as the measure problem. For a review of recent efforts, cf. Freivogel (2011). (Note that this measure problem is not the same as the problems besetting the definition of measures over initial states of a cosmological model; cf. Carroll [2014: footnote 13].)
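As a toy illustration of this scheme-dependence (ours, not the authors’; it concerns natural numbers, not spacetime domains): in an infinite collection of outcomes, “the fraction with property X” depends on how the outcomes are ordered before a cutoff is imposed.

```python
# Toy illustration (ours) of the regularization ambiguity: the *same*
# infinite set -- perfect squares among the naturals -- gets limiting
# frequency ~0 under the natural order, but 1/2 under an alternating order.
import math

def is_square(n: int) -> bool:
    r = math.isqrt(n)
    return r * r == n

N = 100_000  # cutoff for the regularized frequency

# Enumeration 1: natural order 1, 2, 3, ...
freq_natural = sum(is_square(n) for n in range(1, N + 1)) / N

# Enumeration 2: alternate squares with non-squares: 1, 2, 4, 3, 9, 5, 16, ...
squares = (k * k for k in range(1, N + 1))
non_squares = (n for n in range(2, 20 * N) if not is_square(n))
interleaved = [x for pair in zip(squares, non_squares) for x in pair][:N]
freq_alternating = sum(is_square(n) for n in interleaved) / N

print(f"natural order:     {freq_natural:.4f}")      # ~ 1/sqrt(N), tends to 0
print(f"alternating order: {freq_alternating:.4f}")  # tends to 1/2
```

The moral carries over: without a preferred ordering or measure on the outcomes, which is exactly what eternal inflation fails to supply, the regularized probabilities are artefacts of the scheme.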

3.2.1.B: THE CONDITIONALIZATION PROBLEM.

Even if one has a solution to the measure problem, a second problem arises. It is expected that for any reasonable T and any reasonable solution to the measure problem, the probabilities for what we will see will be small. For in eternal inflation, generically, much of the multiverse is likely to not resemble our domain. Should we thus conclude that all models of eternal inflation are disconfirmed? We might instead propose that we should restrict attention to domains (or more generally: regions) of the multiverse where we can exist. That is, we should conditionalize: we should excise part of the sample space and renormalize the probability distribution and then compare the resulting distribution with our observations. This process of conditionalization can be performed in three main ways (see Aguirre and Tegmark 2005):

(i) we can perform no conditionalization at all – known as the “bottom-up” approach;
(ii) we can try to characterize our observational situation by selecting observables in our models that we believe are relevant to our existence (and hence for any future observations and experiments that we may perform) – known as the “anthropic approach”;
(iii) we can try to fix the values of each of the observables in our models, except for the observable we are aiming to predict the value of – known as the “top-down” approach.
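To make the contrast vivid, here is a schematic sketch (ours, with a made-up distribution and a made-up anthropic criterion, so purely pedagogical) of how options (i) and (ii) can yield different expectations:

```python
# A schematic contrast (ours) between "bottom-up" and "anthropic"
# conditionalization on a toy distribution. Domains carry an observable eta
# and a flag for whether observers can exist there; the anthropic approach
# renormalizes the distribution over observer-permitting domains only.
import numpy as np

rng = np.random.default_rng(1)
eta = rng.exponential(scale=1.0, size=100_000)   # toy observable, one per domain
observers = eta > 3.0                            # toy anthropic criterion (assumed)

# Bottom-up: no conditionalization -- use the full distribution.
print("bottom-up  P(eta > 5)             =", np.mean(eta > 5.0))

# Anthropic: excise the observer-free part of the sample space, renormalize.
print("anthropic  P(eta > 5 | observers) =", np.mean(eta[observers] > 5.0))
```

The top-down approach, (iii), would instead fix all observables except the target one; implementing that realistically is exactly the difficulty noted below.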


As one might expect, there are deep, unresolved issues, both technical and conceptual, for both (ii) and (iii). For the anthropic approach, (ii), one faces the difficult question of how precisely to characterize our observational situation and so which values of which observables we should conditionalize on. In the top-down approach, (iii), it is unclear how we can perform this type of conditionalization in a practical way. And on both approaches, the observable whose value we aim to predict and the conditionalization scheme are expected to be appropriately separated from one another – but it is not clear how to achieve this (see Garriga and Vilenkin [2008: section III] and Hartle and Hertog [2013] for discussions).

3.2.1.C: THE TYPICALITY PROBLEM.

Even if one has solved (at least to one’s own satisfaction!) both the measure and conditionalization problems, a third problem remains: the typicality problem. Namely: for any appropriately conditionalized probability distribution, how typical should we expect our observational situation to be, amongst the domains/regions to which the renormalized probability distribution is now restricted? In other words: how much “away from the peak and under the tails” can our observations be without our taking them to disconfirm our model? In the next section, we will be more precise about what we mean by “typicality”; but for now, the intuitive notion will suffice.

Current discussions follow one of two distinct approaches. One approach asserts that we should always assume typicality with respect to an appropriately conditionalized distribution. This means we should assume Vilenkin’s “principle of mediocrity” or something akin to it (see, for example, Vilenkin [1995]; Bostrom [2002]; and Garriga and Vilenkin [2008]). The other approach asserts that the assumption of typicality is just that – an assumption – and is thus subject to error. So one should allow for a spectrum of possible assumptions about typicality; and in aiming to confirm a model of eternal inflation, one tests the typicality assumption in conjunction with the measure and conditionalization scheme under investigation. This idea was developed within a Bayesian context most clearly by Srednicki and Hartle (see Srednicki and Hartle 2010; Hartle and Srednicki 2007), and it is to a discussion of their proposal that we now turn.

3.2.2 The Srednicki-Hartle proposal: frameworks

Srednicki and Hartle (2010) argue that in what they call a “large universe” (i.e. a multiverse), we test what they dub a framework: that is, a conjunction of four items: a cosmological (multiverse) model T, a measure, a conditionalization scheme, and an assumption about typicality. (Agreed, the model T might specify the measure; but it can hardly be expected to specify the conditionalization scheme and typicality assumption.) If such a framework does not correctly predict our observations, we have license to change any one of its conjuncts10 and to then compare the distribution derived from the new framework with our observations. One could, in principle, compare frameworks by comparing probabilities for our observations against one another (i.e. by comparing likelihoods); or one can formulate the issue of framework confirmation in a Bayesian setting (as Srednicki and Hartle do).

To be more precise, let us assume that a multiverse model T includes a prescription for computing a measure, that is, includes a proposed solution to the measure problem: this will simplify the notation. So given such a model T, a conditionalization scheme C, and a typicality assumption, which for the moment we will refer to abstractly as ξ: we aim to compute a probability distribution


for the value $D_0$ of some observable. We write this as $P(D_0 \mid T, C, \xi)$. Bayesian confirmation of frameworks then proceeds by comparing posterior distributions $P(T, C, \xi \mid D_0)$, where

$$P(T, C, \xi \mid D_0) = \frac{P(D_0 \mid T, C, \xi)\, P(T, C, \xi)}{\sum_{\{T', C', \xi'\}} P(D_0 \mid T', C', \xi')\, P(T', C', \xi')}, \qquad (3.1)$$

and $P(T, C, \xi)$ is a prior over the framework $\{T, C, \xi\}$.

How then do we implement typicality assumptions? Srednicki and Hartle (2010) develop a method for doing so by assuming there are a finite number N of locations where our observational situation obtains. Assumptions about typicality are made through “xerographic distributions”, that is, various ξs, which are probability distributions encoding our beliefs about at which of these N locations we exist. So if there are spacetime locations $x_A$, with A = 1, 2, . . ., N, where our observational situation obtains, the xerographic distribution ξ is a set of N numbers $\xi \equiv \{\xi_A\}_{A=1}^{N}$, such that $\sum_{A=1}^{N} \xi_A = 1$. Thus typicality is naturally thought of as the uniform distribution, that is, $\xi_A = 1/N$. Assumptions about various forms of atypicality correspond to deviations from the uniform distribution. Likelihoods $P(D_0 \mid T, C, \xi)$ in Eq. (3.1) can then be computed via

$$P(D_0 \mid T, C, \xi) = \sum_{A=1}^{N} \xi_A\, P(D_0[A] \mid T, C),$$

where $D_0[A]$ denotes that $D_0$ occurs at location A. (Admittedly, this equation is somewhat schematic: see Srednicki and Hartle (2010: Appendix B) and Azhar (2015: sections II A and III) for more detailed implementations.)

In this way, different assumptions about typicality, expressed as different choices of ξ, can be compared against one another for their predictive value. Azhar (2015) undertakes the task of explicitly comparing different assumptions about typicality in simplified multiverse cosmological settings. He shows that for a fixed model, the assumption of typicality, that is, a uniform xerographic distribution (with respect to a particular reference class), achieves the maximum likelihood for the data $D_0$ considered. But typicality does not necessarily lead to the highest likelihoods for our data if one allows different models to compete against each other.

This conclusion is particularly interesting when, for some model, the assumption of typicality is not the most natural assumption. Hartle and Srednicki (2007: section II) make the point with an amusing parable. If we had a model according to which there were vastly more sentient Jovians than Earthlings, we would surely be wrong, ceteris paribus, to regard the natural corresponding framework as disconfirmed merely by the fact that according to it, we are not typical observers. That is: it is perfectly legitimate for a model to suggest a framework that describes us as atypical. And if we are presented with such a model, we should not demand that we instead accept another model under which typicality is a reasonable assumption. Indeed, demanding this may lead us to choose a framework that is less well confirmed in Bayesian terms.11
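To make Eq. (3.1) and the xerographic distributions concrete, here is a toy numerical sketch (entirely ours, with invented numbers) comparing two frameworks that share a model and conditionalization scheme but differ over ξ:

```python
# Toy framework comparison via Eq. (3.1) (our sketch; all numbers invented).
# Two frameworks share T and C but differ in the xerographic distribution xi
# over N = 3 locations where our observational situation obtains.
import numpy as np

# P(D0[A] | T, C): probability that our data occur at location A (invented).
p_D0_at = np.array([0.02, 0.10, 0.60])
N = len(p_D0_at)

frameworks = {
    "typicality (uniform xi)":       np.full(N, 1.0 / N),
    "atypicality (xi peaked on A=1)": np.array([0.8, 0.1, 0.1]),
}
prior = {name: 0.5 for name in frameworks}   # equal priors, by assumption

# Likelihood of a framework: P(D0 | T, C, xi) = sum_A xi_A * P(D0[A] | T, C).
likelihood = {name: float(xi @ p_D0_at) for name, xi in frameworks.items()}

# Posterior via Eq. (3.1): normalize likelihood * prior over the frameworks.
evidence = sum(likelihood[n] * prior[n] for n in frameworks)
for name in frameworks:
    post = likelihood[name] * prior[name] / evidence
    print(f"{name}: likelihood={likelihood[name]:.3f}, posterior={post:.3f}")
```

With these invented numbers the data are most likely at location A = 3, so the uniform ξ wins; make the data most likely at A = 1 instead, and the “atypical” framework would come out better confirmed, which is the moral of the Jovian parable.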

3.3 Different frameworks, same prediction

As an example of how the proposal described in section 3.2.2 can lead to further interesting insights, we describe recent work that investigates simplified frameworks in toy models of a multiverse. The results will again illustrate our main philosophical theme: the under-determination of theory by data.12

Aguirre and Tegmark (2005) considered a multiverse scenario in which the total number of species of dark matter can vary from one domain to another. They assumed the occurrence of the different species to be probabilistically independent of one another and then investigated how different conditionalization schemes can change the prediction of the total number of dominant

species (where two species are called “dominant” when they have comparable densities, each of which is much greater than the other species’ densities). Azhar (2016) extended this analysis by considering (i) probabilistically correlated species of dark matter and (ii) how this prediction varies when one makes various assumptions about our typicality in the context of various conditionalization schemes. We will thus conclude this section by outlining one example of this sort of analysis and highlighting the conclusions about under-determination thus obtained.

In the notation of section 3.2.1: assume that T is some multiverse model (with an associated measure), which predicts a total of N distinct species of dark matter. We assume that from this theory we can derive a joint probability distribution $P(\eta_1, \eta_2, \ldots, \eta_N \mid T)$ over the densities of different dark matter species, where the density for species i, denoted by $\eta_i$, is given in terms of a dimensionless dark matter-to-baryon ratio $\eta_i := \Omega_i/\Omega_b$. We observe the total density of dark matter $\eta := \sum_{i=1}^{N} \eta_i$ (where $\eta_{\mathrm{obs}}$ will refer to the observed value of η). In fact, according to results recently released by the Planck collaboration, $\eta_{\mathrm{obs}} \approx 5$ (Ade et al. 2016).

In Azhar (2016), some simple probability distributions $P(\eta_1, \eta_2, \ldots, \eta_N \mid T)$ are postulated, from which considerations of conditionalization and typicality allow one to extract predictions about the total number of dominant species of dark matter. The conditionalization schemes studied in Azhar (2016) (and indeed in Aguirre and Tegmark 2005) include examples of the “bottom-up”, “anthropic”, and “top-down” categories discussed in section 3.2.1.13 And considerations of typicality are implemented in a straightforward way, by taking typical parameter values to be those that correspond to the peak of the (appropriately conditionalized) probability distribution.14

Here is one example of the rudiments of this construction. Top-down conditionalization corresponds to analyzing probability distributions $P(\eta_1, \eta_2, \ldots, \eta_N \mid T)$ along the constraint surface defined by our data, that is, $\eta := \sum_{i=1}^{N} \eta_i \approx 5$. The number of species of dark matter that contribute significantly to the total dark matter density, under the assumption of typicality, is then the number of significant components that lie under the peak of this distribution on the constraint surface.

Azhar (2016) shows that different frameworks can lead to precisely the same prediction. In particular, the prediction that dark matter consists of a single dominant component can be achieved through various (mutually exclusive) frameworks. Besides, the case of multiple dominant components does not fare any better. In this sense, any future observation that establishes the total number of dominant species of dark matter will under-determine which framework could have given rise to the observation.

The moral of this analysis is thus that if, in more realistic cosmological settings, this under-determination is robust to the choice of which observable we aim to predict the value of (as one would expect), then we must accept that our observations will simply not be able to confirm any single framework for the inflationary multiverse.
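For a feel of how such a computation goes, here is a toy version (ours; the joint distribution, the number of species, and all thresholds are invented) of top-down conditionalization with typicality read off from the peak:

```python
# Toy version (ours) of the dark-matter example: condition a postulated joint
# distribution over species densities on the constraint surface
# sum_i eta_i ~ eta_obs = 5, take "typicality" as the peak of the
# conditionalized distribution, and count dominant species at the peak.
import numpy as np

rng = np.random.default_rng(2)
N = 3                                   # number of dark matter species (assumed)
# Toy joint P(eta_1, ..., eta_N | T): independent lognormals here; Azhar (2016)
# also studies correlated species.
samples = rng.lognormal(mean=0.0, sigma=1.0, size=(1_000_000, N))

# Top-down conditionalization: keep only samples near the constraint surface.
total = samples.sum(axis=1)
on_surface = samples[np.abs(total - 5.0) < 0.05]

# "Typicality": the peak of the conditionalized distribution, here crudely
# approximated by the componentwise median of the surviving samples.
peak = np.median(on_surface, axis=0)

# Count the dominant species at the peak (a crude comparable-density threshold).
dominant = peak > 0.5 * peak.max()
print("peak composition:", np.round(peak, 2))
print("number of dominant species:", int(dominant.sum()))
```

Rerunning such a toy with different correlations, conditionalization schemes, or typicality assumptions and finding the same dominant-species count is precisely the kind of under-determination Azhar (2016) exhibits.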

4 Envoi

So much by way of surveying how cosmology, especially primordial cosmology, bears on scientific realism. Let us end by very briefly summarizing our position. We have espoused scientific realism as a modest thesis of cognitive optimism: that we can know about the unobservable and that indeed we do know a lot about it. Cosmology causes no special trouble for this thesis. Indeed, the thesis is well illustrated, we submit, by countless results of modern cosmology: astonishing though these results are as regards the vast scales of distance, time, or other quantities, such as temperature, energy, and density, that they involve (cf. Azhar and Butterfield [2016: section 3]). Of course, probing ever more extreme regimes of distance, time, or these other quantities tends to call for more inventive and diverse techniques as regards theory as well as instrumentation.


So it is unsurprising that probing the very early universe, that is, earlier than $10^{-11}$ seconds after the Big Bang, involves intractable cases of under-determination of theory by data. We discussed precise forms of this predicament within inflationary cosmology: problems about ascertaining the details of the inflaton field, for example its potential (section 2), and problems about confirming a multiverse theory (section 3). But as we said in section 1.1, we do not see these cases of under-determination as threatening the modest cognitive optimism of scientific realism. They amount just to “epistemic modesty”. They are compatible with the characteristic “epistemic optimism” of scientific realism – that much is already known, and yet more can be: an upbeat note on which to end.

Acknowledgements

FA’s work is supported by the Wittgenstein Studentship in Philosophy at Trinity College, Cambridge. FA thanks Jim Hartle, Mark Srednicki, and Dean Rickles for conversations about some of the work reported in section 3. For comments on a previous version, we thank: Anthony Aguirre, Bernard Carr, Erik Curiel, Richard Dawid, George Ellis, Michaela Massimi, Casey McCoy, John Norton, Martin Rees, Svend Rugh, Tom Ryckman, David Sloan, Chris Smeenk, Alex Vilenkin, and Henrik Zinkernagel; and not least, the editor Juha Saatsi.

Notes

1 This second reason is a bold claim which obviously needs to be justified, even though we have ducked out of attempting a general defense of scientific realism for modern cosmology. We cannot do so here – but we do so in a longer, online version of this chapter (Azhar and Butterfield 2016: section 3).
2 We should register at the outset that we set aside several other issues, which we list in Azhar and Butterfield (2016: section 1).
3 The CMB, though very homogeneous and isotropic, has tiny irregularities whose structure (especially how their size depends on angle) gives detailed evidence about the interplay between gravitation tending to clump matter, and radiation pressure opposing the clumping. This interplay links directly to parameters describing the universe as a whole, such as whether the average density is large enough for gravitational attraction to eventually overcome the expansion.
4 Guth (1981) is a seminal paper. But we stress at the outset that inflation remains a speculation and that there are various respectable alternatives. For maestri being skeptical about inflation, we recommend: Ellis (1999: 706–707, 1999a: A59, A64–65, 2007: section 5, 1232–1234); Earman (1995: 149–159); Earman and Mosterín (1999); Hollands and Wald (2002); Penrose (2004: 735–757, chapters 28.1–28.5); and Ijjas, Steinhardt, and Loeb (2013); though we of course also recommend replies, such as Guth, Kaiser, and Nomura (2014). For reviews of some alternatives, such as string gas cosmology, cf. e.g. Brandenberger (2013), or, aimed at philosophers, Brandenberger (2014).
5 Thus Weinberg says: “So far, the details of inflation are unknown, and the whole idea of inflation remains a speculation, though one that is increasingly plausible” (2008: 202).
6 Agreed, one might object that assuming homogeneity undercuts the claim to have solved the horizon etc. problems. But in reply: there is active research to ascertain how much homogeneity is needed to secure inflation (see Brandenberger [2017] and references therein); and anyway, we aim in this chapter only to lay out the main ideas.
7 After about 2000, this type of eternal inflation gained renewed interest in light of the idea of a string landscape, in which there exist multiple such metastable vacua (see, for example, Susskind [2003] and references therein).
8 We assume, therefore, that the multiverse consists of many (possibly an infinite number of) domains, each of whose inhabitants (if any) cannot directly observe or otherwise causally interact with other domains.
9 One imagines that the domains are defined as subsets of a single eternally inflating (classical) spacetime: cf. Vanchurin (2015).
10 Philosophers of science will of course recognize this as an example of the Duhem-Quine thesis. See D. Tulodziecki, “Underdetermination”, ch. 5 of this volume.


11 Aficionados of the threat of “Boltzmann brains” in discussions of the cosmological aspects of foundations of thermal physics will recognize the logic of the Jovian parable. We cannot here go into details of the analogy and the debates about Boltzmann brains. Suffice it to say that the main point is: a model that implies the existence of many such brains may be implausible, or disconfirmed, or have many other defects; but it is not to be rejected just because it implies that we – happily embodied and enjoying normal lives! – are not typical. In other words: it takes more work to rebut some story that there are zillions of creatures who are very different from me in most respects but nevertheless feel like me (“share my observations/experiences”) than just the thought “if it were so, how come I am not one of them?” For the story might also contain a good account of how and why, though I feel like them, I am otherwise so different.
12 This section is based on Azhar (2014, 2016), which both build upon the work of Aguirre and Tegmark (2005). The scheme we use does not explicitly address the measure problem; we simply assume there is some solution, so that probability distributions over observables of interest can indeed be specified. For more details about the material in sections 3.2 and 3.3, cf. Azhar (2017).
13 To be more precise: in the bottom-up and top-down cases, the total number N of species is fixed by assumption, and one looks for the total number of dominant species, while in the anthropic case, N is allowed to vary, and one looks for the total number N of equally contributing components.
14 The issue of how this characterization of typicality relates to xerographic distributions is subtle and beyond the scope of our discussion here.

References

Ade, P.A.R. et al. [Planck Collaboration] (2016) “Planck 2015 Results. XX. Constraints on Inflation,” Astronomy and Astrophysics 594, A20.
Aguirre, A. (2007a) “Eternal Inflation, Past and Future,” arXiv:0712.0571 [hep-th].
——— (2007b) “Making Predictions in a Multiverse: Conundrums, Dangers, Coincidences,” in B. Carr (ed.), Universe or Multiverse? Cambridge: Cambridge University Press, pp. 367–386.
Aguirre, A. and Tegmark, M. (2005) “Multiple Universes, Cosmic Coincidences, and Other Dark Matters,” Journal of Cosmology and Astroparticle Physics 1(2005), 003.
Azhar, F. (2014) “Prediction and Typicality in Multiverse Cosmology,” Classical and Quantum Gravity 31, 035005.
——— (2015) “Testing Typicality in Multiverse Cosmology,” Physical Review D 91, 103534.
——— (2016) “Spectra of Conditionalization and Typicality in the Multiverse,” Physical Review D 93, 043506.
——— (2017) “Three Aspects of Typicality in Multiverse Cosmology,” in M. Massimi, J.-W. Romeijn and G. Schurz (eds.), EPSA15 Selected Papers (European Studies in Philosophy of Science, vol. 5), Cham, Switzerland: Springer, pp. 125–136.
Azhar, F. and Butterfield, J. (2016) “Scientific Realism and Primordial Cosmology,” a longer version of this paper online at http://philsci-archive.pitt.edu/12192 and arXiv:1606.04071 [physics.hist-ph].
Barrow, J. D. and Tipler, F. J. (1988) The Anthropic Cosmological Principle, Oxford: Oxford University Press (First published 1986).
Baumann, D. and Peiris, H. V. (2009) “Cosmological Inflation: Theory and Observations,” Advanced Science Letters 2, 105–120.
Bostrom, N. (2002) Anthropic Bias: Observation Selection Effects in Science and Philosophy, New York: Routledge.
Brandenberger, R. H. (2013) “Unconventional Cosmology,” in G. Calcagni, L. Papantonopoulos, G. Siopsis and N. Tsarnis (eds.), Quantum Gravity and Quantum Cosmology (Lecture Notes in Physics, 863), Berlin: Springer, pp. 333–374.
——— (2014) “Do We Have a Theory of Early Universe Cosmology?” Studies in History and Philosophy of Modern Physics 46, 109–121.
——— (2017) “Initial Conditions for Inflation – a Short Review,” International Journal of Modern Physics D 26, 1740002.
Butterfield, J. (2014) “On Under-Determination in Cosmology,” Studies in History and Philosophy of Modern Physics 46, 57–69.
Carr, B. (ed.) (2007) Universe or Multiverse? Cambridge: Cambridge University Press.
Carroll, S. (2014, forthcoming) “In What Sense Is the Early Universe Fine-Tuned?” in B. Loewer, E. Winsberg and B. Weslake (eds.), Time’s Arrows and the Probability Structure of the World, Cambridge, MA: Harvard University Press; arXiv:1406.3057 [astro-ph.CO].


Chakravartty, A. (2011) “Scientific Realism,” in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. URL: http://plato.stanford.edu/entries/scientific-realism
Coleman, S. and De Luccia, F. (1980) “Gravitational Effects on and of Vacuum Decay,” Physical Review D 21, 3305–3315.
Creminelli, P., Dubovsky, S., Nicolis, A., Senatore, L. and Zaldarriaga, M. (2008) “The Phase Transition to Eternal Inflation,” Journal of High Energy Physics 9(2008), 036.
Davies, P.C.W. (1982) The Accidental Universe, Cambridge: Cambridge University Press.
Earman, J. (1987) “The SAP Also Rises: A Critical Examination of the Anthropic Principle,” American Philosophical Quarterly 24, 307–317.
——— (1995) Bangs, Crunches, Whimpers, and Shrieks: Singularities and Acausalities in Relativistic Spacetimes, New York: Oxford University Press.
Earman, J. and Mosterín, J. (1999) “A Critical Look at Inflationary Cosmology,” Philosophy of Science 66, 1–49.
Ellis, G.F.R. (1999) “Before the Beginning: Emerging Questions and Uncertainties,” Astrophysics and Space Science 269–270, 693–720.
——— (1999a) “83 Years of General Relativity and Cosmology: Progress and Problems,” Classical and Quantum Gravity 16, A37–A75.
——— (2007) “Issues in the Philosophy of Cosmology,” in Part B of J. Butterfield and J. Earman (eds.), Philosophy of Physics (Part B, Vol. 2 of the North Holland Series, Handbook of the Philosophy of Science), Amsterdam: Elsevier, pp. 1183–1285.
Feeney, S. M., Johnson, M. C., Mortlock, D. J. and Peiris, H. V. (2011) “First Observational Tests of Eternal Inflation,” Physical Review Letters 107, 071301.
Fraassen, B. C. van (1980) The Scientific Image, Oxford: Clarendon Press.
Freivogel, B. (2011) “Making Predictions in the Multiverse,” Classical and Quantum Gravity 28, 204007.
Garriga, J. and Vilenkin, A. (2008) “Prediction and Explanation in the Multiverse,” Physical Review D 77, 043526.
Guth, A. H. (1981) “Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems,” Physical Review D 23, 347–356.
——— (2007) “Eternal Inflation and Its Implications,” Journal of Physics A: Mathematical and Theoretical 40, 6811–6826.
Guth, A. H., Kaiser, D. I. and Nomura, Y. (2014) “Inflationary Paradigm after Planck 2013,” Physics Letters B 733, 112–119.
Guth, A. H. and Weinberg, E. J. (1983) “Could the Universe Have Recovered from a Slow First-Order Phase Transition?” Nuclear Physics B 212, 321–364.
Hartle, J. and Hertog, T. (2013) “Anthropic Bounds on Λ from the No-Boundary Quantum State,” Physical Review D 88, 123516.
Hartle, J. B. and Srednicki, M. (2007) “Are We Typical?” Physical Review D 75, 123523.
Hollands, S. and Wald, R. (2002) “Essay: An Alternative to Inflation,” General Relativity and Gravitation 34, 2043–2055.
Hu, W. and Dodelson, S. (2002) “Cosmic Microwave Background Anisotropies,” Annual Review of Astronomy and Astrophysics 40, 171–216.
Ijjas, A., Steinhardt, P. J. and Loeb, A. (2013) “Inflationary Paradigm in Trouble after Planck 2013,” Physics Letters B 723, 261–266.
Liddle, A. (2003) An Introduction to Modern Cosmology (2nd ed.), Chichester: John Wiley.
Linde, A. D. (1983) “The New Inflationary Universe Scenario,” in G. W. Gibbons, S. W. Hawking and S.T.C. Siklos (eds.), The Very Early Universe, Proceedings of the Nuffield Workshop, Cambridge, 21 June to 9 July, 1982, Cambridge: Cambridge University Press, pp. 205–249.
——— (1986a) “Eternal Chaotic Inflation,” Modern Physics Letters A 1, 81–85.
——— (1986b) “Eternally Existing Self-Reproducing Chaotic Inflationary Universe,” Physics Letters B 175, 395–400.
——— (2017) “A Brief History of the Multiverse,” Reports on Progress in Physics 80, 022001.
Longair, M. (2003) Theoretical Concepts in Physics: An Alternative View of Theoretical Reasoning in Physics (2nd ed.), Cambridge: Cambridge University Press (First Edition 1984).
——— (2006) The Cosmic Century: A History of Astrophysics and Cosmology, Cambridge: Cambridge University Press.
Martin, J., Ringeval, C., Trotta, R. and Vennin, V. (2014) “The Best Inflationary Models after Planck,” Journal of Cosmology and Astroparticle Physics 3(2014), 039.
Martin, J., Ringeval, C. and Vennin, V. (2014) “Encyclopædia Inflationaris,” Physics of the Dark Universe 5–6, 75–235.

319 Feraz Azhar and Jeremy Butterfield

McCoy, C. D. (2015) “Does Inflation Solve the Hot Big Bang Model’s Fine-Tuning Problems?” Studies in History and Philosophy of Modern Physics 51, 23–36.
McMullin, E. (1993) “Indifference Principle and Anthropic Principle in Cosmology,” Studies in History and Philosophy of Science 24, 359–389.
Misner, C. W., Thorne, K. S. and Wheeler, J. A. (1973) Gravitation, San Francisco: W. H. Freeman.
Mosterín, J. (2005) “Anthropic Explanations in Cosmology,” in P. Hájek, L. Valdés-Villanueva and D. Westerståhl (eds.), Logic, Methodology and Philosophy of Science: Proceedings of the Twelfth International Congress, London: King’s College Publications, pp. 441–471.
Pasachoff, J. M., Spinrad, H., Osmer, P. S. and Cheng, E. S. (1994) The Farthest Things in the Universe, Cambridge: Cambridge University Press.
Penrose, R. (2004) The Road to Reality: A Complete Guide to the Laws of the Universe, London: Jonathan Cape.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
——— (2009) Knowing the Structure of Nature: Essays on Realism and Explanation, Basingstoke: Palgrave-Macmillan.
Rees, M. (1997) Before the Beginning: Our Universe and Others, New York: Simon and Schuster.
Rowan-Robinson, M. (2004) Cosmology (4th ed.), Oxford: Oxford University Press.
Sekino, Y., Shenker, S. and Susskind, L. (2010) “Topological Phases of Eternal Inflation,” Physical Review D 81, 123515.
Serjeant, S. (2010) Observational Cosmology, Cambridge: Cambridge University Press.
Sklar, L. (1975) “Methodological Conservatism,” The Philosophical Review 84, 374–400.
Smeenk, C. (2013) “Philosophy of Cosmology,” in R. Batterman (ed.), The Oxford Handbook of Philosophy of Physics, Oxford: Oxford University Press, pp. 607–652.
——— (2014) “Predictability Crisis in Early Universe Cosmology,” Studies in History and Philosophy of Modern Physics 46, 122–133.
Srednicki, M. and Hartle, J. (2010) “Science in a Very Large Universe,” Physical Review D 81, 123524.
Stanford, P. K. (2006) Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, New York: Oxford University Press.
Steinhardt, P. J. (1983) “Natural Inflation,” in G. W. Gibbons, S. W. Hawking and S.T.C. Siklos (eds.), The Very Early Universe, Proceedings of the Nuffield Workshop, Cambridge, 21 June to 9 July, 1982, Cambridge: Cambridge University Press, pp. 251–266.
Susskind, L. (2003) “The Anthropic Landscape of String Theory,” in B. Carr (ed.), Universe or Multiverse? Cambridge: Cambridge University Press; arXiv:hep-th/0302219, pp. 247–266.
Vanchurin, V. (2015) “Continuum of Discrete Trajectories in Eternal Inflation,” Physical Review D 91, 023511.
Vilenkin, A. (1983) “Birth of Inflationary Universes,” Physical Review D 27, 2848–2855.
——— (1995) “Predictions from Quantum Cosmology,” Physical Review Letters 74, 846–849.
Weinberg, S. (1972) Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, New York: John Wiley.
——— (2008) Cosmology, Oxford: Oxford University Press.

25 THREE KINDS OF REALISM ABOUT HISTORICAL SCIENCE

Derek Turner

1 Introduction: three issues for historical realism

The German paleontologist Adolf Seilacher once wrote that “ammonoids are for paleontologists what Drosophila is in genetics” (Seilacher 1988: 67). The ammonoids are therefore a good place to begin thinking about realism in historical science. The fossil record for ammonoids is excellent because their coiled shells fossilized readily, and it tells a detailed story about evolutionary history. Although their soft parts rarely fossilized, they were cephalopods and probably had beaks and tentacles similar to modern squids; the chambered nautilus is a distant living relative. The ammonoids were very abundant in late Paleozoic and Mesozoic seas, and the group as a whole persisted for some 300 million years, eventually going extinct with the non-avian dinosaurs 66 million years ago. Ammonoid paleobiology raises three different issues for scientific realism: whether we should be realists about the ammonoids themselves; whether we should be realists about trends inferred from the ammonoid fossil record; and whether we should be realists about the underlying evolutionary tendencies that scientists have posited in order to explain the trends.

Realism about concrete things, events, and processes in the past seems natural and intuitive. Although we can’t observe past entities (like ammonoids), one might reason that at least they were the same sorts of things that we can observe today. It seems important that certain sorts of counterfactuals are true: if there were ammonoids alive today, we could catch one and touch it. The past, even the deep geological past, does not confront us with the sorts of bizarre, mind-bending results that have long motivated varieties of instrumentalism and antirealism in physics. For this reason alone, one might think that realism about historical science is a somewhat easier sell than realism about fundamental physics. Indeed, some have suggested that scientific realists might have a relatively easier time defending their view when it comes to historical science (Carman 2005; Stanford 2011). Nevertheless, some philosophers (including this one) have seriously entertained non-realist views about historical science. Section 2 provides an overview of the philosophical discussion of realism about concrete entities and events of the deep past – that is, about things like ammonoids. Even in this relatively easy case, scientific realism is not mandatory, though it does have a strong intuitive pull.

Strange as it may sound, if we start by asking whether we should be realists about ammonoids, we are liable to miss some of the most important aspects of historical scientific practice.1 When paleontologists study ammonoids, we can think of them as positing an unobservable entity – an

organism that lived 100 million years ago or more – in order to explain an observable item, such as a fossil. There is something right about that construal, but it also misses what the scientists themselves care most about, which is documenting and explaining patterns and trends. At least since the 1970s, when Jack Sepkoski pioneered the use of large databases to study patterns in the fossil record, this is where much of the action has been in paleontology (Sepkoski 1978, 1979; Ruse 1999: chapter 11).

During the Mesozoic, there was a noticeable trend in the ammonoids toward greater ornamentation. Early on, many ammonoids had smooth and rather boring shells. Later, more and more of them had shells with more pronounced ribs and keels, as well as spines and nodules (Monnet, Klug, and De Baets 2015). Scientists often care more about these sorts of historical trends than about any particular prehistoric organism or event. Unfortunately, when we’re talking about statistical phenomena such as trends, it is by no means clear what it would even mean to be a realist.

Next, consider the inference from pattern to process. Ward (1981) argued that the trend toward greater ornamentation in ammonoids was adaptive. The Mesozoic saw the evolution of a variety of predators, such as teleost fish, whose basic approach to finding food involved crushing ammonoids. So if the added ornamentation made it tougher for predators to crush the shells, natural selection could have driven the trend. Another possibility is that the trend was passive (Gould 1996; Turner 2011: chapter 7). If you start out with shells that are smooth and boring, then even if changes in shell sculpture are random, there is nowhere for evolution to go except in the direction of greater ornamentation. Determining whether the trend is passive versus driven tells us a lot about the process that produced it (McShea 1994). Another possibility is that the trend is driven but not by natural selection (McShea 2005; McShea and Brandon 2010). Increasing ornamentation in Mesozoic ammonoids is in fact a lovely example of increasing structural complexity. McShea and Brandon (2010) propose to explain this in terms of an underlying tendency toward complexity increase, which they call the ZFEL (short for “zero-force evolutionary law”). What if it is just a deep fact about biological systems that their structural complexity always tends to increase? A realist view about such evolutionary tendencies is easy to articulate but difficult to defend.

Thus, realism about ammonoids (or about the science of ammonoids) is not mandatory, even though it might seem like the easy case (section 2). We do not even have a clear idea what it would mean to be a realist about patterns or trends in ammonoid evolution (section 3). And realism about underlying historical tendencies proves difficult to defend (section 4). When we shift the focus from concrete historical posits to historical trends and tendencies, the outlook for realism in historical science is worse than it might have seemed at first. Philosophers with realist leanings who wish to resist this conclusion can read this chapter as offering a road map for the defense of realism about the past: first, try to eliminate the non-realist views about the concrete posits of historical science; next, try to say what it would mean to be a realist about trends and patterns; finally, think about how to build a case for a realist view of historical tendencies.
It is probably a good idea not to make too much of the term “historical science.” We can just take “historical science” to mean any attempt to use scientific methods, construed in the broadest possible way, to understand the past. Thus, historical science is an important part of archaeology, cosmology, evolutionary biology, geology, and paleontology. Some philosophers, such as Cleland (2002, 2011), have tried to go further than this rather boring characterization and to suggest that historical science (or at least, paradigmatic historical science) has a distinctive method that sets it apart from experimental science. However, there are important kinds of research in historical science that do not fit Cleland’s description very well (Turner 2009). It might be fairer to say

that scientists are “methodological omnivores” when they study the past, using whatever techniques or approaches seem most likely to gain empirical traction (Currie 2015). There are many different ways of doing historical research, and some of those overlap considerably with other kinds of science that are not obviously historical. For these reasons, it may not be too helpful to try to demarcate historical from other kinds of science, but we also do not need a sharp line of demarcation in order to pose interesting questions about historical science (Currie and Turner 2016). That includes questions about realism.

2 Do we have to be realists about ammonoids?

One example of a philosopher who has explored in great detail what it would mean to give up on realism about the past is Michael Dummett (2005). Dummett is concerned with the meaning of statements about the past, and he explores the consequences of a justificationist (antirealist) semantic theory. On such a proposal, facts about the past turn out to be relative to the present and future evidence. So the version of antirealism that Dummett considers seems to have the consequence that there is no fact of the matter about whether it rained on a given spot 3,000 years ago, since that sort of claim is underdetermined. Though he does not engage with the practice of historical science, Dummett’s work shows, if nothing else, that semantic realism about the past is philosophically optional.

In some of my own earlier writing (Turner 2004, 2007), I also explored an idea that I called the asymmetry of manipulability. Some, though by no means all, arguments for scientific realism (or at least, some versions of it) depend on the claim that we can manipulate unobservables even if we cannot see them (Hacking 1983; Boyd 1983, 1990). Scientists can and do manipulate unobservable subatomic particles, as for example when smashing particles together in a supercollider. But we obviously cannot manipulate the past experimentally. This asymmetry between the unobservables of historical science and those of experimental science suggests that some central arguments for scientific realism will have less force in historical contexts. This asymmetry of manipulability does not completely undermine realism about the past, but it does raise the possibility that the overall case for realism about the past might be considerably weaker than many philosophers realize. Moreover, anyone who thinks that direct experimental manipulation confers some sort of epistemic advantage would seem perforce committed to the view that historical science operates at a disadvantage.

What would a non-realist or antirealist view of historical science look like? Some sort of social constructivist view is not entirely implausible. Historians of paleontology have pointed out some of the ways in which our understanding of evolutionary history can interact with going social and political concerns. For example, David Sepkoski has suggested (personal communication) that there have been important feedbacks between paleontological research on prehistoric mass extinction events and broader social issues, ranging from Cold War–era fears of a nuclear winter to the current biodiversity crisis. Of course, social constructivists often go beyond such observations to claim that things in the world somehow depend on us, our values, theories, and conceptual schemes. Social constructivist views about historical science remain relatively under-explored, though see Parsons (2001) for discussion of constructivism in connection with dinosaur science.

There are other nonrealist views that fall short of social constructivism. One example is Arthur Fine’s natural ontological attitude (or NOA), which just takes what scientists say about what’s real at face value while abstaining from further philosophical theory (Fine 1984).2 Fine’s view applies quite smoothly to historical science (Turner 2007: chapter 7). The proponent of NOA – or what one might call the “natural historical attitude” – does not exactly deny what

the realist asserts. The thought, rather, is just that realism is more philosophy than we really need. Realists about historical science invariably make claims about metaphysics (e.g., about the mind and theory independence of the past) or about the nature of truth that do not obviously contribute much to our understanding of the practice of historical science. From this quietist perspective, social constructivists make the same mistake that realists do, for they make rival claims about mind dependence, the nature of truth, and so forth that do little to help us understand the science. Both realists and constructivists will endorse what the scientists actually say, but they offer different glosses, or different philosophical add-ons. In response to this quietist challenge, a realist about historical science, like any other kind of realist, needs to offer a defense at the meta-level of the kind of philosophy that the scientific realist is doing.

One central issue of the realism debate that has received a lot of attention from philosophers interested in historical science is underdetermination. Is underdetermination an especially serious problem in historical science? It may be fair to say that there is a consensus that the degree of underdetermination in historical science is basically an empirical issue, because it’s an empirical issue just how much information historical processes preserve about the past. Some of the focus has accordingly shifted to trying to understand how scientists do occasionally succeed in overcoming underdetermination (see, e.g., Forber and Griffith 2011; Bromham 2016) and how thinking about underdetermination is itself part of the practice of historical science (Turner 2015). Still, underdetermination is definitely an issue in historical science that those with realist leanings should not ignore. (See D. Tulodziecki, “Underdetermination,” ch. 5 of this volume.)

Thus, realism with respect to historical science is not as obviously correct as it has seemed to many philosophers. The quietist natural ontological (or historical) attitude is a genuine contender in this context, as perhaps are various kinds of constructivism and Dummett-style semantic antirealism. Realism about the past might have a high degree of prima facie plausibility, but it is by no means the only game in town. Nor is it even clear how much weight we should give to prima facie plausibility, since we are talking about a domain in which some of the most successful scientific ideas have themselves been prima facie implausible – evolution, continental drift, prehistoric asteroid collisions, and so on.

Notice, however, that the questions in play here all concern the status of concrete historical unobservables – the concrete entities and processes of the deep past. Are ammonoids (and not just our representations of them) social constructs? Or should we say that they were “really real,” independently of our thoughts and theories about them, and that over time we are getting a more and more accurate picture of what they were really like? These questions about concrete historical things and events – ammonoids and mass extinctions – are worth getting clear about. Nevertheless, some of the most perplexing and least discussed issues in historical science concern abstract items such as historical trends (the focus of section 3) and historical tendencies (section 4).
The study of trends and the positing of tendencies are both central to historical science as it is practiced today, and both raise questions that connect with the realism debate. Moreover, when we shift the focus from concrete historical items to abstract ones, realism rapidly loses any intuitive obviousness that it might have started with.

3 Realism about the trend toward greater ornamentation in ammonoids

An historical trend is just a persisting, directional change in some variable of interest. Although I will focus on paleobiology here, there are many other examples of trends that one might wish to study, from global warming to falling housing prices to grade inflation to changes in gene frequencies in a population. It’s also worth drawing a distinction between population-level trends


(e.g., an increase in the average life expectancy of humans over a given time interval) versus individual-level trends (e.g., height increase over the lifetime of a particular person). A population-level trend is a directional change in some aggregate measure, while an individual-level trend is a directional change in some property of an individual. More generally, a trend is just a special case of a pattern. For example, stasis is also an important evolutionary pattern, though it is just the opposite of a directional trend. (For discussion of the issue of trends vs. tendencies, see McShea and Rosenberg 2008: 147–151.)

The patterns and trends that paleobiologists care about usually involve aggregate measures at larger scales, such as body size increase (also known as Cope’s rule) and complexity increase. McShea (1998) surveys some of the candidates for larger-scale evolutionary trends. But trends also occur at smaller scales, and the trend toward greater ornamentation in ammonoids is a good example to work with. A lot of work in paleontology goes into documenting such trends – that is, into showing that they are real. But what exactly does it mean to say that a historical trend is real in the first place? This is where the going gets a little more difficult for the scientific realist.

To start with, consider the difference between concrete objects, like ammonoids, and abstractions such as “average amount of ornamentation of ammonoid shells.” Historical trends are abstract items – statistical phenomena. They are good examples of what Bogen and Woodward (1988) call “phenomena” as contrasted with data. In the present context, the data are just fossils, and the evolutionary trend – the phenomenon – must be inferred from them. The phenomenon, then, becomes an explanatory target in its own right: an evolutionary pattern or trend is an historical phenomenon that needs explaining.

What might it mean to be a realist with respect to historical phenomena such as patterns and trends? One opening suggestion is that the scientific realist about trends would have to be a realist about abstract objects. But that suggestion seems like a false start (Dennett 1991). When scientists disagree, as they sometimes do, about whether this or that is a real trend, they are not having a disagreement about the metaphysical status of abstract objects or of statistical phenomena in general. It is far more helpful to think of the realist about trends as someone who is committed to the view that (putting it very roughly) the historical trends that scientists have good evidence for are (in some sense) real and that they underlie and explain the data.

But even this modest approach soon runs into complications. Whether a trend shows up as real depends on a variety of factors, such as (1) the time interval under consideration; (2) which clade(s) we decide to study; (3) whether we are looking at the minimum, maximum, mean, or some other value for the trait in question; (4) how we decide to individuate traits in the first place; and (5) which trait(s) we decide to track. A trend is only real relative to a given specification of these parameters. For example, Gould (1997) once argued that Cope’s rule of evolutionary size increase is merely a psychological artifact.
His point was that paleontologists are victims of confirmation bias: they approach things with a prior conviction that selection drives size increase and that selection is an optimizing force, so when they look at the fossil record, they tend to fixate on clades and time intervals relative to which Cope’s rule shows up as real. And of course it is real, as Gould would have to allow – but only relative to that specification of parameters, which is entirely up to us.

In light of this point, should we say that Gould was a realist or an antirealist about Cope’s rule? On the one hand, he looks like an antirealist, since he’s saying that the reality of an evolutionary trend depends on us – that is, that trends are only real relative to how we specify parameters (1) through (5), and nothing “out there” in nature tells us how to do that. But on the other hand, from a certain angle, Gould might also look like a realist. For once the parameters have been specified, it turns out that there is an objective, mind-independent fact of the matter about whether

the trend is real relative to that specification. Gould would have to concede, for example, that if you focus only on Jurassic ammonoids, you see size increase (Monnet, Klug, and De Baets 2015).

The issues are relatively straightforward; the problem is that it just isn’t clear what to make of “realism” and “antirealism” as opposing views in this context. Scientists do often disagree about whether a trend is real. But in cases where there is prior agreement on the parameter specification, this just means that they have conflicting hunches about an open empirical issue. In other cases, they might just disagree about the parameter specification. We might ask what it would mean to be a realist about a trend like “evolutionary size increase,” but that way of describing the trend leaves the parameters less than fully specified. That description gives us the trait (5), but not much else to go on. For example, are we talking about the whole history of ammonoids or just the Jurassic period? Are we talking about mean body size or maximum body size? Once the issues are suitably clarified, it’s not clear that we gain much by talking about realism versus antirealism about historical trends.

Even once the parameters have been specified, patterns and trends can vary with respect to the signal-to-noise ratio. This is a point that Dennett (1991) makes vividly. A trend such as increasing ornamentation in ammonoid shells could be very robust, with little noise. We might see ornamentation increasing steadily in all lineages, with the heavily ornamented ones flourishing and speciating, while the boring ones either evolve toward greater ornamentation or go extinct. In other words, both natural selection within lineages and sorting among them could contribute to the trend. But evolutionary history almost always includes a lot of noise. Particular lineages do not always cooperate with the broader pattern. Within-lineage and between-lineage processes can work against each other. The trend could be so slight, with reversals, wobbles, and episodes of stasis, that we can barely discern it. So how much noise should we tolerate before we decide that a trend isn’t real? That is just a judgment call.

Trends and patterns are just statistical phenomena. And one might think that the scientific realism debate was never supposed to be about statistical phenomena. Rather, it was supposed to be about the status of the unobservable items that scientific theories refer to or that scientists posit for predictive and explanatory purposes. So perhaps the initial focus of discussion on the status of concrete ammonoids got things right. But in that case, there is a clear mismatch between the way that philosophers interested in scientific realism typically frame things and the practice of historical science. Paleobiological research today is all about trends and patterns. The individual ammonoids themselves take a back seat. Moreover, paleobiological theorizing usually targets the trends and patterns rather than the individual organisms. For example, Buckman’s first law of covariation says that in ammonoids, the degree of ornamentation of the shells covaries with the degree of involution/evolution of the shell. “Evolution” in this context is just a geometrical term: in “evolute” shells, the outer whorl of the shell does not cover up the inner ones.
Buckman’s law describes a further pattern, perhaps best characterized as a meta-pattern: increasing evolution of shells (in the geometrical sense of “evolution”) and increasing ornamentation tend to co-occur. Explaining the co-occurrence of these trends is a further challenge for theory in paleobiology and evolutionary developmental biology (Monnet, De Baets, and Yacobucci 2015). The important point is that the target of the theorizing is usually the pattern and rarely the individual prehistoric animal. The individual concrete posits of our theory of prehistory – the posits that do the work of supplying causal explanations of where the fossils came from – play less of a role in the scientific work than the patterns in the historical record. And it’s by no means clear what it would mean to be a realist or an antirealist about those patterns. There is, in other words, a disconnect between the ways in which philosophers of science have traditionally framed the questions of the realism debate and the practice of an awful lot of historical science.


4 Realism about the tendency toward greater ornamentation in ammonoids

Let’s shift now from patterns to processes and from historical trends to tendencies. In general, paleontologists are interested in drawing inferences from patterns to processes. They posit certain sorts of historical processes in order to explain historical patterns and trends. Once we put it this way, it seems like we are back on the familiar ground of the scientific realism debate. We are talking not about the status of statistical phenomena but about the status of the things scientists posit in order to make sense of those phenomena.

The trend toward increasing ornamentation in ammonoid shells is a lovely candidate for explanation by McShea and Brandon’s zero-force evolutionary law, or ZFEL. McShea and Brandon (2010) argue that in evolutionary systems, there is always a bias in favor of greater structural complexity (see also Brandon 2010). Adding spines and nodules to a shell is a clear way of increasing its structural complexity. McShea and Brandon maintain that when a system is in its zero-force condition – that is, when no external forces are acting upon it – complexity and diversity will increase. On their view, complexity and diversity turn out to be two sides of the same coin, since they conceive of structural complexity as diversity or heterogeneity of part types. Their approach turns traditional views about the relationship between natural selection and complexity upside down. Traditionally, the thought had always been that where complexity increases over evolutionary history, it must be favored by natural selection. McShea and Brandon argue, however, that complexity increase should be the default expectation for evolving systems; it’s what we should expect to see when other forces that might act on the system, including natural selection, are put out of action. They actually invoke natural selection to explain stasis and reduction of complexity in some lineages, such as eye loss in cave-dwelling species.

McShea and Brandon’s ZFEL is an instance of what we might call an inertial state model of evolutionary change (or perhaps an ISM for short). Generally speaking, an inertial state model partitions the possible states of a system into two groups, while also specifying a set of relevant external forces. When there are no external forces operating on the system, it will be in its inertial state(s). When the system deviates from its expected inertial state, the deviation is to be explained in terms of the operation of external forces.

McShea and Brandon’s ZFEL is helpfully contrasted with the Hardy-Weinberg principle. The Hardy-Weinberg principle describes what happens in a population from one generation to the next, assuming that no external forces (selection, drift, nonrandom mating, mutation, or migration) are operating on it. The principle says that if you start with certain allele frequencies in one generation, you will get certain genotype frequencies in the next. The allele frequencies will remain stable. Ever since Sober (1984), it has been common for philosophers of biology to think of the Hardy-Weinberg principle as describing the zero-force condition for biological populations. Thus, the difference between the Hardy-Weinberg principle and the ZFEL is striking: the former treats stasis (with respect to allele frequencies) as the inertial state of any population, while the latter treats directional change (i.e., complexity/diversity increase) as the inertial state.
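To make the contrast concrete, the Hardy-Weinberg principle can be stated in its familiar textbook form (a standard formulation, not Sober’s or McShea and Brandon’s own notation). For one locus with two alleles A and a at frequencies $p$ and $q = 1 - p$, random mating yields genotype frequencies

\[
p^2\ (AA) + 2pq\ (Aa) + q^2\ (aa) = 1,
\]

and the allele frequency in the next generation is

\[
p' = p^2 + \tfrac{1}{2}(2pq) = p(p + q) = p.
\]

Absent selection, drift, mutation, migration, and nonrandom mating, allele frequencies thus stay put: stasis is the zero-force expectation.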
The ZFEL and the Hardy-Weinberg principle also appear to clash when it comes to what counts as an external force. For example, the Hardy-Weinberg principle treats random genetic drift as a force that can affect gene frequencies in a population. The ZFEL entails a different perspective on drift, treating it as what happens in the zero-force condition. This clash of perspectives has been the subject of lively debate in recent philosophy of biology (Barrett et al. 2012; Brandon and McShea 2012; Gouvêa 2015).

The details of the discussion of the ZFEL are less important in the present context than the larger issue of how to choose between rival ISMs. Suppose we pick out a trait that we care about, such as

complexity, and define it, as McShea and Brandon do, in a way that makes it easy to apply to the fossil record. We start by documenting trends in the fossil record, such as the increasing ornamentation of ammonoid shells. To explain the trends, we want to introduce a picture of the underlying evolutionary tendencies. But conflicting pictures are available. There is the traditional population genetics view, expressed by the Hardy-Weinberg principle, which treats stasis as the inertial state and explains directional change through the operation of forces such as natural selection. Then there is the ZFEL, which says that the increasing ornamentation is exactly what we should expect when no external forces are acting on the system. According to the first perspective, the increasing shell ornamentation through the Mesozoic is a tell-tale signature of natural selection. According to the second, the very same trend is evidence that natural selection is relaxed. This clash of theoretical perspectives – or of ISMs – is an excellent test case for thinking about realism in historical science.

For their part, McShea and Brandon shy away from a realist interpretation of the ZFEL. Having introduced this contrast between the ZFEL and the Hardy-Weinberg principle, they write,

The question we want to address here is this: are there objective matters of fact that settle what count as forces in a particular science, and so what counts as the zero-force condition, or is this a matter of how we set out our theory, and so a matter of convention?
(McShea and Brandon 2010: 102)

This question connects in obvious ways with the realism debate. Although McShea and Brandon do not put it this way, we can easily define realism here – or perhaps realism about ISMs – as the view that there is an objective, mind- and theory-independent fact of the matter about what should count as forces and what should count as the zero-force condition. By contrast, conventionalism or anti-realism would be the view that there is no objective fact of the matter here for us to discover and that the difference between rival inertial state models has more to do with our own subjective expectations about natural systems than with the deep structure of those systems themselves. McShea and Brandon then make a conventionalist move:

We will not dare to try to answer this question in general, though we will share our suspicions: in some cases objective facts will settle the matter, but in most cases they will not. But in the present case it is clear that we must take a conventionalist stance (sensu Reichenbach 1938).
(McShea and Brandon 2010: 102)

In other words, they are taking an antirealist perspective on their own inertial state model. But why?

What counts as a zero-force condition depends on our choice of how to characterize an evolutionary system. We have chosen a quite minimal characterization, namely any system in which there is reproduction and heritable variation. We think that there are good reasons for this choice – in other words, that it is not an arbitrary choice. But it is a choice and there are alternative ways of theorizing. (McShea and Brandon 2010: 102–103)

McShea and Brandon’s argument for conventionalism is a little difficult to make out here. The problem is that it is hard to see how the decision to focus on evolutionary systems (characterized here as systems exhibiting reproduction and heritable variation) actually constrains our choice between rival ISMs. The inertial state for such systems might be complexity increase, but it

might also be stasis. The problem of choice arises after we decide what sort of system to focus on, because rival ISMs give us different ways of conceptualizing those systems.

Gouvêa (2015) defends a pluralist view of the ZFEL and the Hardy-Weinberg principle. She seems to endorse McShea and Brandon’s conventionalism but then draws the consequence that we do not have to make a final, once-and-for-all choice between rival ISMs. Different inertial state models of evolutionary systems have different advantages and disadvantages. Gouvêa argues that nothing prevents biologists from using apparently conflicting ISMs for different purposes. She suggests that the Hardy-Weinberg principle and the ZFEL have distinct advantages. The former has a computational advantage because the theoretical apparatus of population genetics is based on it. The ZFEL, on the other hand, has a heuristic advantage, perhaps because of its promise to generate explanations of both macro- and microevolutionary phenomena. So, Gouvêa wonders,

Why should we choose at all? Though they share a common epistemic structure, the two interpretations definitively explain opposite sorts of phenomena: one accounts for stasis and the other for change. (2015: 379)

One striking feature of the ongoing discussion of the ZFEL among philosophers of biology is that no one, as far as I can tell, is explicitly defending a thoroughgoing realism about the laws and forces doing the explaining, in the sense of maintaining that there is an objective fact of the matter about which inertial state model is the true or correct one.

When we turn from trends to evolutionary tendencies, the problem of articulating the realist’s commitment gets a lot easier. The realist holds that there is an objective, mind- and theory-independent fact of the matter about whether stasis or change should be the inertial state for evolutionary systems. It’s much more difficult, however, to see the way to a good argument for realism in this context. I will not attempt here to provide a full assessment of the prospects for realism about inertial state models or about evolutionary tendencies. My goal, rather, is just to identify the challenge and thus to push back against the idea that historical science represents a slam dunk for realism.

One important feature of rival ISMs is that they make sense of the same range of empirical phenomena; they just offer different explanatory perspectives on those phenomena. The challenge for the realist will be to identify some aspect of the relevant science – some empirical success, for example – that can only be explained (or, more modestly, is best explained) by invoking the hypothesis that one (and only one) of these rival ISMs is accurately representing the objective, mind-independent fact of the matter about evolutionary tendencies. So far, no one has tried to do that.

5 Conclusion

If we start by asking, “What are the theoretical posits of historical science?” there are at least three possible answers. And the challenges of formulating and defending scientific realism are different in each of these three cases. Conveniently, research on Mesozoic ammonoids highlights all three of these questions:

1 Should we be realists about the ammonoids qua concrete organisms?
2 Should we be realists about trends in ammonoid evolution, and what would that even mean?
3 Should we be realists about the tendency toward greater complexity in ammonoids?

One limitation of virtually all of the discussion of realism in historical science so far is that it has focused exclusively on (1). But most of the action in ammonoid paleobiology concerns (2) and (3).


The discussion in this chapter is not intended as an exhaustive critique of realism about historical science. But it should, I think, leave scientific realists somewhat off balance, especially those who might be tempted to say that realism is easier to defend in historical contexts than, say, in the context of fundamental physics. Even with respect to question (1), which should be the easy one, there are viable non-realist alternatives. However, the deeper problems for realism in the context of historical science concern (2) and (3). With respect to (2), it’s not even clear how to formulate a realist view of trends and patterns. And with respect to (3), the realist view is difficult to motivate and defend. Maybe there is a way to do it, but no one has even tried, and antirealist views – whether McShea and Brandon’s conventionalism or Gouvêa’s pluralism – are well represented in the ongoing discussion in the philosophy of biology.

What this means is that even if we grant everything that the realist might want to say about the concrete past – that is, in response to question (1) – the realist is still in an awkward situation. That’s because in practice, paleobiological research focuses less on questions about particular prehistoric organisms and more on documenting and explaining historical trends. So even in the best case, there is something of a mismatch between the interpretation of science that realism offers and the actual practice of historical paleobiology.

In section 1, I said that historical science is many things and that there are many ways of doing it: many methods, techniques, and inference patterns. So one limitation of the discussion in this chapter is that it has focused on only one type of research. The example is quite representative of what paleobiologists do much of the time, so it is fair to think of ammonoid research as a central example of historical science. But there are lots of different ways of reconstructing the deep past. Think, for example, of the differences between (i) using molecular clock studies to estimate evolutionary divergence times; (ii) using computer simulations to model ancient climates; (iii) using stratigraphy to predict the occurrence of oil and natural gas deposits; and (iv) using different kinds of stone scrapers to try to carve a statue just like the Venus of Willendorf in order to figure out what tools ancient artists might have used. The heterogeneity of historical science, just like the heterogeneity of science more generally, means that discussions of realism and its rivals will necessarily be fragmented. The prospects for a unified interpretation of historical science of the sort that was promised by early contributions to the realism debate are not terribly good.

Acknowledgments

I thank Juha Saatsi for helpful editorial and philosophical suggestions. I am also grateful to the KLI in Klosterneuburg, Austria, for a visiting fellowship that supported my work on this paper.

Notes

1 On some understandings of observability, such as van Fraassen’s (1980: 16), ammonoids might count as observable. But there is clearly also a sense in which we cannot observe things or events in the past. For example, we today cannot observe the Battle of Waterloo, although many did in fact observe it. For further discussion of these issues, see Turner (2007: ch. 3).
2 For further discussion of Fine’s work, see also J. Asay, “Realism and theories of truth,” ch. 30 of this volume.

References

Barrett, M., Clatterbuck, H., Goldsby, M., Helgeson, C., McLoone, B., Pearce, T., Sober, E., Stern, R. and Weinberger, N. (2012) “Puzzles for ZFEL: McShea and Brandon’s Zero Force Evolutionary Law,” Biology and Philosophy 27, 725–735.
Bogen, J. and Woodward, J. (1988) “Saving the Phenomena,” The Philosophical Review 97(3), 303–352.
Boyd, R. (1983) “On the Current Status of Scientific Realism,” Erkenntnis 19(1–3), 45–90.


——— (1990) “Realism, Approximate Truth, and Philosophical Method,” in W. Savage (ed.), Scientific Theories, Minneapolis, MN: University of Minnesota Press, pp. 355–391.
Brandon, R. N. (2010) “A Non-Newtonian Model of Evolution: The ZFEL View,” Philosophy of Science 77, 702–715.
Brandon, R. N. and McShea, D. W. (2012) “Four Solutions for Four Puzzles,” Biology and Philosophy 27, 737–744.
Bromham, L. (2016) “Testing Hypotheses about Macroevolution,” Studies in History and Philosophy of Science Part A 55, 47–59.
Carman, C. (2005) “The Electrons of the Dinosaurs and the Center of the Earth,” Studies in History and Philosophy of Science 36(1), 171–173.
Cleland, C. (2002) “Methodological and Epistemic Differences between Historical and Experimental Science,” Philosophy of Science 69, 474–496.
——— (2011) “Prediction and Explanation in Historical Natural Science,” British Journal for the Philosophy of Science 62(3), 551–582.
Currie, A. (2015) “Marsupial Lions and Methodological Omnivory,” Biology and Philosophy 30, 187–209.
Currie, A. and Turner, D. D. (2016) “Introduction: Scientific Knowledge of the Deep Past,” Studies in History and Philosophy of Science Part A 55, 43–46.
Dennett, D. (1991) “Real Patterns,” Journal of Philosophy 88(1), 27–51.
Dummett, M. (2005) Truth and the Past, New York: Columbia University Press.
Fine, A. I. (1984) “The Natural Ontological Attitude,” in J. Leplin (ed.), Scientific Realism, Berkeley, CA: University of California Press, pp. 261–277.
Forber, P. and Griffith, E. (2011) “Historical Reconstruction: Gaining Epistemic Access to the Deep Past,” Philosophy and Theory in Biology 3 (August 2011). doi:http://dx.doi.org/10.3998/ptb.6959004.0003.003
Gould, S. J. (1996) Full House: The Spread of Excellence from Plato to Darwin, New York: Harmony Books.
——— (1997) “Cope’s Rule as Psychological Artifact,” Nature 385, 199–200.
Gouvêa, D. (2015) “Explanation and the Evolutionary First Law,” Philosophy of Science 82(3), 363–382.
Hacking, I. (1983) Representing and Intervening, Cambridge: Cambridge University Press.
McShea, D. W. (1994) “Mechanisms of Large-Scale Evolutionary Trends,” Evolution 48(6), 1747–1763.
——— (1998) “Possible Largest-Scale Trends in Organismal Evolution: Eight ‘Live Hypotheses,’” Annual Review of Ecology and Systematics 29, 293–318.
——— (2005) “The Evolution of Complexity without Natural Selection: A Possible Large-Scale Trend of the Fourth Kind,” Paleobiology 31(2 Supp), 146–156.
McShea, D. W. and Brandon, R. N. (2010) Biology’s First Law: The Tendency for Diversity & Complexity to Increase in Evolutionary Systems, Chicago, IL: University of Chicago Press.
McShea, D. W. and Rosenberg, A. (2008) Philosophy of Biology: A Contemporary Introduction, London: Routledge.
Monnet, C., De Baets, K. and Yacobucci, M. M. (2015) “Buckman’s Rules of Covariation,” in C. Klug (ed.), Ammonoid Paleobiology: From Macroevolution to Paleogeography, Dordrecht: Springer, pp. 67–94.
Monnet, C., Klug, C. and De Baets, K. (2015) “Evolutionary Patterns of Ammonoids: Phenotypic Trends, Convergence, and Parallel Evolution,” in C. Klug (ed.), Ammonoid Paleobiology: From Macroevolution to Paleogeography, Dordrecht: Springer, pp. 95–142.
Parsons, K. (2001) Drawing Out Leviathan: Dinosaurs and the Science Wars, Bloomington, IN: Indiana University Press.
Reichenbach, H. (1938) Experience and Prediction, Chicago, IL: University of Chicago Press.
Ruse, M. (1999) Mystery of Mysteries: Is Evolution a Social Construction? Cambridge, MA: Harvard University Press.
Seilacher, A. (1988) “Why Are Nautiloid and Ammonite Sutures So Different?” Neues Jahrbuch für Geologie und Paläontologie 177, 41–69.
Sepkoski, J. J., Jr. (1978) “A Kinetic Model of Phanerozoic Taxonomic Diversity I: Analysis of Marine Orders,” Paleobiology 4(3), 223–251.
——— (1979) “A Kinetic Model of Phanerozoic Taxonomic Diversity II: Early Phanerozoic Families and Multiple Equilibria,” Paleobiology 5(3), 222–251.
Sober, E. (1984) The Nature of Selection: Evolutionary Theory in Philosophical Focus, Cambridge, MA: MIT Press.
Stanford, P. K. (2011) “Damn the Consequences: Projective Evidence and the Heterogeneity of Scientific Confirmation,” Philosophy of Science 78(5), 887–889.


Turner, D. D. (2004) “The Past vs. the Tiny: Historical Science and the Abductive Arguments for Realism,” Studies in History and Philosophy of Science Part A 35, 2–27.
——— (2007) Making Prehistory: Historical Science and the Scientific Realism Debate, Cambridge: Cambridge University Press.
——— (2009) “Beyond Detective Work: Empirical Testing in Paleobiology,” in D. Sepkoski and M. Ruse (eds.), The Paleobiological Revolution: Essays on the Growth of Modern Paleontology, Chicago, IL: University of Chicago Press, pp. 201–214.
——— (2011) Paleontology: A Philosophical Introduction, Cambridge: Cambridge University Press.
——— (2015) “A Second Look at the Colors of the Dinosaurs,” Studies in History and Philosophy of Science Part A 55, 60–68. doi:10.1016/j.shpsa.2015.08.012
van Fraassen, B. C. (1980) The Scientific Image, Oxford: Oxford University Press.
Ward, P. (1981) “Shell Sculpture as Defensive Adaptation in Ammonoids,” Paleobiology 7, 96–100.

26 SCIENTIFIC REALISM AND THE EARTH SCIENCES

Teru Miyake

1 Introduction

The twentieth-century debate between scientific realists and instrumentalists was driven by worries about microphysics, while an earlier incarnation of the debate can be seen in early modern astronomy. These are both cases where our access to the objects of inquiry is limited in certain ways – in microphysics because atoms are too small for us to detect directly with our senses, and in astronomy because the stars and planets are so far away from us (although directly visible). Another major science in which our access to the objects of inquiry is severely limited is earth science, especially the part of it that has the aim of acquiring knowledge about the deep interior of the earth. Not surprisingly, there have always been doubts about what we can possibly come to know about the earth’s interior, especially its deepest regions. The earth sciences have a rich history not only of scientific achievements but also of epistemological discussion about how we can possibly acquire knowledge about the earth’s deep interior.

My focus in this chapter is on our knowledge of the earth’s deep interior, not of its surface. This means, unfortunately, that I must pass over a large portion of the literature on epistemological issues in geology, some of which is relevant to the issue of scientific realism. The history of geology was shaped by several celebrated controversies, from the neptunist-plutonist controversy in the early nineteenth century through the plate tectonics revolution in the twentieth. These debates have been covered in detail, and with philosophical sophistication, by authors such as Greene (1982), Laudan (1987), Oldroyd (1996), and Oreskes (1999). Within each of these debates are epistemological issues that may be analyzed in terms of underdetermination or in terms of historicist approaches to the philosophy of science. I will not be discussing these debates in this chapter for two reasons. First, the existing literature has already given them a detailed treatment, and the interested reader may consult the references I have listed. Second, I believe that the case from the earth sciences that has the most direct bearing on the issue of scientific realism is the acquisition of knowledge about the earth’s deep interior, due to its parallels to the epistemological problems of microphysics.

A characteristic feature of twentieth-century philosophical debates about scientific realism is the distinction between observable and unobservable entities.1 We might make a parallel distinction between features of the earth that are observable because they are on or near the earth’s surface and features that are unobservable because they are buried deep within the earth. Of

course, there are immediately obvious differences. The properties of the earth’s deep interior are not in principle unobservable – if we could bring them up to the surface, they would be observable in every sense of the word. Further, we take these features to be describable by the same laws that describe macroscopic features we see at the surface. In plain English: it’s just rocks, all the way down. The problem of acquiring knowledge about the earth’s deep interior is not taken to have the kind of deep metaphysical significance that some philosophers attribute to the parallel problem in microphysics. It does, however, have a similar epistemic structure. There is very little hope of our ever having direct access to the properties or features being investigated, so our access to them is always going to be through investigation of their effects on properties or features that are directly observable. This presents deep epistemological problems, as it does in the case of microphysics.

The state of knowledge about the earth’s deep interior at the beginning of the twentieth century can be summed up in the seismologist Richard Oldham’s lament:

Many theories of the earth have been propounded at different times: the central substance of the earth has been supposed to be fiery, fluid, solid, and gaseous in turn, till geologists have turned in despair from the subject, and become inclined to confine their attention to the outer crust of the earth, leaving its centre as a playground for mathematicians.
(Oldham 1906)

We can confidently say that we now know far more details about the earth’s deep interior than we did in 1906. The question I will be focusing on in this chapter is: how did the acquisition of this detailed knowledge first become possible? Now, one straightforward and simple answer is that in 1906, we were just beginning to use a new type of observation to obtain knowledge about the earth’s deep interior: seismic wave observations. This is true, but we then need to ask the question: what is it about seismic wave observations that made it possible to obtain such detailed knowledge about the earth’s deep interior where previously we could not? I believe the answer lies in the particular way in which seismological observations allowed for a procedure in which seismologists could ask, and provisionally answer, an interlocking set of questions about the earth’s deep interior. Through these interlocking questions, ever more ways of gaining epistemic access to the earth’s deep interior could be found, and ever better agreement between these methods of access could be achieved. My aim is to show how this procedure works and to sketch out how it developed historically in seismology.

In the next three sections, I will explain how we were able to gain epistemic access to the earth’s deep interior in three different eras: the era prior to the development of seismology, the first few decades of seismology, and more recent seismology. In the final section of this chapter, I will discuss more general issues about realism with regard to the earth’s deep interior, in particular issues regarding robustness, modeling, and underdetermination.

2 Gaining access: before seismology

As the seismologist Richard Oldham indicates, we had good estimates in 1906 of the earth’s gross properties – size, shape, and mean density – and we knew that the density increases towards the center and that the earth’s deep interior must have high temperatures. These facts are, however, compatible with a wide range of possibilities regarding the earth’s interior. Solid, liquid, and gas models of the earth’s interior were all considered plausible at the beginning of the twentieth century.2 But exactly why couldn’t we obtain better knowledge about the earth’s deep interior before seismological observations became possible?


In order to answer this question, it’s important to understand that even the relatively meager knowledge we had about the earth’s deep interior at the beginning of the twentieth century was a great achievement. Before Newton, we had better epistemic access to the far-away planets than we did to the earth’s deep interior, right beneath our feet. Although they were so far away, we could at least see the planets and, more importantly, measure and record their motions relative to the fixed stars. In contrast, we really did not have any epistemic access at all to the earth’s deep interior – the only way of gathering knowledge about the earth’s interior was by going down into deep mines or studying volcanoes. As Newton revolutionized astronomy, however, so he revolutionized earth science by giving us a new means of gaining epistemic access to the earth’s interior. Newtonian gravity can be conceptualized as a field whose strength everywhere is determined by the mass distribution of the attracting object. An exploration of an object’s gravitational field could potentially, then, give us information about the object that gives rise to the field.

Newton’s theory made various questions about the earth’s interior answerable. For example: What is the mean density of the earth’s interior?3 Newton himself suggested two ways of answering this question. The first way is through measurements of the strength of gravity at the earth’s surface. In the mid-eighteenth century, Pierre Bouguer derived an equation for the ratio of the strength of gravity at the top of a tableland region of the earth to its strength at sea level, as a function of the height of the tableland, the radius of the earth, and the ratio of the mean density of the earth to the density of the tableland (σ/σ’). He then carried out pendulum measurements at high altitude in Quito, Peru, and came up with a result of approximately 4.5 for σ/σ’. The result relies on a highly idealized model of the earth and the tableland, and Bouguer himself realized the high uncertainty of any such model, but we can see that the Newtonian theory here enables us to bring surface measurements to bear on questions about the earth’s deep interior. Bouguer did later experiments using a different model, involving the influence of a nearby mountain on a plumb bob. Similar experiments done later by Nevil Maskelyne and others yielded a value of 4.5 g/cm³ for the mean density of the earth, later corrected to 4.95 g/cm³ by better estimates of the density of the mountain. Experiments of this type continued into the late nineteenth century, yielding results of 5.3 g/cm³ in 1855 and 5.77 g/cm³ in 1880.

The second way of determining the earth’s mean density is by directly measuring the attraction between two masses in a laboratory. This attraction can then be compared to the calculated gravitational attraction of the earth, determining values for the earth’s mass and mean density. Henry Cavendish carried out a celebrated series of laboratory experiments in the late eighteenth century to measure the earth’s mean density in this way, which ultimately resulted in a value of around 5.4 g/cm³ for the earth’s density. Similar experiments4 carried out in the late nineteenth century resulted in various values around 5.5 to 5.7 g/cm³.

I now want to make a couple of points about gaining access to the earth’s deep interior. First point: a well-developed mathematical theory gives us epistemic access to hitherto inaccessible worldly features.
Given a theory of gravity, models can be constructed that relate quantities representing properties of the earth’s deep interior (e.g., mass density) to quantities representing properties that can be observed on the earth’s surface (e.g., strength of surface gravity). Other theories can also be used to construct models that connect properties of the earth’s deep interior with properties at its surface, and these models can also potentially be used to make inferences about the deep interior. The nineteenth century saw a great flowering of sophisticated physical theories of heat, electricity, magnetism, and mechanical waves. All of these theories can and have been used to make inferences about properties of the earth’s deep interior. The theory of elastic waves was particularly crucial for the problem of gaining access to the earth’s deep interior, as we shall see.
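Before moving to the second point, it may help to make the first concrete. The two density-measurement strategies can be put schematically in modern notation (a reconstruction for purposes of illustration, not Bouguer’s or Cavendish’s own formulations). For a spherical earth of radius $R$ and mean density $\sigma$, with a tableland of height $h \ll R$ and density $\sigma'$, Newtonian theory gives, to first order in $h/R$,

\[
\frac{g_{\text{top}}}{g_{\text{sea}}} \;\approx\; 1 - \frac{2h}{R} + \frac{3}{2}\,\frac{h}{R}\,\frac{\sigma'}{\sigma},
\]

so pendulum measurements of the left-hand side, together with $h$ and $R$, yield the density ratio $\sigma/\sigma'$. Alternatively, a laboratory measurement of the attraction between known masses fixes the gravitational constant $G$; combining $g = GM/R^2$ with $M = \tfrac{4}{3}\pi R^3 \sigma$ then gives

\[
\sigma \;=\; \frac{3g}{4\pi G R} \;\approx\; 5.5\ \text{g/cm}^3.
\]

In both cases, a quantity characterizing the inaccessible interior is computed from quantities measured at or near the surface.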


Second point: these measurements of the earth’s mean density are theory and model mediated. They are theory mediated in that they presuppose that the theory of gravity is accurate to a sufficient degree of approximation. They are also model mediated in that they further presuppose that the models being constructed, such as Bouguer’s tableland model, are in some way capable of yielding results that approximate the actual values of quantities that represent real features of the earth, such as its mean density.5

A worry that might be raised with regard to, say, Bouguer’s measurement is whether his tableland model is a “close enough approximation” to the actual situation. But exactly what does that mean? What the model directly tells us about is a counterfactual situation where there is an earth-like massive object that is homogeneous and spherically symmetric except for a tableland region with a particular size and shape. When the only force involved is the force due to the gravity of the massive body, there is a mathematical relation between the strength of gravity at the top of the tableland and the ratio σ/σ’ of the mean densities of the earth-like object and the tableland, such that if I know the value of the strength of gravity at the top of the tableland, I can infer the value of σ/σ’. Now, what is the relation between this counterfactual situation and the real earth? If I take the value of σ/σ’ calculated using this model to be an approximation to the actual value for the earth, I am assuming that the differences between the earth and the model will result in a relatively minor difference in the value of the gravitational field strength – that if we took the model and added in further details, the calculated value of σ/σ’ would get closer and closer to the true value for the earth. The hope here is that any discrepancies between the value that the model gives for σ/σ’ and the true value would be due to the particularities of the model, which could potentially be corrected for.

Now, how do we get evidence that this is indeed the case? As I have mentioned, there were two major ways of measuring the earth’s mean density. Over time, measurements using each of these methods were refined, and as they were refined, the resultant values for the earth’s mean density tended to converge. By “refining,” I mean that factors that were not accounted for in earlier models were taken into account in later ones, resulting in modifications to the mean density measurement. For example, in the mountain experiments, corrections could be made for the shape of the mountain, or better estimates of its density. What is being shown through the convergence of these measurements is that the discrepancies between the different measurements of the earth’s mean density are due to the particularities of the models being used in the measurements.6 It’s to the extent that we can account for, and independently confirm, the reasons for departures from convergence that we can have confidence that we are measuring real properties of the earth.

Throughout the eighteenth and nineteenth centuries, gross properties of the earth other than mean density were measured and refined. Clairaut, Laplace, and others advanced theoretical investigations of the figure of the earth and its moments of inertia. William Hopkins, Lord Kelvin, and others brought the theory of physical chemistry, the theory of heat, and astronomical evidence to bear on the problem of the earth’s interior.
Still, radically different models of the earth, including solid, liquid, and gaseous interiors, were postulated and were all serious contenders, even well into the seismological era.7 Apart from facts about gross properties of the earth, we could not have much claim to having knowledge about the earth’s deep interior. Why couldn’t we go any further than this? The key, I think, is understanding the kind of investigation that seismic wave observations allowed.

3 Gaining access: the seismological era

The deep interior of the earth is vast and immensely complicated. It seems like a hopeless endeavor, on the face of it, to determine in great detail facts about the interior of such a complicated object, given only observations at its surface. The reason why surface gravity

measurements could not give us more detailed information about the earth’s interior is ultimately a matter of underdetermination. (See D. Tulodziecki, “Underdetermination,” ch. 5 of this volume.) In effect, what mean density measurements are telling us is what the properties of a homogeneous earthlike object would be if we were to measure the same surface gravity at its surface as we see on the earth. But the earth’s interior is not homogeneous. And heterogeneous earthlike objects can be radically different from each other yet give rise to the same surface gravity measurements.8 If the earth’s interior is heterogeneous and we want to know the individual properties of each of the heterogeneous parts, we need a way of separating out the contributions from each of these parts. But gravity does not seem to give us a way to do this.

Now let’s consider seismic waves. An earthquake occurs when there is a certain kind of sudden motion along a fault that results in the generation of seismic waves. Seismic waves propagate outwards from this seismic source through the earth’s interior and along its surface, and they are detected by seismometers located at various points on the earth’s surface. A model can be constructed, using the theory of elastic waves, that connects surface measurements of seismic waves with the mechanical state of the earth’s deep interior.

The theory of elastic waves had been developed decades before the development of modern seismology, for an entirely different purpose. The theory of the luminiferous ether took light to be waves in an elastic solid medium, and a great deal of work was done in the mid-nineteenth century to work out a mathematical theory of waves in elastic media. In 1828, Poisson showed that a homogeneous elastic medium would support two kinds of waves, dilatational waves (known as P waves in seismology) and shear waves (S waves), while later papers by Rayleigh and Love showed that, in addition to these, a third and fourth kind of wave can be created at the surface of such a medium. In a homogeneous and isotropic medium, the velocity of P waves depends upon the rigidity, incompressibility,9 and density of the medium, while the velocity of S waves depends upon the rigidity and density. Thus, the speed of both P waves and S waves will depend upon the local mechanical properties of the medium as they are propagating through it. If there are discontinuities in a medium where density or the elastic constants change abruptly, both P and S waves will be reflected and refracted in a manner analogous to optics. If the mechanical properties change smoothly as the waves propagate through the medium, the waves will follow curved paths through the medium.

At the end of the nineteenth century, P and S waves were merely mathematical constructs, as far as seismologists were concerned. Waves that correspond to these two types of mathematical constructs would exist in a suitably elastic earth, but it could have been the case, as indeed some early seismologists thought, that complex heterogeneities in the earth’s crust would so affect P and S waves that by the time they got to seismometers, we would not be able to separate them out. Luckily, this turned out not to be the case. Oldham was able to identify P, S, and Rayleigh waves in seismographic observations from a large Indian earthquake of 1897.
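For reference, the dependence just described takes a simple closed form in the standard theory of elastic waves (a textbook statement, not tied to any particular historical formulation). In a homogeneous, isotropic medium with rigidity $\mu$, incompressibility $\kappa$, and density $\rho$,

\[
v_P = \sqrt{\frac{\kappa + \tfrac{4}{3}\mu}{\rho}}, \qquad v_S = \sqrt{\frac{\mu}{\rho}}.
\]

Two consequences matter in what follows: $v_P > v_S$, so P waves always arrive first; and $v_S = 0$ when $\mu = 0$, so a liquid, having no rigidity, transmits no shear waves.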
Because a liquid medium cannot support S waves, the detection of S waves that apparently traveled through the earth’s interior was taken to show that it must be solid. Early seismic wave observations made it clear that waves that travel the same angular distance across the surface of the earth have approximately the same travel times, showing that the earth can reasonably be approximated as spherically symmetric. These travel times could then be compiled and used to make inferences about the mechanical properties of the earth’s interior. Suppose we have a non-rotating earthlike object that is spherically symmetric and elastically isotropic, in which P and S wave velocities depend only on the distance from the center of the object. If we are given the functions α(r) and β(r) that respectively represent P and S wave velocities as a function of radius, then for any given angular distance on this earthlike sphere, a simple integration will allow us to calculate the times for P and S waves to travel that distance. The inverse problem, that of finding the functions α(r) and β(r) given travel times, can be solved through techniques for solving integral equations developed by Abel.
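The structure of the forward and inverse problems can be sketched in modern notation (a textbook-style reconstruction, not the original presentation). Writing $\xi(r) = r/v(r)$ for the relevant velocity $v$ (either $\alpha$ or $\beta$), a ray with ray parameter $p$ that bottoms at radius $r_{\min}$ in a sphere of radius $R$ has angular distance and travel time

\[
\Delta(p) = 2p \int_{r_{\min}}^{R} \frac{dr}{r\sqrt{\xi^2 - p^2}}, \qquad
T(p) = 2 \int_{r_{\min}}^{R} \frac{\xi^2\, dr}{r\sqrt{\xi^2 - p^2}}.
\]

The inversion later known as the Herglotz-Wiechert method, applying Abel’s techniques, recovers the bottoming radius of the ray with parameter $p_1$ directly from the observed travel-time curve:

\[
\ln\frac{R}{r_{\min}} = \frac{1}{\pi} \int_{0}^{\Delta_1} \cosh^{-1}\!\left(\frac{p(\Delta)}{p_1}\right) d\Delta,
\]

where $p(\Delta) = dT/d\Delta$ is read off as the slope of the travel-time curve.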

Gustav Herglotz, Emil Wiechert, and others applied these techniques to seismology in the first decade of the twentieth century.

Another important development in early seismology was the discovery of the earth’s core and the determination of its properties. A series of investigations of the effects of the gravitational force of the sun and the moon on the precession and nutation of the earth, and of tidal forces on the earth’s interior, carried out in the mid-nineteenth century by Hopkins, Kelvin, and George Darwin, showed that the earth’s interior must have a very high rigidity. A liquid interior, favored by many geologists in the early nineteenth century, would have no rigidity. One way to keep a liquid interior while maintaining the required rigidity was to have an earth with a liquid layer and a highly rigid, solid core. Seismological evidence changed the tentative nature of such hypotheses about the earth’s core.

In 1906, Oldham noticed what he took to be a delay in the arrival of S waves at an angular distance of 130 degrees from the seismic source, which he took to show that S waves are passing through a core with slower wave speeds. This was prior to the development of the sophisticated travel time techniques by Herglotz and Wiechert that could account for the curved paths taken by P and S waves through the earth’s interior. Using these more sophisticated techniques, Beno Gutenberg showed in 1912 that no unreflected P waves can be detected in a “shadow zone” between 108 and 140 degrees from the seismic source, and that P waves beyond 140 degrees had a considerable time delay. This was taken to indicate the existence of a core whose boundary lies at a depth of 2900 km.

Once the existence of the core is established, a question you can ask about it is whether it is solid or liquid. One way to answer this is to remember that S waves cannot pass through a liquid medium, so if S waves that pass through the core could be detected, this would be evidence for the core’s solidity. Into the 1920s, there were no convincing observations of S waves passing through the earth’s core, but other considerations, such as the high rigidity of the earth’s interior required by astronomical observations, seemed to support a solid core. In 1926, Harold Jeffreys showed that there is a discrepancy between the distribution of rigidity within the earth implied by astronomical observations and that implied by seismic wave observations if it is assumed that the mantle and the core are both solid. He concluded that the core must have extremely low rigidity and is therefore likely liquid. In the 1930s, Inge Lehmann postulated the existence of an inner core to account for certain anomalous P waves that could be detected within the shadow zone. Subsequent investigations by Gutenberg, Keith Bullen, and others, continuing into the 1970s, gradually drew together evidence that the inner core is solid, and the observation in the early 2000s of a shear wave that passes through the inner core (the PKJKP phase) is now taken as definitive evidence that it is solid.

Let me now make three points about the particular way in which seismic wave observations allowed us to gain epistemic access to the earth’s deep interior. First, investigations here could be carried out in terms of a series of questions: Is there a core? What is the core diameter? Is the core liquid or solid? Is there an inner core within the core, with an abrupt discontinuity? Where is the discontinuity located?
Is the inner core solid or liquid? Similar questions could be, and indeed were, asked with regard to other parts of the earth, for example the mantle: What is the distribution of seismic velocities within the mantle? Are there discontinuities within the mantle? Where are these discontinuities located? These questions are interlocking in the sense that the answers to some of them have a bearing on the answers to some of the other questions. For example, a determination of the diameter of the core will have a bearing on the calculation of rigidity of the earth's interior, which will in turn have a bearing on the question whether the core is liquid or solid. The distribution of seismic velocities in the mantle also has a bearing on this issue, because seismic velocities are related to rigidity, and the rigidity of the mantle must be estimated in order

to determine the rigidity of the core. We can find many more connections between these and other questions if we delve further into the details.

Second point: for most of these questions, there are multiple ways of bringing evidence to bear on answering them. The use of multiple independent methods to attempt to measure or detect the same property or feature of the world has been termed "robustness" by Wimsatt (1981, 2007).10 In a related vein, Hacking (1983: 200–201) commented that you can have greater confidence that you are observing real bodies through a microscope, and not artifacts, if you can detect the same bodies using two or more microscopy techniques, each relying on different physical processes. Franklin (1986) examined this and further strategies for distinguishing between real phenomena and artifacts, in the context of experimental physics, in an "epistemology of experiment". I am broadly in agreement with the epistemological views of Hacking and Franklin, but there is a major difference between seismology and the cases they discuss. The ability to intervene plays an important role in these epistemic strategies, but seismologists are limited in that respect. In microscopy, if two different methods seem to indicate different things about what is being observed, you might re-calibrate or otherwise adjust the microscopes in an attempt to get the images to agree. Such adjustments and interventions are less easily done in seismology, because seismologists are not in a position to isolate or control the things they are trying to observe. For example, a seismologist who wants to determine the rigidity of the earth's core must deal with the fact that between the earth's core and the earth's surface, there is a vast volume of the earth's interior about which very little is known. This leads me to my third point.

Third point: the interlocking questions allow for a kind of exploration to be carried out through attempts to account for discrepancies or other anomalies. I have so far focused on the confirmatory aspects of convergence, but there is another aspect to the procedure of accounting for departures from convergence. It is helpful to think about this procedure in terms of exploration – searching for whatever factors are giving rise to the discrepancies or anomalies. For example, Lehmann postulated the existence of an inner core to account for anomalous observations of P waves in Gutenberg's shadow zone. These observations are only anomalous against the background of the postulation of a core, its size, and the particular ways in which P waves were taken to propagate through the earth, which was dependent on determinations of the seismic wave velocities in the mantle. Once this inner core was postulated, attempts could be made to measure its rigidity and diameter, which would result in further corrections to calculations of rigidity, as well as further exploratory attempts to find waves that travel through the inner core. As I have mentioned, what makes observation of features of the earth's interior difficult is that we have very little antecedent knowledge about vast parts of it, and our means of controlling or intervening in it are limited. So instead of the kinds of adjustment and calibration that Hacking and Franklin suggest can give us confidence in the case of experimental apparatus, seismologists must do their "adjustment" by other means. How can they go about doing that?
Well, it turns out, luckily, that different properties and features of the earth can be accessed in different ways. Seismic waves come in different forms – P waves, S waves, Rayleigh waves, and Love waves. There are different modes of observation of seismic waves – travel times and normal modes (as I will discuss in what follows). There are various different phenomena that are related to seismic wave propagation that have effects on observations, including refraction, diffraction, reflection, anisotropy, and anelasticity. And there are ways other than seismic waves of bringing evidence to bear on the earth's interior – through observations of the earth's rotation or through tidal observations. Thus, for each of the interlocking questions I have listed, there are many different ways of determining an answer. And these answers have a bearing on answers to some of the other questions, which in turn can be determined in many different ways. Instead of the kind of adjustment and calibration that Hacking and Franklin suggest can be done in some experimental contexts,

seismologists must make use of this interlocking web of questions and answers about the earth's interior. For example, questions about the rigidity of the earth's core interlock with questions about the earth's density, its chemical makeup, dynamical properties such as the earth's rotation, gravitational effects such as the tides, and so on. There was a long drawn-out process by which the implications for all these various questions were worked out and examined, and discrepancies were accounted for. In some cases, examination of those discrepancies led to new discoveries, such as Lehmann's discovery of the inner core. It is the success of this process of exploration, and not mere agreement between methods of determination, that makes us confident that the earth's outer core really is liquid.

What, then, was the difference between the period before and after seismological observation that allowed us to obtain so much more detailed knowledge of the earth's interior? The difference is that with seismological observations, we were able to separate out parts of the earth that could be epistemically accessed in numerous different ways, which in turn allowed us to carry out the process of exploration through the examination of interlocking questions. Once, for example, the core was identified, we could attempt to determine its properties using various methods. A provisional determination of, say, its radius could then be used in the determination of other properties, such as rigidity. Of course, it's true that this interlocking means that a mistaken assumption about, say, the earth's core could lead to mistaken answers to questions elsewhere. But because of the way the questions here interlock with each other and with external observations such as those from astronomy, such errors are likely to show up as discrepancies elsewhere, and the mistaken assumption will then be corrected as these discrepancies are addressed. This is why we can have such a high degree of confidence that we know things about the earth's deep interior that we did not know before.

4 Gaining access: earth models

Starting in the 1930s, groups of seismologists began a concentrated effort to develop spherically symmetric models of the mechanical properties of the earth's interior. They constructed idealized earth models and compared expected travel times for such models with actual travel times of seismic waves. There was a remarkable agreement between the earth models produced by the two main groups, the Jeffreys-Bullen model and the Gutenberg-Richter model.11 Once the discontinuities between each of the layers, such as the core and the mantle, were established, the distribution of mechanical properties within each layer was mainly determined by trial and error: the models were postulated as hypotheses about the earth's internal structure, and they were compared directly against observations of travel times.

In the 1960s, new observational instrumentation made possible the recording of normal modes of oscillation of the earth. The frequencies of the normal modes of vibration of a spherically symmetric, elastically isotropic, non-rotating earthlike body can be calculated theoretically. Seismologists developed techniques for solving the inverse problem, that of determining the mechanical properties of such a body, given the frequencies of its normal modes. These techniques were then used to construct earth models that incorporate normal mode frequency observations.

At around this time, some geophysicists became worried about the possibility of radically different models being consistent with all travel time and normal mode observations. Work by George Backus and Freeman Gilbert in the late 1960s and early 1970s, which tried to address this underdetermination problem, showed that limits could be put on the degree of non-uniqueness of earth models, but only under the assumption that the functions relating earth structure to observations of normal mode frequencies were linear, an assumption that was known to be false.12
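The underdetermination worry, and the Backus–Gilbert response, can be sketched schematically (in modern inverse-theory notation, added here by way of illustration rather than taken from their papers). Linearized about a reference model, each observation becomes a linear functional of the model perturbation δm:

\[
\delta d_{i} = \int_{0}^{R} K_{i}(r)\,\delta m(r)\,dr, \qquad i = 1,\ldots,N,
\]

so any perturbation lying in the common null space of the finitely many kernels K_i fits the data exactly as well: this is the non-uniqueness. Backus and Gilbert bounded what can nevertheless be resolved by choosing coefficients a_i(r_0) so that the averaging kernel

\[
A(r, r_{0}) = \sum_{i=1}^{N} a_{i}(r_{0})\,K_{i}(r)
\]

is as concentrated about r_0 as the data allow; the corresponding combination of observations then estimates a local average of the model rather than a point value. The limitation noted above is that this whole machinery presupposes the linearized relation in the first equation.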

In the mid-1970s, several teams of seismologists began to develop earth models with the goal of coming up with a standard reference model. In 1981, this process culminated with the development of the Preliminary Reference Earth Model (PREM), which is still being used to this day, although there are now several other alternative models.

I will now make a few comments on the role of these earth models, especially standard reference models like PREM.13 First, they should not be confused with the models of measurement that I have been discussing, such as the Bouguer tableland model. Rather, as Bullen suggests, they are systematized and consistent presentations of seismological results. They are, in fact, much like the answers to the interlocking questions that I mentioned in the previous section, except they have been put into a unified, crystallized form.

Second, these models are constructed with certain purposes in mind. I have not yet mentioned another aim of seismology, that of extracting information about parameters of seismic sources from seismograms. In order to do this, the effects on seismic waves between the time they leave the seismic source and their detection on the surface of the earth, namely, the effects due to propagation through the earth's interior, must be accounted for. An earth model is needed for this purpose. The better the earth model, the better the determinations of seismic source parameters will be. Questions about seismic source parameters are thus another set of questions that interlock with questions about the earth's interior.

Further, standard earth models play an important role in recent work on non–spherically symmetric, heterogeneous earth models. Such models are not constructed from scratch. Standard reference earth models like PREM are used as initial starting models to which heterogeneities are added, often using linear perturbation techniques. These reference models are thus used in the way that the interlocking questions were used in an earlier era – to probe and explore the earth's interior through a process involving the examination of discrepancies between measurements. They are tools for discovering significant features of the earth and finding out the effects these parts would have on observations. One way to do this might be to examine the residuals between observations that the model predicts and the actual observations. Patterns in the residuals might turn up some clues as to significant features that are being left out of the model, in much the same way as anomalies pointed the way for Lehmann towards her discovery of the inner core.
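As a toy illustration of this residual-based exploration, the following sketch compares observed travel times against a stand-in reference-model prediction. The function prem_travel_time and every number here are invented for illustration; nothing is drawn from PREM itself:

```python
import numpy as np

def prem_travel_time(distance_deg):
    """Hypothetical stand-in for a P-wave travel time predicted by a
    reference earth model at a given angular distance (in degrees)."""
    return 6.5 * distance_deg ** 0.85  # illustrative functional form only

# Invented observed arrivals: angular distances (degrees), travel times (seconds)
distances = np.array([30.0, 45.0, 60.0, 75.0, 90.0])
observed = np.array([118.0, 166.0, 210.0, 252.0, 291.0])

# Residuals: observation minus reference-model prediction
residuals = observed - prem_travel_time(distances)

# A systematic trend in the residuals (rather than scatter about zero)
# hints at structure the reference model leaves out
trend = np.polyfit(distances, residuals, deg=1)[0]
for d, r in zip(distances, residuals):
    print(f"{d:5.1f} deg: residual {r:+6.2f} s")
print(f"linear trend: {trend:+.3f} s/deg")
```

The philosophical point does not depend on the numbers: what matters is that patterned, rather than random, residuals invite a search for an as-yet-unmodeled feature of the interior.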
When there are discrepancies between different earth models, this can be taken as pointing towards an as-yet-undiscovered feature of the earth's interior. A review article on earth models makes this point:
Seismologists are still searching for the proper mathematical representation of the unknown physical properties of the earth's interior. It is important at this stage to explore the earth with data that sample it in a significantly different way and to compare the resulting models. This approach is being carried out, and the formulation of the hypothesis that the inner core is anisotropic is a direct outcome of the encountered discrepancy between the isotropic models based on splitting of normal modes and travel times.
(Dziewonski and Woodhouse 1987: 39)

Here, an examination of the way in which two different models differ, one based on travel time data and one based on data on the splitting of normal modes, both of which make the idealization that the interior of the earth is entirely isotropic, led to the postulation of the anisotropy of the inner core. We can see that the procedure I described in the previous section, of exploration of the earth's deep interior through searching for factors that are giving rise to discrepancies or anomalies, has been continuing.


5 Earth science and realism

I am advocating a kind of realism about the earth's deep interior. An inner core really exists, and it really is solid. It is surrounded by an outer core that really exists, and it really is liquid, and so on. But it was by no means a foregone conclusion that we would ever come to know these facts. They were, as I have shown, extremely difficult to come by and were established much more recently than most philosophers realize. An understanding of this difficulty allows us to understand better some general issues in scientific realism. I want to end by briefly discussing two such issues in particular.

First, robustness. I have stressed the importance of convergent measurement as an indication of the reality of the property being measured, and this is related to the issue of robustness discussed in the recent philosophical literature. It's too easy, however, to misunderstand robustness as a criterion: in order for a property being measured to count as real, different purported measurements of that property must agree within a certain bound of error. I think that an examination of the history of seismology, though, reveals a more subtle way of thinking about the connection between robustness and reality. The best evidence we have that a real property is being measured is the fact that, in the process of accounting for divergence between measurements that we take to be measurements of the same property, we make new discoveries, which are then independently verified. For if real properties of the earth's deep interior were not being measured, we would not expect that, in trying to account for divergences between measurements, we would come upon such new discoveries. This also accounts for the badness of ad hoc models or theories. If a model has been overfitted to the data, we would not expect residuals or discrepancies from such a model to lead to new discoveries.

Second, underdetermination. Throughout the history of seismology, underdetermination has been the norm: at any given point, observations were compatible with multiple viable models of the earth's deep interior. This is still the case, but notice how far we have come since the time of Oldham! Recent work by philosophers such as Miyake (2011) and Belot (2015) has attempted to draw out some of the philosophical implications of geophysical work on non-uniqueness. Belot (2015), in particular, argues that results that show the non-uniqueness of solutions to certain inverse problems14 in geophysics are a challenge to realism. These non-uniqueness results show that there can be cases where, given a particular model of measurement (in the sense of section 2), and given any finite number of observations on the earth's surface, an infinite number of earth models that are consistent with all such observations can be found. There is a significant question here of the limits of resolution achievable through any particular model of measurement, but as I have discussed in this chapter, realism about the earth's deep interior has been underwritten by convergence between the results of different models of measurement and the success of the procedure of putting in corrections to these models. Non-uniqueness of solutions to any particular inverse problem is compatible with genuine knowledge about the earth's interior.
As to the wider significance of such results for scientific realism, care must be taken in laying out the analogy between inverse problems and the problem of scientific inference in general before we can properly evaluate the bearing such results have on the issue. The brief history of seismology I have presented here shows that underdetermination is an important problem in seismology, but its epistemological implications must be understood within the history of the accretion of evidence about the earth’s interior.

Notes

1 The distinction is made, for example, by van Fraassen (1980), who draws it based on what can be observed by unaided human perception. Electrons are unobservable for van Fraassen because we cannot see them with the naked eye, while the moons of Jupiter are observable, because we would be able to see them easily if brought sufficiently close. The interior of the earth would clearly be unobservable according to van Fraassen.
2 See Brush (1979) for a detailed history.
3 See Bullen (1975), chapter 2, for a history of such measurements.
4 See Zenneck (2007), originally published in 1901, for a turn-of-the-century survey and assessment of attempts to measure the gravitational constant.
5 See Tal (2012) for an investigation of epistemological issues in model-mediated measurement.
6 See Smith (2014) for a detailed discussion of this procedure as it took place in gravity theory, although Smith does not talk specifically of models.
7 See Brush (1979, 1980).
8 This was shown by Bullard and Cooper (1948), based on a mathematical result by George Kreisel (1949).
9 The elastic properties of an isotropic elastic medium can be completely captured in terms of two indices, incompressibility and rigidity. Incompressibility is an indication of the resistance of a medium to compression or dilatation, while rigidity is an indication of resistance to shear deformation. A liquid has negligible resistance to shear deformation, so it would have negligible rigidity.
10 Wimsatt takes the term from Levins (1966), who discussed the method in the context of biological modeling. The original idea of Levins was criticized by Orzack and Sober (1993) and then modified and defended by Weisberg (2006, 2013). There is a more general literature on the notion of robustness and its implications for scientific realism (e.g., Culp 1995; Stegenga 2009; Odenbaugh 2011). The notion (although not always referred to by the term "robustness") also plays an important role in a debate about the purported distinction between data and phenomena that arose in response to Bogen and Woodward (1988). See in particular Woodward (1989, 2011), McAllister (1997), Glymour (2000), Brading (2010), and Feest (2011).
11 See Bullen (1975).
12 See Miyake (2011) for discussion of the epistemological significance of these studies.
13 See Miyake (2015) for a more detailed discussion of the use of standard reference models in seismology in relation to the philosophical literature on models.
14 The classic geophysical work is a series of papers by Backus and Gilbert, starting with Backus and Gilbert (1967).

References

Backus, G. and Gilbert, F. (1967) "Numerical Applications of a Formalism for Geophysical Inverse Problems," Geophysical Journal of the Royal Astronomical Society 13, 247–276.
Belot, G. (2015) "Down to Earth Underdetermination," Philosophy and Phenomenological Research 91(2), 456–464.
Bogen, J. and Woodward, J. (1988) "Saving the Phenomena," The Philosophical Review 97, 303–352.
Brading, K. (2010) "Autonomous Patterns and Scientific Realism," Philosophy of Science 77, 827–839.
Brush, S. (1979) "Nineteenth-Century Debates about the Inside of the Earth: Solid, Liquid, or Gas?" Annals of Science 36(3), 225–254.
——— (1980) "Discovery of the Earth's Core," American Journal of Physics 48, 705–724.
Bullard, E. and Cooper, R. (1948) "The Determination of the Masses Necessary to Produce a Given Gravitational Field," Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences 194(1038), 332–347.
Bullen, K. (1975) The Earth's Density, London: Chapman and Hall.
Culp, S. (1995) "Objectivity in Experimental Inquiry: Breaking Data-Technique Circles," Philosophy of Science 62, 438–458.
Dziewonski, A. and Woodhouse, J. (1987) "Global Images of the Earth's Interior," Science 236(4797), 37–48.
Feest, U. (2011) "What Exactly Is Stabilized When Phenomena Are Stabilized?" Synthese 182, 57–71.
van Fraassen, B. (1980) The Scientific Image, New York: Oxford University Press.
Franklin, A. (1986) The Neglect of Experiment, New York: Cambridge University Press.
Glymour, B. (2000) "Data and Phenomena: A Distinction Reconsidered," Erkenntnis 52(1), 29–37.
Greene, M. T. (1982) Geology in the Nineteenth Century: Changing Views of a Changing World, Ithaca, NY: Cornell University Press.
Hacking, I. (1983) Representing and Intervening, New York: Cambridge University Press.
Kreisel, G. (1949) "Some Remarks on Integral Equations with Kernels," Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences 197(1049), 160–183.


Laudan, R. (1987) From Mineralogy to Geology: The Foundations of a Science, 1650–1830, Chicago: University of Chicago Press.
Levins, R. (1966) "The Strategy of Model Building in Population Biology," American Scientist 54(4), 421–431.
McAllister, J. (1997) "Phenomena and Patterns in Data Sets," Erkenntnis 47(2), 217–228.
Miyake, T. (2011) Underdetermination and Indirect Measurement, Ph.D. dissertation submitted to Stanford University.
——— (2015) "Reference Models: Using Models to Turn Data into Evidence," Philosophy of Science 82, 822–832.
Odenbaugh, J. (2011) "True Lies: Realism, Robustness, and Models," Philosophy of Science 78(5), 1177–1188.
Oldham, R. (1906) "The Constitution of the Interior of the Earth as Revealed by Earthquakes," The Quarterly Journal of the Geological Society of London 62, 456–475.
Oldroyd, D. (1996) Thinking about the Earth: A History of Ideas in Geology, Cambridge, MA: Harvard University Press.
Oreskes, N. (1999) The Rejection of Continental Drift: Theory and Method in American Earth Science, New York: Oxford University Press.
Orzack, S. and Sober, E. (1993) "A Critical Assessment of Levins's The Strategy of Model Building in Population Biology (1966)," The Quarterly Review of Biology 68(4), 533–546.
Smith, G. (2014) "Closing the Loop: Testing Newtonian Gravity, Then and Now," in Z. Biener and E. Schliesser (eds.), Newton and Empiricism, New York: Oxford University Press, pp. 262–345.
Stegenga, J. (2009) "Robustness, Discordance, and Relevance," Philosophy of Science 76(5), 650–661.
Tal, E. (2012) The Epistemology of Measurement: A Model-Based Account, Ph.D. dissertation submitted to University of Toronto.
Weisberg, M. (2006) "Robustness Analysis," Philosophy of Science 73, 730–742.
——— (2013) Simulation and Similarity, New York: Oxford University Press.
Wimsatt, W. (1981) "Robustness, Reliability, and Overdetermination," in M. Brewer and B. Collins (eds.), Scientific Inquiry and the Social Sciences, San Francisco: Jossey-Bass, pp. 124–163.
——— (2007) Re-Engineering Philosophy for Limited Beings, Cambridge, MA: Harvard University Press.
Woodward, J. (1989) "Data and Phenomena," Synthese 79(3), 393–472.
——— (2011) "Data, Phenomena, Signal, and Noise," Philosophy of Science 77, 792–803.
Zenneck, J. (2007) "Gravitation," in J. Renn (ed.), The Genesis of General Relativity, Vol. 3: Gravitation in the Twilight of Classical Physics, Dordrecht, the Netherlands: Springer, pp. 77–112.

27 SCIENTIFIC REALISM AND CHEMISTRY

Paul Needham

1 Introduction

Chemistry, it is often remarked, is the science of substances and their transformations into other substances. Popular descriptions often substitute "matter" for "substances". But that doesn't distinguish the subject from physics. That may well accord with the view reigning in the philosophy of science for much of the 20th century that chemistry is reducible to physics, which has proved to be more speculation than specific argument (Hendry 2010). Concern with the different substances in which matter takes form is the distinctive feature of chemistry. Aristotle had distinguished elements and mixts, and the kind of substance figured in his explanations of the behaviour of matter – what rises is air, what falls is earth. But distinctions of substance played no role in the mechanics that was distinctive of physics as it developed after the 17th-century scientific revolution. Chemistry developed independently, freeing itself from the mystical accretions then associated with alchemy. Reunion came when the division of mass into substances, explaining for example pressure in osmosis and heating in chemical reactions, was handled by thermodynamics.

The clear articulation of a theory of chemical combination by Geoffroy (1718) prepared the ground for the chemical revolution associated with Lavoisier's new systematic chemistry. A distinction between compounds and solutions (not distinguished by Aristotle's mixts) was provided by the principle of constant proportions. Although implicit in Lavoisier's work, this was taken to have been established by Proust in the first decade of the 19th century, so providing a basis for the extensive work on compositional analysis that followed. Dalton proposed an atomic interpretation of this principle, which remained controversial throughout the 19th century, when great strides were taken, particularly in organic chemistry, in understanding the great diversity and variety of structures of compounds. Physical chemistry was established as a separate branch of the subject at the end of the century, due in large part to the development of chemical thermodynamics at the hands of Gibbs (1876–1878). Perrin's investigations of Brownian motion at the beginning of the next century finally settled the question of the existence of a microscopic realm of discrete particles, and chemists began speculating on the role of the newly discovered electron in the structure and reactivity of chemical substances. (See L. Henderson, "Global versus local arguments for realism", ch. 12 of this volume, for discussion of Perrin's arguments.) Lewis (1916) set the ball rolling with a neat distinction between ionic and covalent compounds based on distinct kinds of electronic bonding between atoms. At the same time, Kurnakow
(1914) was upsetting the 19th-century scheme by questioning the universal validity of the law that compounds are what came to be called daltonides – with constant, integral proportions of elements – and introducing berthollides – compounds with variable, non-stoichiometric composition. Thermodynamics and statistical mechanics provided some understanding of berthollides. But although Schrödinger's quantum mechanics circumvented the inevitable paradoxes entailed by the classical framework in which Lewis formulated his ideas of bonding, a dispute about how it rationalised the source of the stability of the simple covalent bond has continued into this century (Needham 2014).

Some indications of how realist/antirealist themes such as referential stability in the face of theory change and realism about atoms and molecules bear on chemistry are given in the remainder of this section and pursued in the following sections. The development of the subject has brought with it radical change, which is a primary source of antirealist sentiment. The history of chemistry is as good a fund as any of abandoned theories which might furnish premises for the pessimistic induction. On the other hand, certain concepts such as that of the atom which have finally triumphed have had a long history, often associated with substantial advances of the subject (Boyle, Dalton, Lewis). Perhaps the most familiar example motivating the preservation of reference over a long period of theory change is "water", a term denoting a chemical substance which Putnam famously claims has preserved its extension from the time when a sample of the stuff was baptised with an ostensive gesture. Concerned as it is with chemistry's distinctive concept of a chemical substance, this latter example deserves particular attention here. Developments in the general notion of a substance have had implications which make it difficult to sustain Putnam's thesis about the preservation of "water"'s extension, however, calling into question the presupposition of a particular predicate regimenting chemists' understanding of water throughout the ages.

A Hacking-like thesis of successful manipulation as the criterion of ontological objectivity would support the realist pretensions of chemistry with its outstanding success in the isolation of natural substances and their manipulation in schemes of synthesis for generating new ones. (See M. Egg, "Entity realism", ch. 10 of this volume.) The Chemical Abstracts Service (CAS) database – the central repository of information for the American Chemical Society – records more than 89 million substances, many of which were synthesised with the aim of producing a substance with specific properties.

Although chemists have been speaking of corpuscules and atoms for centuries, it is difficult to find more persistent content in microscopic concepts once the sweeping generalisations are put aside and the details examined. Boyle's corpuscular philosophy has in the past been praised as an important factor in the establishment of chemistry as a branch of science in the 17th century. But Chalmers (1993, 2009: chapter 6) has clearly shown this to be pure speculation, with no impact on Boyle's valuable empirical work. Lavoisier was likewise unimpressed towards the end of the 18th century with the pretensions of atomism:

[I]f, by the term elements, we mean to express those simple and indivisible atoms of which matter is composed, it is extremely probable we know nothing at all about them; but if we apply the term elements, or principles of bodies, to express our idea of the last point which analysis is capable of reaching, we must admit, as elements, all the substances into which we are capable, by any means, to reduce bodies by decomposition.
(Lavoisier 1789 [1965]: xxiv)

The turn of the century is seen by many to be a turning point in the fortunes of atomism. Despite the controversies over its introduction, Daltonian atomism is taught in school chemistry as something broadly correct if in need of elaboration of the details, and its 19th-century critics have been dismissed as misguided positivists, whose criticism can be discarded along with the now-discredited tenets of positivism. Chalmers (2005, 2008, 2009) and Needham (1998, 2004a, 2004b, 2008) have argued that there was considerably more substance in the 19th-century criticism of atomism, and that the justification of claims about a realm of what we now call microstructure first came at the turn of the next century.

This raises questions about what is involved in the philosophical stance of realism. Both Chalmers and Needham present their cases on the basis of a realist stance that science seeks and sometimes achieves knowledge of a world independent of our thoughts. But this may be considerably weaker than some realist stances, involving no specific claims about what science discovers about the world and in particular no stance on atomism. Twentieth-century scientific atomism is not a vindication of philosophical speculations about the corpuscular constitution of matter from the 16th and 17th centuries or any other historical period.

Dalton had no notion of atoms combining to form molecules, but with this idea established by the beginning of the 20th century, chemists focused on the nature of bonds. The term "covalent bond" has since become a frequently used expression, suggesting a robust concept the finer details of which were finessed as progress has been made throughout the century. But a stable referent becomes elusive once we enter into the details. Quantum mechanics is the theory governing the behaviour of electrons and positively charged nuclei that interact to form bonds. But there is no clean application of the theory, which leads to the formulation of intractable differential equations that are tackled by imposing different models in different circumstances to gain a tractable hold on the problem. The use of multiple models is a feature of modern chemistry which invites antirealist views, with the distortions imposed by idealisation introducing false claims about the target system. Robustness analysis (Weisberg 2012, 2013: chapter 9) throws a different light on the matter. (See A. Levy, "Modeling and realism: strange bedfellows?", ch. 19 of this volume, for further discussion.)

Referential stability is to be sought, if anywhere, at the macro level. Chemical thermodynamics was developed in the latter decades of the 19th century, largely as a result of Gibbs' work, which "has required no correction since it was published, and remains to this day the foundation for the study of phase separation. The underlying principles are few, and rigorous" (Sengers 2002: 43). Many substances of former times – earth, air, fire, salt as understood in the iatro-tradition, caloric, phlogiston – posited as playing fundamental roles in the general constitution of substances, are no longer accepted as such. But substances such as the aqua fortis synthesised by Geber in the 8th century, muriatic acid identified by Libavius in the 16th century, and salts of tartar identified in the 18th century by Campanella – our nitric acid, hydrochloric acid and potassium carbonate – along with the vast numbers of substances identified since the 19th century, are by and large still recognised together with the characteristic properties with which they were originally distinguished. Reductionist ideas are still at play on the part of realists worried that the macro level is entirely concerned with observational properties, with no substantial involvement of theory.
The worry is an unnecessary concession in arguing against antirealists, who are the ones needing to delimit the observable. Realists should recall Duhem's classical argument discrediting theory-independent observation, which doesn't appeal to micro-level properties. Casual (theoretically untutored) observation of the energy or entropy associated with a macroscopic body is impossible. Elements cannot be seen in their compounds, no parts of which exhibit the characteristic properties observable in elements when isolated, and chemists have long puzzled over in what sense elements are in compounds (Paneth 1931). Similarly, substances, whether elements or compounds, recognisable in isolation are usually not casually discernible in mixtures (solutions or heterogeneous mixtures such as colloids of various kinds) in which they most frequently occur. Compounds were first systematically distinguished from solutions in virtue

of the law of definite proportions, universally recognised by the beginning of the 19th century, and their identification was linked to the development of chemical theory. Success in isolating plant alkaloids, for example, was only finally recognised once chemists had a firm theoretical grasp of what constituted a single substance of the kind they were investigating, and identification proceeded by concordant analysis and synthesis.

2 Has "water" preserved its extension?

Putnam's externalist theory of reference, according to which factors determining what a predicate applies to (its extension) are independent of the thoughts of whoever might use the term at a particular time: "'meanings' just ain't in the head" (Putnam 1975: 227), has been influential in promoting the understanding of realism as successful reference. The paradigm example is "water", supposedly holding of whatever it applies to even for those who know nothing of modern chemistry all the way back to Aristotle and before. Something counts as water (falls under the "water" predicate) provided it

bears a certain sameness relation (say, x is the same liquid as y, or x is the sameL as y) to most of the stuff I and other speakers in my linguistic community have on other occasions called "water" . . . The key point is that the relation sameL is a theoretical relation.
(Putnam 1975: 225)

Superficial appearances might tempt us to say some watery quantity of material is water (as in Putnam's thought experiment in which a substance with chemical constitution XYZ on a distant planet Twin Earth looks like water does to us), but what decides the matter is the criterion of sameness, which according to Putnam is being H2O. It wasn't until the mid-19th century that chemists finally settled on H2O as the compositional formula of water (by way of determining the structural feature it had in common with certain organic compounds, in particular alcohols and ethers; see what follows). But this determines the import of the sameL relation, according to Putnam, whatever the date when "water" is used. Necessarily, water is H2O, whether the user of "water" knows it or not. This underlies scientists' talk "as if later theories in a mature science were, in general, better descriptions of the same entities that earlier theories referred to" (Putnam 1975: 237). Water was essentially the same thing for Aristotle as it is for us, this being determined by a sameness relation that neither he nor we (unless we know enough science) are in a position to explicate. How far it is necessary to delve into modern chemical theory in order to give an adequate microscopic characterisation of a same substance relation applicable to water (without, in particular, making blatantly false claims) is a moot point. But interpreting "H2O" simply as a compositional formula (giving combining proportions of hydrogen and oxygen) suffices to distinguish water from all other substances since it has no isomers (distinct compounds with the same composition).

A more pressing issue – part of chemistry's theoretical heritage despite not concerning microstructure – which bears significantly on the preservation issue is that modern chemistry distinguishes between substances and phases. One and the same chemical substance, for example water, exhibits solid, liquid and gas phases under different circumstances, sometimes several of them simultaneously. (Several solid phases are distinguished under conditions of high pressure.) A quantity of liquid water doesn't become a different substance when it is transformed into vapour by boiling or into ice by freezing. "Water" applies to a particular chemical substance independently of whatever phase(s) the quantity of matter in question might exhibit. This phase-independent sense of substance terms is essential for the proper understanding of the compositional
claim (entailed by any adequate microscopic interpretation) that water is H2O. Hydrogen and oxygen have low boiling points and are gases when water is liquid and through a large part of the temperature range when it is solid (all comparisons taken at the same pressure). It doesn't make sense to say that water comprises hydrogen gas and oxygen gas. There is no gas in liquid or solid water. When claiming that water is composed of hydrogen and oxygen, all the substance terms are understood in the phase-independent sense. The relation of being the same substance as understood in modern chemistry is therefore not relativised to any particular phase; in particular, the relation of being the same substance water is not a matter of being the same liquid. Ice and steam are also H2O.

There are several reasons for emphasising that the substance/phase distinction is a theoretical one. It is a cornerstone of the chemical thermodynamics developed by Gibbs already referred to. An important theorem of thermodynamics is the phase rule, relating the behaviour of a quantity of matter to the number of substances in it and the number of phases it exhibits, which provided previously unsuspected interpretations of jejune data. An early application of the theorem led Roozeboom, aided by van der Waals's deductions, to the discovery of a new substance (Daub 1976).
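For reference, the phase rule can be stated compactly (in its standard modern form rather than Gibbs' own notation): for C independently variable components distributed over P coexisting phases, the number of degrees of freedom F (intensive variables that can be varied independently) is

\[
F = C - P + 2.
\]

A single substance (C = 1) exhibiting liquid and vapour together (P = 2) thus has one degree of freedom: fixing the temperature fixes the vapour pressure.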
The theoretical character of the distinction can also be appreciated by seeing how it emerged with the development of the concept of a chemical substance. Putnam's understanding of this concept is in fact an older one, albeit still part of everyday usage. Substances haven't always been understood as they are now. Aristotle's conception of substances, which exerted an influence on how such terms were understood right up to the time of Lavoisier (his rejection of some aspects of the Aristotelian view notwithstanding), builds on a somewhat different view of elements and compounds. Like Lavoisier, Aristotle had no truck with atomism. Matter was considered continuous, and substance properties were understood to apply to all of what they apply to: "any part of . . . a compound is the same as the whole, just as any part of water is water" (328a10f.). Elements were defined in such a way that their defining properties, being such as to endow them with the capacity to react with one another, were so modified by the process of combination that the resulting compounds contained no parts exhibiting the elemental defining properties. These elemental defining properties were understood to endow the respective elements with the property of being solid (in the case of the Aristotelian element earth), being liquid (in the case of the Aristotelian element water) and being gas (in the case of the Aristotelian element air). So the Aristotelian element water differed from the substance water as understood in modern chemistry not only in being one of the elements (from which all other substances are derived by combination) but also in being, as we would say, phase dependent. Aristotle understood the evaporation of water to involve the transmutation of water into another substance, air. (For more details, see Needham 2009.)

Later, still within the Aristotelian tradition though the range of distinguished substances had greatly expanded, the general conception of substances as phase bound persisted. When Joseph Black introduced the notion of latent heat in 1760, he was still under the influence of this conception and evidently thought that supplying latent heat involved chemical reactions which transformed ice into another substance, water, and water into vapour, yet another substance. Using Lavoisier's term "caloric", these reactions might be represented by the following:

ice + caloric → water
water + caloric → steam

Only free caloric, uncombined with any other substance, leads to the increase in a body’s degree of warmth and a gas’s expansion as it accumulates. Black was still in the grip of the Aristotelian conception of substance as necessarily connected with a certain phase: to change the phase is to change the substance.

Lavoisier took over this conception of substances despite his reprimands concerning the Aristotelian elements (Lavoisier 1789: xxiii). He apparently retained one of the Aristotelian elements, listing caloric as the "element of heat or fire" (Lavoisier 1789: 175). This element "becomes fixed in bodies . . . [and] acts upon them with a repulsive force, from which, or from its accumulation in bodies to a greater or lesser degree, the transformation of solids into fluids, and of fluids to aeriform elasticity, is entirely owing" (1789: 185). He goes on to define gas as "this aeriform state of bodies produced by a sufficient accumulation of caloric". Air, on the other hand, is analysed into components. During the calcination of mercury, "air is decomposed, and the base of its respirable part is fixed and combined with the mercury . . . But . . . [a]s the calcination lasts during several days, the disengagement of caloric and light . . . [is] not . . . perceptible" (1789: 38). The base of the respirable part is called oxygen, that of the remainder azote or nitrogen (1789: 51–3). Thus, Lavoisier's element oxygen is the base of oxygen gas and is what combines with caloric to form the compound which is the gas. Similarly, the new element base of hydrogen is not to be confused with the compound hydrogen gas, listed under binary compounds formed with base of hydrogen (1789: 198).

Far from having the same extension, then, the present-day concept of water applies to much that wouldn't have counted as water for Aristotle, namely all the H2O in the solid and gas phases. Whether Aristotle would have counted as water substances in the liquid phase that would now be considered distinct from water I leave open. The additions to the extension are sufficient to contradict the preservation claim about the extension of "water". There was a substantial change of meaning in the concept of a chemical substance to the phase-independent notion, heralded by the recognition of the fixed elemental composition of compounds and the replacement of the concept of heating as the transference of caloric by the thermodynamic conception of a process involving change of energy. The concept received its final form in Gibbs' chemical thermodynamics in the late 1870s. There is no suggestion of incommensurability here. (See H. Sankey, "Kuhn, relativism and realism", ch. 6 of this volume.) Broadly understood, what Aristotle counted as water was a certain kind of substance in the liquid phase (i.e. a quantity of matter which is all water and all liquid), and this is determined by modern criteria of sameness of substance and sameness of phase. This substance concept as used by our forefathers may be defined in terms of concepts now in use, but preservation of reference (fixity of the extension of a predicate expressing the concept) doesn't follow from this; on the contrary.

Mixtures might give pause for thought about what belongs to the extension. Is there any water in the sea? Aristotle, who realised that salt could be obtained from sea water but not from fresh water, and took homogeneity to be a criterion of comprising a single substance, would presumably have denied that there is any water in a quantity of matter when it is sea water, in opposition to the modern view of it as a solution (but see Earley 2005). This problem is exacerbated by the fact that ideas and standards of purity have undergone a radical development over this period.
A more fundamental issue is that the introduction of a technical term allowing talk of preservation of a predicate's extension takes no account of different regimentations appropriate for different eras. In particular, the appropriate form of predicates reflects whether the substances they describe are understood to undergo transformation. Lavoisier's principle of analysis assumes the indestructibility of elements, which was maintained until the discovery of radioactivity and was generally abandoned with the understanding that elements are generated and destroyed in stars. We might think Lavoisier's concept of oxygen is essentially the same as ours, but in his theory it would be regimented with a unary predicate, whereas ours calls for a dyadic predicate applying to a quantity of matter and a time. Some historians argue that the conception of compounds as comprising immutable elements goes back at least to Geoffroy at the beginning of the 18th century (Klein 1994). Aristotle, who took water to be an element, thought it transformable

into other elements. Substance kinds, on his view, whether elemental or not, would therefore be appropriately captured by dyadic predicates expressing the possibility that what is a particular substance kind at one time is not at another. By contrast, the Stoic view, which allows the elements to be preserved in a mixt, calls for a monadic predicate. The Lavoisian view calls for dyadic predicates for compounds and unary predicates for elements. But since the status of water changed from element to compound on his scheme, it would still be expressed by a dyadic predicate applying to a quantity of matter and a time. Before Cavendish's synthesis of water from hydrogen and oxygen in 1784, showing it is not an element, a unary predicate might have been appropriate. Clearly, the claim of what Chang (2003) calls preservative realism, that the "reference" of an English expression like "water" is an extension that has remained unchanged since time immemorial, runs roughshod over significant differences that a more sensitive regimentation would reflect. The assumption that there are single things which are the extensions of expressions purporting to describe substances throughout history is not easily justified and hardly a tightening up of the discussion with a more precise claim.
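The contrast over regimentation discussed above can be made explicit schematically (the notation is illustrative, not Needham's own). A monadic predicate

\[
W(x) \quad (\text{``}x\text{ is water''})
\]

treats being water as a permanent feature of a quantity of matter, whereas a dyadic predicate

\[
W(x, t) \quad (\text{``}x\text{ is water at time }t\text{''})
\]

leaves room for transformation. The Lavoisian indestructibility of an element E then amounts to the claim \(\forall x\,\forall t\,\forall t'\,(E(x,t) \rightarrow E(x,t'))\), which in effect collapses the dyadic regimentation back into the monadic one.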

3 Daltonian atomism

Modern chemistry is largely a science of molecular species – not only full-blown molecules, whether enduring like methane molecules or more transient like water molecules in the liquid, but also reaction intermediates, free radicals, molecular ions like OH–, and perhaps even neutral ionic and covalent networks such as potassium sodium tartrate and silicon dioxide crystals involving atoms bonded in macroscopic-sized structures. Antirealists are often portrayed as doubting the existence of such micro-entities. Realists may oppose this stance on grounds of the ontological commitments of well-confirmed and generally accepted contemporary theories, but not in terms of the preservation of successful reference from earlier times.

The notion of a molecule is a relatively recent innovation. It formed no part of the ancient schemes of Democritus and Epicurus, Boyle's speculations in the 17th century or Dalton's putative explanation of the law of definite proportions.1 Avogadro, a contemporary of Dalton's, might be credited with introducing the notion when in 1811 he put forward the "equal numbers" hypothesis that bears his name – equal volumes of all gases at the same temperature and pressure contain the same number of particles. Putting forward the proposal hardly counted as introducing the notion, however, since chemists at the time took the evidence to count decisively against the hypothesis.

First, there was a problem of densities. Oxygen is denser than steam. But a particle of steam must be heavier than one of oxygen, it was reasoned, since it contains oxygen, and so there must be a smaller number of steam particles than oxygen particles in a unit volume. Second, reacting volumes seemed to contradict the hypothesis. Two volumes of hydrogen, for example, combine with one of oxygen to give two of steam. Gaseous elements were assumed to be simple, so that Berzelius represented the combination of two volumes of hydrogen with one of oxygen to give one of water by

2H + O → H2O,

implying that a volume of steam (a compound rather than a simple substance) comprises half as many particles as the same volume of hydrogen. Avogadro proposed a different interpretation, consonant with his hypothesis that hydrogen and oxygen comprise diatomic molecules. His view of the reaction can be represented in modified Berzelius notation by

2H2 + O2 → 2H2O.

This may look like the better proposal. But Avogadro had no conception of molecular structure leading to a way of independently confirming the diatomic structure of hydrogen and oxygen, and our contemporary knowledge shouldn’t lead us to think that Avogadro’s proposal is anything but an ad hoc manoeuvre to save his hypothesis. If two volumes of hydrogen combined with one of oxygen to yield n volumes of steam, Avogadro would have represented the reaction by

2Hn + On → nH2O,

whatever the value of n (Nash 1957; Frické 1976).

Berzelius had a reason to reject the idea of diatomic molecules. He believed like atoms would repel one another. On his electrical theory of chemical combination, quantities of different elements are held together in a compound by a balancing of the negative charge of one kind of matter with the positive charge of another. Dalton too believed that like atoms repel one another, although for a different reason: caloric associated with atoms was somehow so structured as to prevent the close approach of like atoms. This was how he explained his law of partial pressures. He saw no reason to abandon his assumption that water is HO. But in the absence of an independent method of determining a compound's formula, which Dalton didn't have, there was no way of determining atomic weights from gravimetric elemental composition.

Dumas was not persuaded by these views and sought to defend Avogadro's hypothesis. But after prolonged experimentation on the vapour densities of the then-known volatile elements, the perplexing results reluctantly led him in 1832 to the conclusion that "gases, even when they are simple, do not contain in equal volumes the same number of atoms" (quoted by Frické 1976: 294). Subsequent experimental investigations only compounded the problems.

Avogadro's hypothesis finally came to be accepted when Cannizzaro persuaded delegates at the famous meeting at Karlsruhe in 1860 of the viability of his method for determining atomic weights and chemical formulas from molecular weights calculated by application of the hypothesis to vapour densities. He accepted St Clair Deville's explanation of the anomalous variable vapour densities as due to dissociation – apparently confirmed by associated colour changes, for example when colourless dinitrogen tetroxide becomes darker brown as the temperature increases, suggesting dissociation into brown nitrogen dioxide, and then colourless as the temperature increases further:

N2O4 ⇌ 2NO2 ⇌ 2NO + O2
(colourless)  (brown)  (colourless)

Increasing the number of particles due to dissociation increases the volume, in accordance with Avogadro's hypothesis, decreasing the density. Vapour density is a dimensionless magnitude defined as the ratio of the weight of a volume of the gas to the weight of an equal volume of a standard gas, for example hydrogen, at the same temperature and pressure. By Avogadro's hypothesis, these volumes contain the same number of molecules, and the ratio is that of the weight of a molecule of the gas to one of hydrogen – and, assuming further with Avogadro that hydrogen is diatomic, to twice the atomic weight of hydrogen. Conventionally assigning hydrogen an atomic weight of one, dimensionless molecular weights relative to hydrogen are therefore twice the vapour density. Cannizzaro proceeded to compare vapour densities for several compounds of a given element and, from their molecular weights and percentage elemental compositions, found the weight of the element per gram molecule2 of each compound. This must be some integral multiple of the atomic weight, which is probably the least of these weights. From several such tables for different elements with compounds in common, the formulas for the compounds can be determined. For example, the hydrogen data shows that the water molecule contains two atoms of hydrogen and the oxygen data that it contains one of oxygen, so its formula is H2O.

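A minimal sketch of Cannizzaro's procedure for the hydrogen data may help here (the figures are illustrative round numbers chosen for this sketch, not Cannizzaro's own tables; vapour densities are taken relative to hydrogen gas):

```python
# Sketch of Cannizzaro's method: from vapour densities and percentage
# compositions to an atomic weight and atom counts. Illustrative data only.

compounds = {
    # name: (vapour density relative to hydrogen gas, weight fraction of hydrogen)
    "water": (9.0, 0.1119),
    "hydrogen chloride": (18.23, 0.0276),
    "ammonia": (8.5, 0.1776),
}

# Avogadro's hypothesis: molecular weight (with H = 1) is twice the vapour density.
weights_of_h = {}
for name, (vapour_density, h_fraction) in compounds.items():
    molecular_weight = 2 * vapour_density
    # Weight of hydrogen per "gram molecule" of the compound:
    weights_of_h[name] = molecular_weight * h_fraction

# The atomic weight is probably the least of these weights ...
atomic_weight_h = min(weights_of_h.values())

# ... and each compound contains an integral multiple of it.
for name, w in weights_of_h.items():
    print(f"{name}: {w:.2f} of hydrogen -> {round(w / atomic_weight_h)} atom(s)")
print(f"inferred atomic weight of hydrogen: {atomic_weight_h:.2f}")
```

Run as written, the sketch assigns two hydrogens to water, one to hydrogen chloride and three to ammonia; a corresponding table for oxygen compounds would, in the same way, assign one oxygen to water.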
However, structure theory had already resolved the problem of the formula of water before Cannizzaro's deployment of Avogadro's hypothesis in his 1858 paper. Around 1850, Williamson had determined alcohols and ethers to be of the water type, requiring that water have two equivalents of hydrogen which could be substituted by alkyl groups. This imposed ethyl alcohol's structure C2H5OH on the compositional formula C2H6O, with the substitution of one equivalent of hydrogen with one of the ethyl radical. Dimethyl ether, with the same compositional formula, was distinguished by the structural formula (CH3)2O, showing it to be derived from water by the replacement of two equivalents of hydrogen by two methyl radicals (Duhem 1902: chapter 5; Rocke 1984: 215–223).

Nevertheless, despite the undoubted success of structure theory, the status of molecules was highly controversial during the 19th century. One reason was nicely articulated by Duhem (1892, 1902), who showed how the developments in assigning structural formulas to organic substances could be understood without any commitment to the thesis that the formulas represent microscopic molecules of which these substances are comprised. In other words, the evidence supporting structural theory was neutral on the molecular hypothesis (Needham 1998, 2004a, 2004b, 2008). Chalmers (2005, 2008, 2009), approaching the subject from the historical development of atomic speculation, comes to essentially the same conclusion. The underlying ratios determined by Cannizzaro's method didn't argue otherwise.3

The experimental justification of an underlying realm of molecules came at the beginning of the next century, and then only in the most general terms. Perrin's observations confirmed only the crude, central tenets of the kinetic theory (Chalmers 2011) and provided little information on the nature of molecules. Detailed knowledge of microstructure was to come with the development of theoretical resources and experimental techniques, but it is salient to point out that much remains a mystery. In the words of one expert, "[o]f all known liquids, water is probably the most studied and least understood" (Franks 1972: 18) and, more recently, "[i]t is a common fallacy that new interpretations reported in the immediate past are necessarily more correct than those published, say 50 years ago" (Franks 2000: 215). What is known about water, however, is that it is not molecular.4 Even assuming water molecules can be distinguished as entities held together by internal hydrogen–oxygen bonds from entities held together by intermolecular links in the hydrogen-bonded framework, bonding of both kinds is short lived – the former because of the continual dissociation into positive hydronium and negative hydroxyl ions5 and rapid reassociation into water molecules, and the latter because of thermal motion. So even if modern science has confirmed the atomists' interpretation of the structural formulas of some substances like normal butane, CH3CH2CH2CH3, as consisting of enduring molecules, it has also shown that the analogous interpretation of water as comprising enduring H2O molecules is wrong and that non-molecularity in various forms is a feature of many substances.
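The dissociation and reassociation just mentioned is the familiar autoprotolysis equilibrium, which in the now-standard notation (added here for illustration) reads

2H2O ⇌ H3O+ + OH–

with the equilibrium lying far to the left at ordinary temperatures.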
The old argument that atomic notions were no more than a mental crutch to facilitate thinking about chemical structure had to be reappraised after the investigations of Brownian motion at the beginning of the 20th century when other general evidence for the existence of micro-particles accumulated, and the quest to understand their chemical nature began. As Lewis was soon to point out, it seemed that electrons must somehow be involved in chemical bonding. But his speculations were not easily reconciled with the physics. The crudest conflicts were overcome with the application of quantum mechanics to the problem of bonding within quantum chemistry, but the nature of the approximate solutions to Schrödinger’s equation gave rise to other issues.


From the outset, theoreticians were divided between those, like Pauling, making generous use of chemical concepts without quantum mechanical foundation and treating atoms as parts of molecules in the development of quantum chemistry and those, like Mulliken, inclined to follow a line more strictly dictated by quantum mechanical principles (Hendry 2012).

There remains a large and deeply problematic gulf between the familiar idea of molecules as structured entities containing functional groups (such as the hydroxyl group –OH common to alcohols) with characteristic physical and chemical properties conferred by "orbitals localized to a group, perhaps delocalized within that group" (Hoffman 1971: 1) and the fully delocalised models on which calculations are based with no trace of the localised functional groups. As Hoffman goes on to say, "[m]any theoreticians . . . have given up on trying to build a bridge of understanding between their computational results and the current ways of thinking of experimentalists", although he himself has sought a compromise by looking for interpretations "starting from a semilocalized view". Understanding of molecules as structures comprising localised functional groups is, after all, central to the analysis sought in the interpretation of spectra, the planning and execution of synthesis of new substances, all aspects of the biochemical understanding of cell biology – for example the mechanisms of metabolism and the way DNA governs the synthesis of proteins – and so on; in short, to the successful enterprise of chemistry.

Unfortunately, limitations of space preclude pursuing these issues in any detail. But regarding the theme with which this section began, a recent claim to "return to Dalton's notion of an atom" (Bader and Matta 2013: 256) in Bader's theory of atoms in molecules is questionable. Whatever the merits of this theory, what Bader counts as atoms depends on how the electron density falls in particular molecules, which is not transferable (Popelier 2000). A carbon atom distinguished by this method in methane is not the same as a carbon atom so distinguished in any other compound and provides no basis for a compositional interpretation of chemical species in terms of atomic units.

4 Conclusion

Few theoretical insights have survived in modern chemistry from its long history – disturbingly few for realists seeking continuity of reference. Lack of continuity of reference is hardly a problem for the more circumspect realism that insists on a world independent of our thought and accessible, in part at least, to scientific investigation in the quest for knowledge. The perceived threat from the pessimistic induction is better countered along the lines of the recent serious questioning of its validity (Mizrahi 2013). Negative existential claims on the basis of increasingly accurate evidence are as much part and parcel of a natural realist stance as the evidence-guided affirmation of ontological commitments. Continuity in the development of ideas and concepts can be traced in this spirit without fixing reference. Concepts of sameness of, and comprising a single, substance were recognised by Aristotle and have been applied and developed since. Other concepts are similarly better treated as evolving than in terms of preservation of reference. Pauling strove to maintain the Daltonian idea of compositionality based on a characterisation of atoms in terms of transferable properties, but despite Bader's recent claims, quantum chemistry seems to have abandoned the idea.

Chemical bonds would be a natural subject to take up in a continuation of this discussion elsewhere (Hendry 2008; Weisberg 2008). The claim at the beginning that transformations stand alongside substances as what chemistry deals with would suggest that processes should figure in chemical ontology (for two discussions, see Stein 2004; Needham 2013). But there has been no space to develop a fuller treatment incorporating these themes.


Notes

1 Hooykaas (1949: 71) suggests medieval alchemy's "particle which is a mechanical combination of very strong coherence . . . foreshadows our modern chemical molecule". But tenuous analogies with older speculative ideas lacking experimental justification do not count against the claim.
2 That is, the number of grams given by the molecular weight. The gram molecular weights of all gases contain the same number of molecules, called the Avogadro number (c. 6 × 10²³), and by Avogadro's hypothesis have the same volume (22.4 l at N.T.P.).
3 Duhem understands what he calls the rule of Avogadro and Ampère to be the principle that "The chemical formula of various compound substances are fixed in such a way that the molecular masses of these substances, brought to the state of a perfect gas, occupy the same volume for the same conditions of temperature and pressure" (1902: 81 [2002: 52]). "Molecular mass" is defined in terms of the "proportional numbers" for the constituent elements – 14 for nitrogen, 16 for oxygen and 1 for hydrogen. Thus, the formula NO3H for nitric acid is interpreted so that "When combining 14 grams of nitrogen, 3 × 16 = 48 grams of oxygen and 1 gram of hydrogen, 14 + 48 + 1 = 63 grams of nitric acid are obtained, and 63 grams is then said to be the molecular mass of nitric acid" (p. 80 [p. 51]).
4 Except in the gas phase at temperatures sufficiently high that all dimers have dissociated but lower than that at which dissociation into hydrogen and oxygen sets in.
5 The reality is far more complicated, with protons and hydroxyl ions bound to more than just one water molecule.

References

Bader, R. and Matta, C. (2013) "Atoms in Molecules as Non-Overlapping, Bounded, Space-Filling Open Quantum Systems," Foundations of Chemistry 15, 253–276.
Chalmers, A. (1993) "The Lack of Excellency of Boyle's Mechanical Philosophy," Studies in History and Philosophy of Science 24, 541–564.
——— (2005) "The Status of Dalton's Atomic Theory," The Rutherford Journal 1. URL: www.rutherfordjournal.org (online journal).
——— (2008) "Atomism and Aether in Nineteenth-Century Physical Science," Foundations of Chemistry 10, 157–166.
——— (2009) The Scientist's Atom and the Philosopher's Stone: How Science Succeeded and Philosophy Failed to Gain Knowledge of Atoms, Dordrecht: Springer.
——— (2011) "Drawing Philosophical Lessons from Perrin's Experiments on Brownian Motion: A Response to van Fraassen," British Journal for the Philosophy of Science 62, 711–732.
Chang, H. (2003) "Preservative Realism and Its Discontents: Revisiting Caloric," Philosophy of Science 70, 902–912.
Daub, E. E. (1976) "Gibbs' Phase Rule: A Centenary Retrospect," Journal of Chemical Education 53, 747–751.
Duhem, P. (1892) "Notation atomique et hypothèses atomistiques," Revue des questions scientifiques 31, 391–457.
——— (1902) Le mixte et la combinaison chimique: Essai sur l'évolution d'une idée, Paris: C. Naud.
——— (2002) Mixture and Chemical Combination, and Related Essays, translated and edited by Paul Needham, Dordrecht: Kluwer.
Earley, J. E. (2005) "Why There Is No Salt in the Sea," Foundations of Chemistry 7, 85–102.
Franks, F. (ed.) (1972) Water: A Comprehensive Treatise, Vol. 1: The Physics and Physical Chemistry of Water, New York: Plenum Press.
——— (2000) Water: A Matrix of Life (2nd ed.), Cambridge: Royal Society of Chemistry.
Frické, M. (1976) "The Rejection of Avogadro's Hypotheses," in C. Howson (ed.), Method and Appraisal in the Physical Sciences: The Critical Background to Modern Science, 1800–1905, Cambridge: Cambridge University Press, pp. 277–307.
Geoffroy, E. F. ([1996] 1718) "Table of Different Relations Observed in Chemistry between Different Substances," Science in Context 9, 313–320.
Gibbs, J. W. (1876–1878) "On the Equilibrium of Heterogeneous Substances," Transactions of the Connecticut Academy of Arts and Sciences 3, 108–248, 343–520.
Hendry, R. F. (2008) "Two Conceptions of the Chemical Bond," Philosophy of Science 75, 909–920.
——— (2010) "Ontological Reduction and Molecular Structure," Studies in History and Philosophy of Modern Physics 41, 183–191.
——— (2012) "The Chemical Bond," in R. F. Hendry, P. Needham and A. J. Woody (eds.), Handbook of the Philosophy of Science, Vol. 6: Philosophy of Chemistry, Amsterdam: Elsevier, pp. 293–307.
Hoffman, R. (1971) "Interaction of Orbitals through Space and through Bonds," Accounts of Chemical Research 4, 1–9.
Hooykaas, R. (1949) "The Experimental Origin of Chemical Atomic and Molecular Theory before Boyle," Chymia 2, 65–80.
Klein, U. (1994) "Origin of the Concept of Chemical Compound," Science in Context 7, 163–204.
Kurnakow, N. S. (1914) "Verbindung und chemisches Individuum," Zeitschrift für anorganische und allgemeine Chemie 88, 109–127.
Lavoisier, A. ([1965] 1789) Elements of Chemistry, New York: Dover reprint.
Lewis, G. N. (1916) "The Atom and the Molecule," Journal of the American Chemical Society 38, 762–785.
Mizrahi, M. (2013) "The Pessimistic Induction: A Bad Argument Gone Too Far," Synthese 190, 3209–3226.
Nash, L. K. (1957) "The Atomic Molecular Theory," in J. B. Conant and L. K. Nash (eds.), Harvard Case Histories in Experimental Science, Cambridge: Harvard University Press, pp. 217–321.
Needham, P. (1998) "Duhem's Physicalism," Studies in History and Philosophy of Science 29, 33–62.
——— (2004a) "Has Daltonian Atomism Provided Chemistry with any Explanations?" Philosophy of Science 71, 1038–1047.
——— (2004b) "When Did Atoms Begin to Do Any Explanatory Work in Chemistry?" International Studies in the Philosophy of Science 8, 199–219.
——— (2008) "Resisting Chemical Atomism: Duhem's Argument," Philosophy of Science 75, 921–931.
——— (2009) "An Aristotelian Theory of Chemical Substance," Logical Analysis and History of Philosophy 12, 149–164.
——— (2013) "Process and Change: From a Thermodynamic Perspective," British Journal for the Philosophy of Science 64, 395–422.
——— (2014) "The Source of Chemical Bonding," Studies in History and Philosophy of Science 45C, 1–13.
Paneth, F. A. ([1962] 1931) "The Epistemological Status of the Chemical Concept of Element," British Journal for the Philosophy of Science 13, 1–14, 144–160.
Popelier, P. L. A. (2000) Atoms in Molecules: An Introduction, Harlow: Pearson Education.
Putnam, H. (1975) Mind, Language and Reality (Philosophical Papers, vol. 2), Cambridge: Cambridge University Press.
Rocke, A. J. (1984) Chemical Atomism in the Nineteenth Century: From Dalton to Cannizzaro, Columbus: Ohio State University Press.
Sengers, J. L. (2002) How Fluids Unmix: Discoveries by the School of Van der Waals and Kamerlingh Onnes, Amsterdam: Koninklijke Nederlandse Akademie van Wetenschappen.
Stein, R. L. (2004) "Towards a Process Philosophy of Chemistry," Hyle 10, 5–22.
Weisberg, M. (2008) "Challenges to the Structural Conception of Bonding," Philosophy of Science 75, 932–946.
——— (2012) "Chemical Modeling," in R. F. Hendry, P. Needham and A. J. Woody (eds.), Handbook of the Philosophy of Science, Vol. 6: Philosophy of Chemistry, Amsterdam: Elsevier, pp. 35–63.
——— (2013) Simulation and Similarity: Using Models to Understand the World, New York: Oxford University Press.

28 REALISM ABOUT COGNITIVE SCIENCE

Mark Sprevak

1 Introduction

This chapter is about a puzzle. Realism about X is often glossed as the claim that Xs are mind independent: Xs exist and have their nature independent of human beliefs, interests, attitudes, and other mental states. Xs are out there, getting on with it, independently of human minds. How then should one understand realism about the mind? Having an answer to this is important if one wants to be a realist about cognitive science. The subject matter of cognitive science includes mental states, mental processes, and mental capacities. None of these are mind independent. But how then can one be a realist about them? This is our puzzle.

My solution will be to distinguish between two types of mind dependence in cognitive science. One type is trivial and follows from the nature of the subject matter. The other type is non-trivial, and it is the true point of contention between a realist and an anti-realist about cognitive science. My aim in this chapter is to identify that point of contention.

In section 2, I describe different varieties of realism that one might adopt about cognitive science. In section 3, I argue that realism that asserts mind independence has a special role to play in cognitive science. In section 4, I present the puzzle about this variety of realism. In section 5, I examine three solutions to the puzzle. Each draws a distinction between a trivial and a non-trivial form of mind dependence in a different way. My favoured proposal derives from the observation that theories in cognitive science aim to explain mental phenomena in terms of structured complexes – for example, in terms of computations, mechanisms, networks, or causal chains. I claim that realism in cognitive science should be understood as a claim about the individuals and relations that compose those structures, not about the entire complexes taken as wholes. Mind dependence about the wholes (hypothesised to realise, constitute, or otherwise compose mental processes) is trivial. Mind dependence about the parts and relations that make up those wholes is not. This is the true point of disagreement between realists and anti-realists about cognitive science.

2 Kinds of realism

Realism is not a single claim but a range of possible claims that could be made about a range of subject matters. One might be a realist about one type of entity or subject matter and an anti-realist about another. One might be a realist about electrons but an anti-realist about beauty marks. 'Local' versions of realism should also be distinguished from 'global' versions. A global version of realism would assert realism about all or most subject matters of the mature sciences. I will not consider global versions of realism here. My concern is with realist claims made about entities specifically in cognitive science.

Within this domain, a realist may make a range of claims. Realist/anti-realist disputes take on a different character depending on which claim is at issue. In this section, I highlight six possible varieties of realist claim: claims regarding existence of an entity, the nature of that entity, referential semantics for the discourse that purports to talk about that entity, truth or approximate truth of that discourse, evidence for truth of that discourse, and mind independence of the entity. A realist may assert or deny these claims in various combinations.

First, existence. On this view, realism about Xs commits one to the existence of Xs. Fodor (1975) is a realist in this sense about beliefs. The relevant kind of anti-realism would be eliminativism. Churchland (1981) holds this position about beliefs.

Second, nature of the entities. Assuming that Xs exist, what sort of things are Xs? This second variety of realism holds that Xs are discrete individuals. Fodor (1975) is a realist in this sense about beliefs. Beliefs are discrete individuals that occur and re-occur inside someone's head. The relevant kind of anti-realism would take a deflationary view of the relevant entity. Dennett (1991b) argues that beliefs are not discrete individuals but rather amorphous and hard-to-count patterns in and around agents that observers may exploit for predictive or explanatory gain.

Third, referential semantics. If one is a realist about Xs, then the relevant part of the discourse that purports to talk about Xs should be understood as having a referential semantics. Fodor (1975) is a realist in this sense too about beliefs. If we say, 'Abby has the belief that beer is in the fridge', we refer to some thing that Abby has. According to Fodor, that thing is a tokening of a sentence in the language of thought inside her head. The relevant form of anti-realism would be a non-referential semantics for the relevant discourse. Ryle (1949) advocates this kind of anti-realism about beliefs. When we say, 'Abby has the belief that beer is in the fridge', we do not refer to any thing that Abby has. Instead, we intend to convey to our listeners a warrant to make inferences about, among other things, Abby's behaviour.

Fourth, truth. If one is a realist about Xs, then the relevant part of the discourse aims to tell the truth. Block (2007) advocates this form of realism about phenomenal consciousness. Experiments to study phenomenal consciousness involve reports from human subjects about the occurrence of subjective phenomenal aspects of their experience (reporting, for example, that they experience red). We should, according to Block, understand these reports as aiming to tell the truth about those experiences. In contrast, Dennett (1991a) argues that we should be fictionalists about phenomenal consciousness. Reports of experiencing red should be understood not as aiming to tell the truth about the occurrence of phenomenal aspects of experience but as a roundabout way for the subject to express that her cognitive system has detected a highly disjunctive physical property (such as redness). A realist holds that the discourse aims at telling the truth.
An anti-realist denies the truth-seeking character of the discourse but may maintain that the talk has other virtues (e.g. pragmatic virtues).

Fifth, evidence. If one is a realist about Xs, then one holds that we have justification for the truth (or approximate truth) of the relevant part of the discourse. Block (2007) is a realist in this sense too about phenomenal consciousness. Subjects' reports of conscious experience not only aim to tell the truth about instantiations of phenomenal character, we also (normally) have justification that they are true. Significantly, the justification holds even under unusual presentational conditions, such as when stimuli are flashed briefly to subjects in Sperling's (1960) experiments (a grid with characters is briefly presented followed by a visual mask). The relevant form of anti-realism would involve some degree of epistemic caution about the relevant claims. Irvine (2012) claims that we lack justification for believing the reports of subjects about phenomenal aspects of their experiences in the context of Sperling's experiments.

Finally, mind independence. Like the second claim, this concerns the nature of the entities. However, the question here is not about their nature as discrete individuals but about their degree of mind dependence: does that entity depend, for its existence or nature, on minds? All our knowledge of the world is mediated to some extent by our minds. We cannot see the world untouched by human conceptual, motivational, and other cognitive systems. We may attempt to counteract the effects of our cognitive makeup by taking into account its hypothesised nature. But seeing the world 'as it is', without some contribution from the human mind, is impossible. This invites a question: Which parts of our knowledge represent entities and properties that are really out there and which are (partial) constructions of our minds? Some entities appear to exist and have the properties that we attribute to them independently of the way we think of them. Perhaps some fundamental particles in physics, e.g. electrons, are like this. If our minds were not to exist, or if they were to have a radically different nature, electrons would continue to exist and have unchanged properties. Other entities appear to be partial constructions of our minds. Beauty marks may be an example of these. Whether a specific skin colouration is a beauty mark depends on how that colouration strikes, or would strike, a mind like ours and fits with our visual preferences – whether that patch looks beautiful to us. If human minds were not to exist, or if they were to have a different makeup, the distribution of beauty marks in the world would be different.

One might ask the realism/anti-realism question about the entities of cognitive science. For example, among those entities are neural computations. Neural computations are invoked by cognitive science to explain human mental processes and mental capacities. Specific mental processes – for example, specific kinds of decision making – are explained by saying that the brain of the subject concerned performs specific neural computations. Cognitive science invokes neural computations to explain mental life. Should one be a realist or an anti-realist about these neural computations?

Fodor (1980) is an example of someone who is a realist about these neural computations. Suppose that Abby's brain performs a specific computation which realises her decision-making process and determines, on a specific occasion, whether Abby goes to the fridge to get a beer. According to Fodor, whether Abby's brain performs this computation, or any computation at all, has nothing to do with how we view Abby. Whether Abby's brain performs this computation is determined by facts about Abby and her brain. Burge (1986) is another realist about neural computation, but he holds that the neural computation depends on a broader base of mind-independent facts: it depends not just on Abby's brain but also on her causal relationship to her environment. Despite their disagreement, both Fodor and Burge agree that neural computations are really out there; they are not a grouping that is dependent on how we human agents view Abby – a grouping that is somehow significant to us but not reflective of any objective distinction in the world.
The aim of computational cognitive science is to discover and describe these objective distinctions in the world carved out by neural computations. One may get this description right or wrong, but one does so independently of how human agents conceive the world.

In contrast, Putnam (1988) and Searle (1992) argue for anti-realism about neural computation. According to them, neither Abby's brain nor her brain plus her relation to her environment determine whether her brain performs a specific computation. Absent consideration of how we view Abby, there is no fact about whether Abby's brain performs one computation rather than another, or whether it performs any computation at all. Neural computations are observer relative. If human minds were not to exist, or if they were to have a different makeup, the distribution of neural computations would be different. Neural computations are more like beauty marks than electrons: they are a construction that reflects the specific way in which humans are disposed to conceive of the world, not objective features waiting 'out there' to be discovered.

For electrons and beauty marks, the question about mind dependence can be posed in a relatively straightforward manner. The worry is that the same cannot be said for neural computations. Neural computations are hypothesised to be connected to mental life. They realise or otherwise constitute aspects of our mental life. This makes realism about neural computations hard to understand as a coherent possibility. If neural computations realise mental life, how can neural computations be mind independent? Fodor and Burge cannot believe that Abby's neural computations are entirely mind independent. The entity in question – the neural computation that underlies Abby's decision making about the beer – depends on at least one mind: Abby's own. If Abby's mind were not to exist or to have a different nature, that neural computation would differ. Similarly, Putnam and Searle cannot merely hold that Abby's neural computations are in some way or other mind dependent: that would be trivially true. No one thinks that her neural computations can exist, or have their nature, independently of how things go in Abby's mental life. So both the realist and the anti-realist must agree that Abby's neural computations are mind dependent in some way or other. The realist/anti-realist dispute cannot therefore be about mind dependence simpliciter. Something else must be going on. Identifying what this is – what is at stake in this realist/anti-realist dispute in cognitive science – is our puzzle.

Earlier in this section we saw that a realist about X need not endorse a mind independence claim about X. We saw five alternative ways to be a realist about cognitive science. This suggests a quick way out of our puzzle. If stating mind independence is a problem for cognitive science, why not simply abandon this form of realism and pursue some other form of realism? There are at least five other options to choose from. In the next section, I argue that while there is nothing wrong with these alternative forms of realism about cognitive science, this strategy would have a significant cost. Cognitive science needs the mind-independence form of realism to fulfil one of its wider ambitions: the ambition to naturalise the mind.

3 Why care about mind independence?

The world contains at least two kinds of phenomenon: mental phenomena – involving entities like beliefs, sensations, ideas, concepts, thought processes, judgements, and so on – and physical phenomena – involving entities like bodies, brains, atoms, molecules, cells, and so on. The two appear to be related: changes in one correlate with changes in the other. But the exact nature of the relationship is unclear. In particular, it is unclear whether mental phenomena are sui generis entities or whether they somehow 'arise from' the physical. Mental phenomena are puzzling not just because they are complex but because we do not know how they relate to the physical world.

Some theories in cognitive science aim to bridge this gap. Those theories reductively pair specific mental phenomena with non-mental phenomena. The non-mental phenomena often have specific properties: the states perform computations, represent, process information, carry error signals, and so on. Certain instances of decision making, for example, are paired with certain neural computations (Schultz, Dayan, and Montague 1997; Gold and Shadlen 2001; Gold and Shadlen 2007; Rangel, Camerer, and Montague 2008).

Such theories propose a relationship between the mental and the non-mental that goes beyond that of mere correlation. The precise details differ between cases, but two general observations can be made. First, the association between the mental and non-mental has a non-trivial modal extent. The mental and non-mental reliably correlate across a wide range of circumstances including conditions not experimentally tested. Precisely how far this modal dimension extends – across every possible world, across worlds with the same physical laws as ours, across worlds with the same natural laws as ours – is open to question, but we can be sure that the association has a non-trivial modal dimension. The second observation is that the non-mental member of the relationship could substitute for its mental counterpart without change in scientifically relevant effects. For example, the scientifically relevant effects associated with decision making include patterns in behaviour, patterns in error making, how uncertain evidence is weighed, reaction times, and characteristic downstream neural effects. A potential non-mental partner would not only need to co-occur with specific instances of decision making but also to produce those characteristic effects. The drift-diffusion model, for example, aims to provide not just a neural correlate of decision making but also to show that this correlate would produce the characteristic effects associated with decision making regarding reaction times, weighting of evidence, and susceptibility to errors (Gold and Shadlen 2007); a toy version of such a model is sketched at the end of this section.

If a non-mental phenomenon co-occurs with a mental phenomenon across a wide range of modal circumstances and it also generates all the scientifically relevant effects associated with that mental phenomenon, then we are in a position to advance a reductive claim. Rather than hold that the mental phenomenon and non-mental phenomenon are two distinct entities that happen to co-occur, we may reduce one to the other. One might hypothesise that the mental and non-mental entities bear some reductive relation – perhaps identity, realisation, constitution, grounding, or another relation – to each other. For example, one might claim that decision making is a specific neural computation or that decision making is realised by a neural computation or that decision making is grounded by a neural computation. The theories in question identify some kind of reductive base for a mental phenomenon. The details of the reductive relation may differ (identity vs. realisation vs. constitution vs. grounding). But the general idea of finding some non-mental base that is sufficient for the scientifically relevant effects of the mental phenomenon is shared. One pairs a mental phenomenon with a non-mental phenomenon in such a way that the non-mental phenomenon is sufficient for, and somehow produces, the scientifically relevant properties of the mental phenomenon.

Successful reductions of this sort appear to provide a road to naturalising the mind. By 'naturalising' I mean explaining scientifically relevant effects of mental phenomena in non-mental terms: in terms of a subject matter that does not already presuppose mental life. A naturalising explanation is one that takes as its explanandum some scientifically relevant effect of a mental phenomenon (for example, some property of decision making) and gives as its explanans an account that does not refer to or otherwise already presuppose mental life (for example, an explanans exclusively in terms of neural computations, physical inputs, and physical outputs of the brain). Naturalising the mind therefore requires realism about the subject matter of the explanans. One needs to be a realist – in the sense of asserting mind independence – about the entities cited by the explanans. To see why, consider the alternative. Suppose that anti-realism about neural computation is correct. Explaining decision making by appeal to neural computation would in this case not serve to naturalise that mental phenomenon.
Explanation in terms of neural computations would not explain the phenomenon in non-mental terms. It would explain the phenomenon in terms of entities that depend on minds for their existence and nature. Explanation of mental phenomena in terms of neural computation would not be an explanation that does not refer to or already presuppose mental phenomena. It would not, in the sense of 'naturalising' above, be naturalising. One might of course still explain decision making in terms of neural computation. But one should not think that this provides a way to naturalise the mind: one has not shown how decision making arises from non-mental ingredients. Rather, one has offered a non-reductive explanation: an explanation of a mental phenomenon in terms, inter alia, of other mental phenomena. Nothing wrong with this per se. But it does not serve an ambition to naturalise the mind. In order for the naturalising strategy described above to work, we need to be realists – specifically, realists who assert mind independence – about the subject matter of our explanans.

Realism about the subject matter of cognitive science is not a mere idle intellectual posture. Realism of the mind-independence variety is needed for explanations within cognitive science to serve the project of naturalising the mind. It is perfectly possible to pursue cognitive science without any naturalistic ambition. But giving up that ambition is not to be taken lightly. Consider what we would miss out on: understanding how the mind arises from non-mental ingredients. Rather than abandon this variety of realism, let us instead examine what the problem with it is and how to solve it.
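To give a concrete feel for the kind of reductive base discussed above, here is a minimal, editorial sketch of a generic drift-diffusion process. It is not Gold and Shadlen's published model; the parameter values and setup are hypothetical simplifications used only to show how little machinery is needed.

```python
import numpy as np

# Minimal drift-diffusion sketch: noisy evidence accumulates until it hits
# an upper ('correct') or lower ('error') bound; the hitting time is the
# predicted reaction time. Parameter values are illustrative only.
rng = np.random.default_rng(0)

def trial(drift=0.3, bound=1.0, noise=1.0, dt=0.001, max_t=5.0):
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x >= bound  # reaction time, whether the 'correct' bound was hit

results = [trial() for _ in range(2000)]
rts = np.array([rt for rt, _ in results])
correct = np.array([c for _, c in results])

print(f"accuracy: {correct.mean():.2f}")
print(f"mean RT (correct trials): {rts[correct].mean():.2f} s")
print(f"mean RT (error trials):   {rts[~correct].mean():.2f} s")
```

The relevant point is that the explanans here cites only non-mental ingredients – an accumulator variable, a noise term, a threshold – yet the process generates the scientifically relevant effects of decision making: choice accuracy, reaction-time distributions, and their sensitivity to evidence strength.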

4 The puzzle about mind independence Reductive theories in cognitive science aim to pair mental phenomena with non-mental phe- nomena. A reduction of this kind appears to open the door to naturalising the mind. However, this can only work if one can be a realist – in the sense of asserting mind independence – about the non-mental side of the relation. The problem is that the preceding two claims – (i) mental phenomena reduce to non-mental phenomena; (ii) the non-mental side of the relation is mind independent – are incompatible. This is our problem. Let us examine it in more detail. Consider what happens if the reductive relation in question is identity. Assume that some instance of human decision making is a specific neural computation. In order to use this to offer a naturalistic explanation of decision making, we would need to be realists about this neural com- putation: we would need to assert that it is, in an appropriate sense, mind independent. But how could the neural computation be mind independent? If decision making is a neural computation, then that neural computation must be mind dependent. Being identical to a mental phenome- non surely entails mind dependence. If human minds were not to exist, or if they were to have a different makeup, the existence and nature of the neural computation would be different. What stronger reason could there be for thinking that X is mind dependent than X being identical to a mental phenomenon? But if the neural computation is mind dependent, then anti-realism is true and realism is false. The reduction of decision making to a specific neural computation seems to preclude realism about that neural computation. What if the reductive relation in question were realisation? Identity is a symmetric relation: if X is identical to Y, then Y is identical to X. Perhaps it is the symmetry of this reductive rela- tion that is the source of the problem. Realisation is asymmetric: if X realises Y, Y does not real- ise X. Would an asymmetric relation allow us to avoid mind dependence of one side of the relation infecting the other side? Unfortunately, no. The reason is that in spite of realisation being an asymmetric relation the reductive base still cannot occur independently of its mental phe- nomenon, which is what the realist requires. Suppose that an instance of decision making is realised by a neural computation. If it is realised by that neural computation, then that neural computation is sufficient for that instance of decision making to occur.1 The occur- rence of that neural computation is sufficient for the occurrence of that instance of decision making; otherwise, it would be unclear why what we found was a reductive base at all. So the following conditional holds: If this neural computation were to occur, then the relevant decision-making process would occur too. Moreover, this conditional holds over a non-trivial range of modal circumstances (the precise extent is determined by the realisation relation in question). The neural computation is tied to the mental process in a modally rich way such that the computation cannot occur without the relevant decision making also occurring. But then the neural computation is not mind independent. The neural computation cannot occur

362 Realism about cognitive science independently of minds. It cannot occur without the associated mental process also occur- ring. The reductive base is not mind independent. Let us put the same point schematically. Suppose that a reductive base, B, realises some mental process, M. B is tied in the modally rich way entailed by the realisation relation to M. B cannot occur (over some non-trivial range of modal circumstances) without M also occurring. But this means that B is not mind independent. B cannot occur without M and hence B cannot occur without specific mental phenomena occurring. If human minds were not to exist, or if they were to have a different makeup (e.g. without M), then the facts about M would be different so the facts about B would be different. The realist’s claim that B is mind independent is flat out incompatible with the claim that B realises M. Other reductive relations – grounding or constitution – suffer from the same problem. The reason is that for any reductive relation, the reductive base should, in some modally rich sense, be sufficient for the mental phenomenon. It should ‘bring about’ that mental phenomenon. The specific content of ‘bringing about’ will be cashed out in different ways by different reductive relations. Irrespective of differences between reductive relations the reductive base must be sufficient for the mental phenomenon – otherwise, why think we have identified a reductive base at all? If the ‘base’ is not sufficient for the mental phenomenon, then we have identified only one ingredient among (possibly many) others associated with the occurrence of that mental phenomenon, and that is no reduction at all. If B is a reductive base of a mental phenomenon, B cannot occur (over some non-trivial range of modal circumstances) without the associated mental phenomenon, M. But then B cannot (to the same modal extent) be mind independent. B is tied to M via the web of associations stipulated by the reductive relation. If M were not to exist, or if it were to have a different nature, B would not exist or it would have a different nature. B cannot be both a reductive base of M and be mind independent. The puzzle should not be confused with a similar puzzle about mind dependence. That puzzle arises from a worry about trivial causal dependence on minds. Many entities causally depend on minds for their existence and nature: tables, chairs, cities, children. Is realism about those entities thereby undermined (Devitt 1991; Miller 2012; Godfrey-Smith 2016)? Devitt (1991) and Miller (2012) argue that it is not because a realist does not deny causal dependence on minds. Anti- realism, rather than asserting causal dependence, is defined by a ‘further (philosophically inter- esting)’ sense of dependence that goes beyond ‘mundane’ causal dependence on minds (Miller 2012).2 This does not help us with our puzzle. The form of mind dependence at issue for us is not causal dependence. The proposals above do not say that reductive bases cause mental phe- nomena. Removing causal dependence from the field would not help us here. It is the further, non-mundane, ‘constitutive’ mind dependence built into the reductive relation that renders the anti-realist’s claim about cognitive science trivially true. In response to the puzzle, should we then grant the anti-realist an easy ‘win’: concede that we should be anti-realists about the reductive base of mental phenomena in cognitive science, including neural computations? This is not an option we should contemplate. 
If we were to concede to the anti-realist here, anti-realism would spread to other entities outside cognitive science. Atoms and electrons – large collections of them – are among the (likely) reductive bases of human mental life. Collections of atoms and electrons realise (or constitute, ground, etc.) at least some human mental phenomena. The atoms and electrons occupying the space where you sit now are sufficient to produce (some aspects of) your mental life. If one were to replicate these atoms and electrons, one would replicate those mental phenomena. This conditional holds true over a non-trivial range of modal scenarios. But then the argument of this section can be applied to these collections of atoms and electrons. At a push, one might concede anti-realism about neural computation. But conceding anti-realism about atoms and electrons on the basis of the argument above seems madness. Let us see how to respond to the puzzle in a way that does not grant a win to the anti-realist.

5 Solutions to the puzzle

Each of the proposals described in this section solves the puzzle by distinguishing between two types of mind dependence. Reductive theories in cognitive science involve one form of mind dependence, whereas anti-realism about the subject matter involves another. The hard question is how to draw the distinction between a (trivial) reductive form of mind dependence and a (non-trivial) anti-realist form of mind dependence. In this section, I examine three ways to do this. The first two distinguish the two kinds of mind dependence based on dependence on the mind of the subject versus dependence on the mind of an enquirer. I argue that this approach is unlikely to succeed. My favoured proposal is based on attending to the structured nature of the reductive base in cognitive science. The two forms of mind dependence can be distinguished as dependence of the component parts and relations of the reductive base on minds (non-trivial and the point of disagreement in realist/anti-realist disputes) versus dependence of the whole reductive base on minds (trivial and entailed by reduction).

5.1 Dependence on the enquirer versus the subject

The first way to distinguish the two forms of mind dependence is to ask on whose mind the reductive base depends. When we described a neural computation that determines whether Abby goes to the fridge for a beer, we said that Abby's neural computation trivially depends on her mind, but we did not comment on whether it depends on the mind of anyone else. One might propose that anti-realism about neural computations is a claim about dependence on observers, not a claim about dependence on the subject being observed. Anti-realism in general is the claim that the world depends on how enquirers see or conceive of it. It does not depend on how the subject being studied sees it. Cognitive science appears to be special only in that the subject being observed has a mind. Whether the subject has a mind or not should be irrelevant to the anti-realist. Her concern is not to establish dependence of the subject on her own mind but to establish dependence of the subject on the mental life of others.

Drawing the distinction this way also fits with the practice of cognitive science. Both the realist and the anti-realist can agree that the reductive base of some experimental subject's mental life depends on that subject's mind in the way described by the puzzle. If the experimental subject were not to have a mind, or if she were to have a radically different mind, the reductive base would be different. But the realist and the anti-realist can disagree about whether the reductive base depends on the minds of external enquirers. No justification for this flows from the reductive claim. We can state our distinction between two kinds of mind dependence as follows. Reductive mind dependence is dependence on the subject's own mental life. Anti-realist mind dependence is dependence on the mental life of others, specifically the enquirers who study and ascribe properties to the reductive base.

This way of drawing the distinction handles many cases, but not all. The problem is that there is no reason to believe that two separate persons are necessary to do cognitive science. An experimental subject could, in principle, perform experiments on herself. She could provide evidence and ascribe to her own brain specific neural representations. In this case, the proposal for distinguishing two kinds of mind dependence would fail. There would not be two separate minds (subject and enquirer), so there would not be two kinds of mind dependence. Both collapse to dependence on the subject's own mind. The solution to the puzzle must lie elsewhere.

5.2 Dependence on second-order mental states

One might try to finesse the previous proposal by looking for a difference within a subject's mental life between her enquirer-like and subject-like aspects. If these two aspects could be identified, we could map them onto our two kinds of mind dependence. But how to draw this distinction? One thought is that enquirer-like aspects of mental life are distinguished by being about other aspects of mental life. A subject may have all sorts of mental states (beliefs, desires, and so on). What is special about her enquirer-like thoughts is that they are about aspects of her mental life. Enquirer-like thoughts are second-order thoughts about the mental life of a subject. The second-order thoughts might occur within a separate person (an external enquirer) or within the same person (a subject who is her own enquirer). We therefore avoid the counterexample above of the subject who is her own enquirer. On this view, reductive mind dependence would be dependence on a subject's own mental life. Anti-realist mind dependence would be dependence on second-order mental states, either of the subject or some other enquirer, which are about that subject's mental life.

The problem is that this proposal's characterisation of anti-realism fails to fit many plausible forms of anti-realism. Consider Blackburn's (1993) anti-realist reading of Hume's view on causation. According to Blackburn, the existence and nature of causal relations depend on our cognitive apparatus – Hume is, in this sense, an anti-realist about causation. But Blackburn does not say that causation depends on our representational mental states, such as our beliefs or desires about causation. Causation depends on a different feature of our mental life: our dispositions to make certain inferences. Whether A causes B depends on our disposition to readily infer the occurrence of B from the occurrence of A. We have here anti-realism but not dependence on representational mental states.

Following this model, a form of anti-realism about cognitive science – for example, anti-realism about neural computation – need not say the relevant entities depend on anyone's representational mental states. Indeed, an anti-realist need not say that we have mental representations at all. She might say that neural computations depend on non-representational aspects of our mental life (for example, our dispositions to make certain inferences). The distinction between first- and second-order mental states only makes sense in the context of representational mental states. If anti-realism does not require representational states, the first-order/second-order distinction cannot be used to distinguish anti-realism from realism. Some other form of mind dependence must be at issue.

5.3 Dependence of the parts versus the whole on minds

The previous two proposals try to partition the mind into subject-like and enquirer-like parts. In certain cases, this may be feasible (for example, when subject and enquirer are two different people). But in general, it is difficult to know what distinguishes an enquirer-like aspect of the mental world from a subject-like aspect of the mental world. The proposal in this section adopts a different strategy. Rather than try to partition the mental realm into subject-like and enquirer-like parts, let us instead attend to partitions already given to us by theories in cognitive science: partitions in the reductive base.

The structured nature of the reductive base is important to cognitive science. Theories in cognitive science do not reduce a mental phenomenon to a single, undifferentiated entity. They reduce mental phenomena to a structured entity that consists of multiple individual parts and relations. Which parts and relations these are varies between theories: they might be computational steps, mechanisms, networks, dynamic relations, or causal sequences of events. For example, a theory that identifies an instance of decision making with a neural computation, C, does not reduce decision making to a single, atomic individual, C. Rather, the theory identifies decision making with a structured entity, C, composed of multiple parts (perhaps including representations of environmental states, representations of utilities, and individual functional parts) and multiple relations (causal, syntactic, and other relations) that together are (or realise, constitute, ground) decision making.

Observe that the puzzle described in the previous section only entails that the reductive base as a whole is mind dependent. The reductive base cannot occur without its associated mental phenomenon. But nothing follows from this regarding the mind dependence of the individual parts and relations. Mind dependence of the whole reductive base does not require mind dependence of the parts. The same argument as above cannot be run for the parts, as there is no reason to suppose that any of the individual parts or relations would, by itself, be sufficient for a mental phenomenon. There is nothing contradictory in supposing that a part or relation of the reductive base can occur individually without any specific condition involving mental agents being met. For example, suppose that an instance of decision making is a specific neural computation. That entire neural computation is mind dependent: it cannot occur without the associated mental phenomenon. But this does not mean that the individual parts and relations that compose the computation are also mind dependent. It is possible that the individual parts and relations – the representations of environment states, the smaller functional units, the causal relations – could occur individually without any condition being met concerning mental agents. It is also possible that one or more of the parts and relations is mind dependent. Parts and relations may be mind dependent or fail to be mind dependent even though the whole reductive base is mind dependent. There is scope for different anti-realist views by adopting the mind-dependence claim about different parts or relations: one might, for example, be an anti-realist about causal relations or about syntactic properties. By contrast, there is only one way to be a realist: hold that none of the constituent parts and relations is mind dependent. Each part or relation could occur individually without a further condition being met regarding mental agents.

Hence, we can draw our distinction. Reductive mind dependence is dependence of the whole reductive base on a mental phenomenon. Anti-realist mind dependence is the mind dependence of one or more of the constituent individual parts or relations that make up the reductive base. Reductive mind dependence is entailed by the reductive claim. Anti-realist mind dependence is not.

How do we know this is the right way to draw our distinction? Recall that what is at stake is the ambition to naturalise the mind: the attempt to show how mental life arises from non-mental ingredients. A naturalising explanation explains properties of a mental phenomenon in terms of the individual parts and relations of the reductive base.
Whether the form of explanation in question is functional explanation, mechanistic explanation, computational explanation, causal explanation, or some other form of explanation, it consists in citing the individual parts and how they are arranged by relations in the reductive base. The individual parts and their relations explain the scientifically relevant properties of the mental phenomenon. As we defined it above, an explanation is naturalising only if its explanans does not refer to or otherwise presuppose mental phenomena. The relevant explanations in cognitive science appeal to the individual parts and relations of the reductive base. If we are realists about those parts and relations, we can appeal to them in our explanations without presupposing further conditions being met about mental phenomena. Conversely, if the naturalising project is to succeed, we must be able to be realists about the relevant parts and relations referred to by our explanations. In contrast, if one or more of the parts or relations that make up the reductive base are mind dependent, then an explanation that cites them will fail to naturalise the mental phenomenon. That the proposed distinction aligns with the fate of our naturalising ambitions indicates that we are on the right track here.

Consider an analogy. You see a miniature castle in a shop window. You want to explain some of the castle's properties: why it can bear so much weight or why it is resistant to attack by scrunched-up paper balls. You aim for your explanation to be 'naturalistic': to explain the castle's properties in non-castle-involving terms. You do not want that explanation to make reference to or otherwise presuppose castles. Closer inspection reveals that the castle is built from Lego bricks. You make a reductive claim: the castle is (or is realised by, or is constituted by) this specific configuration of Lego bricks. Armed with this reductive claim, you can explain the effects first noted. The individual Lego bricks and their specific configuration explain the ability of the castle to bear weight. The individual Lego bricks and their configuration explain the resistance of the castle to attack by paper balls.

Someone might object at this point that, according to your hypothesis, the castle is this configuration of Lego bricks. Hence, you have not explained the castle in non-castle-involving terms. You reply, rightly, that this kind of castle involvement does not matter to your naturalistic ambitions. The specific configuration of Lego bricks is a castle, but the individual bricks and their relations are not. Your explanans cites those individual bricks and their relations, not the configuration as an atomic whole. You have explained weight bearing and resistance to attack in terms of the powers of these parts and relations, neither of which are castles or are castle dependent. That is all that is required to naturalistically explain the castle's properties. Now suppose that one were to discover, to great surprise, that the individual Lego bricks do essentially depend on castles. Perhaps some Lego bricks contain tiny castles. The structured configuration of Lego bricks is now castle dependent in a new and more troublesome way. The original naturalising ambition – explaining the castle's weight bearing and resistance to attack without making reference to or otherwise presupposing castles – would fail.

6 Conclusion

I have argued that what matters to the realist/anti-realist dispute in cognitive science is not whose mind the reductive base depends on (subject versus enquirer). Rather, it is the mind dependence of the individual parts and relations versus the (trivial) mind dependence of the reductive base as a whole. The status of the individual nuts and bolts that realise cognition matters. Whether a specific configuration of nuts and bolts taken as a whole is mind dependent is irrelevant to realism and to the naturalising project. Perhaps surprisingly, the structured nature of the reductive base in cognitive science and cognitive science's parallel emphasis on structured explanation (whether that be functional, mechanistic, computational, causal, or another form of explanation via appeal to a structure) turn out to be essential to articulating the realist/anti-realist dispute in this area. The relevant form of anti-realism targets one or more entities in that structure. If someone claims to be an anti-realist about cognitive science, the first question one should ask is: About which entities in the reductive base are you anti-realist? The next question should aim to discover whether those entities really do play an essential role in the reductive base of cognition.

Notes

1 I assume we are considering the total realiser here (Shoemaker 2007). Changing to talk about the core realiser would not improve matters, as core realisers have further worries pertaining to their mind dependence (Wilson 2001).
2 Godfrey-Smith (2016) argues that this reply is wrong: causal dependence on minds is relevant to realism.


References

Blackburn, S. (1993) "Hume and Thick Connexions," in Essays in Quasi-Realism, Oxford: Oxford University Press, pp. 94–107.
Block, N. (2007) "Consciousness, Accessibility, and the Mesh between Psychology and Neuroscience," Behavioral and Brain Sciences 30, 481–548.
Burge, T. (1986) "Individualism and Psychology," Philosophical Review 95, 3–45.
Churchland, P. M. (1981) "Eliminative Materialism and the Propositional Attitudes," The Journal of Philosophy 78, 67–90.
Dennett, D. C. (1991a) Consciousness Explained, Boston, MA: Little, Brown & Company.
——— (1991b) "Real Patterns," The Journal of Philosophy 88, 27–51.
Devitt, M. (1991) Realism and Truth (2nd ed.), Princeton, NJ: Princeton University Press.
Fodor, J. A. (1975) The Language of Thought, Sussex: The Harvester Press.
——— (1980) "Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology," Behavioral and Brain Sciences 3, 63–109.
Godfrey-Smith, P. (2016) "Dewey and the Question of Realism," Noûs 50, 73–89.
Gold, J. I. and Shadlen, M. N. (2001) "Neural Computations That Underlie Decisions about Sensory Stimuli," Trends in Cognitive Sciences 5, 10–16.
——— (2007) "The Neural Basis of Decision Making," Annual Review of Neuroscience 30, 535–574.
Irvine, E. (2012) Consciousness as a Scientific Concept, Dordrecht: Springer.
Miller, A. (2012) "Realism," in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2012 ed.). URL: http://plato.stanford.edu/archives/spr2012/entries/realism/
Putnam, H. (1988) Representation and Reality, Cambridge, MA: MIT Press.
Rangel, A., Camerer, C. and Montague, P. R. (2008) "A Framework for Studying the Neurobiology of Value-Based Decision Making," Nature Reviews Neuroscience 9, 545–556.
Ryle, G. (1949) The Concept of Mind, London: Hutchinson.
Schultz, W., Dayan, P. and Montague, P. R. (1997) "A Neural Substrate of Prediction and Reward," Science 275, 1593–1599.
Searle, J. R. (1992) The Rediscovery of the Mind, Cambridge, MA: MIT Press.
Shoemaker, S. (2007) Physical Realization, Oxford: Clarendon Press.
Sperling, G. (1960) "The Information Available in Brief Visual Presentations," Psychological Monographs: General and Applied 74, 1–29.
Wilson, R. A. (2001) "Two Views of Realization," Philosophical Studies 104, 1–30.

29 SCIENTIFIC REALISM AND ECONOMICS

Harold Kincaid

1 Introduction
Economics does not fit very naturally into current debates over scientific realism as a general doctrine. Standard arguments for or against realism, such as the no miracles argument or the pessimistic induction, rely on presuppositions that are questionable for economics – for example, that there are general theories to evaluate, and there is some clear change of successful theories over time. Also, trying to identify what economics takes to be “observable”, so that we could formulate a constructive empiricist stance towards it, is difficult, because it is not at all obvious what should count as observable – at least if the goal is to view some significant part of economics as successfully saving the phenomena. (I will argue, however, that this is probably better understood as a complaint about constructive empiricism than one about economics.) On the other hand, social constructivist views – versions that see social processes as raising questions about the epistemic status of science – find fertile grounds in economics. Economics also raises in quite pointed ways realism issues motivated by the role of unrealistic models in science. (See also A. Levy, “Modeling and realism: strange bedfellows?”, ch. 19 of this volume.) Unrealistic models have not played a big role in general scientific realism discussions, but they certainly should in thinking about economics. If the basic realist claim is that we have reason to believe that current science tells us roughly how the world is, then the often extremely unrealistic models of much of current economics motivate an antirealist stance towards parts of economics. Overall, my discussion supports what might be called a “local” approach to realism and a contextualist approach to epistemological questions (Kincaid 2000; see also L. Henderson, “Global versus local arguments for realism”, ch. 12 of this volume). Contextualism, as I use the term, is a consistent anti-foundationalism which holds that: (i) We are never in the situation of evaluating all of our knowledge at once. (ii) Our “knowledge of the world” is not a unified kind that is susceptible to uniform theoretical analysis. (iii) There are no global criteria sufficient for deciding which beliefs or principles of inference have epistemic priority. (iv) Justification is always relative to a specific context, which is specified by the questions to be answered, the relevant error probabilities to be avoided, the background knowledge that is taken as given, and so forth. Contextualism, as described here, has much in common with the form of naturalism that Maddy calls “second philosophy”. In the context of scientific realism, this perspective is skeptical of general philosophical arguments – either realist or antirealist – that claim to assess the status

of “science”. The idea is that there are indeed important issues about when to take science realistically, but they are frequently most fruitfully pursued as local scientific issues. I take it that the arguments that follow are consistent with this general perspective, but they do not presuppose it and should have force regardless of whether the contextualist perspective is appealing. My discussion of economics and scientific realism breaks down as follows. In section 2 I try to show that some standard general scientific realism arguments and approaches do not find much traction in economics and that in at least some cases that is an argument against the corresponding positions in the realism debate. Section 3 looks at unrealistic models, arguing that they can motivate antirealist theses and sketching criteria for when they do so. Section 4 looks at social constructivist arguments in the context of economics and finds that there is plenty of material here for further discussion.

2 The awkward fit of general realism debates in economics
I take it that a basic thesis under dispute in debates over scientific realism is roughly whether we have good reason to believe that our best science tells us the way the world is. A standard philosophical argument for this thesis is the no miracles argument: it would be quite a coincidence if our best theories were as successful as they are and yet not approximately true. (See K. Brad Wray, “Success of science as a motivation for realism”, ch. 3 of this volume.) Antirealists can deny this thesis on multiple grounds. The pessimistic induction says that past successful theories turned out to be false, so we cannot use success to defend current theories as telling us the way the world is. (See P. Vickers, “Historical challenges to realism”, ch. 4 of this volume.) Alternatively, the argument can be that theories are underdetermined by the data, making reliance on any particular successful theory treacherous. (See D. Tulodziecki, “Underdetermination”, ch. 5 of this volume.) Social constructivists argue that current success is the product of social negotiation and other social processes not connected to epistemic goals, and thus any claim for a special status for current science is mistaken. (See M. Kusch, “Scientific realism and social epistemology”, ch. 21 of this volume.) Some of these standard arguments over scientific realism, such as the pessimistic induction and the no miracles argument, are quite hard to apply to economics. The difficulties come from the lack of “successful theories”. There are questions about the existence of theories in economics in the first place, and questions also about success. Both raise doubts about trying to apply these general arguments to economics. It is not clear that there really are dominant theories in past or current economics, and what may come closest to such do not have strong empirical success of the sort that could be used to argue either for or against realism. The three most obvious candidates for theories in economics would seem to be general equilibrium theory, neoclassical theory, and game theory. In each case there is good reason to think that:

1 There is not a theory but many.
2 What get called theories are not generally theories in anything like the sense that standard scientific realism debates assume.
3 There is often little reason to think either that the “theories” in question have significant empirical success that is best explained by their truth, or that there were past “theories” that were successful but that we now know to be false, of the sort needed to ground a pessimistic induction.

These three points are connected: (2) naturally leads to (1), and (1) and (2) help explain (3). Take general equilibrium theory in economics first. Many things can fall under this rubric. At its simplest, it is the claim and approach that is contrasted to partial equilibrium analyses

of specific markets that take activities in other markets as fixed, asserting that a more adequate analysis should analyze the interactions of all markets simultaneously. In this form “general equilibrium theory” is not a theory but a methodological recommendation made on the basis of a causal claim. There are at least two different branches of general equilibrium theory: abstract and applied. Abstract equilibrium theory involves mathematical work showing that an equilibrium is possible for a full market economy under certain very strong assumptions and that such an equilibrium has certain desirable welfare properties. Thus Rosenberg (1992) argued that general equilibrium theory was not an explanatory empirical theory at all. Applied general equilibrium theory is a very different project, having in common with abstract equilibrium theory mainly the attempt to look at entire economies at once. The current instantiation of applied general equilibrium is predominately computable general equilibrium, which finds equilibrium solutions for economy-wide phenomena where the components of the theory are aggregative elements, such as factors of production, sectors, government expenditures, and so on. Here, as is the case across economics, there are many different models in use, each making different assumptions. How well confirmed those models are is an open question, in part because of their unrealistic assumptions – as I will discuss later. But even if some are relatively well confirmed, that would not provide a clear argument for a common theory that they all embody. Game theory presents a similar picture. It is now absolutely central to economics. Yet despite the name, it is arguably not best thought of as a theory in the sense of formalized laws with deductive relations and so on. An apt name would be a modeling strategy or technique. Assumptions are made about possible strategies, payoffs, knowledge, solution concepts, and so on, and then an equilibrium outcome is argued for by one of several different routes given those assumptions. Again, there are a host of different models with different components. The evidence for those models is quite varied. In experimental set-ups some applications are quite successful, though far from all are (Binmore 2007). Application to observation in the field is quite hard, and success often means the models are at best suggestive “how-possible” explanations. When subjects and agents find equilibria, game theoretic models do not generally offer a plausible explanation of how they do so – it is surely not by calculating complicated Bayesian subgame-perfect Nash equilibria, for example. Neoclassical theory perhaps comes closest to what might look like a theory in economics. This is what is taught in intermediate microeconomics, for example: the theory of the consumer, the theory of the firm, and how to solve for the equilibrium conditions. Standard consumer theory consists of a set of assumptions about consumer choice – Hausman (1992) calls them the potential laws of the theory – that are well known to be disconfirmed in a range of cases. In practice economists often drop various standard assumptions of consumer theory and of the theory of the firm as they build models for specific purposes.
On top of this, the workhorses of economics are supply-and-demand analyses of aggregate markets, not the behavior of individual consumers or firms, and there are well-known results showing that such aggregate analyses cannot be derived by aggregation of consumer and firm theory applied to individual consumers and producers. Supply-and-demand arguments work by considering a short list of potential factors – prices of substitutes, complements, and income above all – causally influencing price at equilibrium; as elsewhere, the details about how equilibrium is reached are not well developed.
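To make the form of such an argument concrete, here is a minimal sketch of the kind of linear supply-and-demand model just described, where a short list of factors (here, own price and income) fixes the equilibrium price. All function names and coefficients are hypothetical, invented purely for illustration:

```python
# A hypothetical linear market model; the numbers are illustrative only.
# Demand: Qd = a + k*income - b*P;  Supply: Qs = c + d*P.
def equilibrium_price(a, b, c, d, income, k):
    """Solve Qd = Qs for P, giving P* = (a + k*income - c) / (b + d)."""
    return (a + k * income - c) / (b + d)

# Income shifts the demand curve and hence the equilibrium price. Note that,
# as the text observes, the model is silent on *how* the market reaches P*.
print(equilibrium_price(a=100, b=2.0, c=10, d=3.0, income=50, k=0.4))  # 22.0
```

The restricted set of causal claims is visible in the sketch: only the listed factors enter, and the “theory” amounts to little more than the balancing condition.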

A final candidate for theory in economics is macroeconomics. Issues similar to those mentioned arise here as well. There is seldom one canonical theory that is taken to be well confirmed. Instead there are a variety of models, some tightly related, some not, and they have not displayed the pattern of empirically successful theories being replaced by successor theories that were likewise successful and showed their predecessors wrong in important ways. A superficial reading of the history might seem to support a story about Keynesian macroeconomics being replaced by the New Neoclassical Synthesis, but a careful look at the history and the degree of empirical success would show, I think, that this is not in fact a case of one empirically successful theory being replaced by another successful one. Thus the picture that emerges is one consistent with the emphasis on the local approach described in the introduction. There is not just one or even several main theories to be assessed, nor generally have there been theories in the past that were successful but later rejected. Instead, there are many different models developed according to the context at hand. The doubts just named about finding general theories in economics equally complicate efforts to apply arguments based on underdetermination to realism about economics. Many (but not all) underdetermination arguments are motivated by Quinean style holism that finds underdetermination in the ability to drop and/or revise interconnected elements of complex theories in a way that produces multiple theories consistent with the data. (See D. Tulodziecki, “Underdetermination”, ch. 5 of this volume.) The much more modest, piecemeal models of economics make such gerrymandering look less likely. For example, the standard aggregate supply-and-demand account of specific markets – the workhorse of applied economics – does not seem like a case in which there is a complex theoretical whole that confronts the data all at once. Instead, there is a quite restricted set of causal claims. However, taking underdetermination worries about realism to be local arguments, these worries are very real and something economists think about a great deal (Manski 1999). Economists would generally label these issues as “identification problems”, which concern whether the observed variables provide sufficient evidence to uniquely confirm or reject the theoretical structure postulated to explain them. This is akin to the problem in general causal modeling of determining whether the data suffice to pick out one causal model. In good contextualist fashion, the answers to these questions depend importantly on how much reliable background information can be brought to bear to restrict the number of possible models. There is no general answer as to whether economics can do this – it all depends on the models at issue, the data available, and what background knowledge seems secure. One idea that might be drawn from the practice of economic research would be a kind of structural realism applied to underdetermination problems (see Kincaid 2008). Structural realism is usually motivated as a claim about a diachronic context, involving preservation of important parts of theories – structures – across theory change. (See I. Votsis, “Structural realism and its variants”, ch. 9 of this volume.) However, it is possible to make a parallel argument across synchronic differences in theories. Even if multiple theories are consistent with a given set of data – if theory is underdetermined by the data – it might be that the competing theories still share some of the same “structure” or, more concretely perhaps, share some of the same causal relationships. In economics this could often be described as sharing the same “reduced form”.
The reduced-form equations of a set of simultaneous equations express the endogenous variables only in terms of the exogenous variables – no equation has an endogenous variable as a function of another endogenous variable. The reduced-form equations can be estimated by well-known techniques. However, it may not be possible to determine the relation of the endogenous variables with each other from the data used to find the reduced-form relationships. For example, there may be multiple structural relations between the endogenous variables compatible with the reduced forms. This is the identification problem noted earlier. So the basic relation between endogenous and exogenous variables is a “structure” that holds up despite possible underdetermination.
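To illustrate with the simplest textbook case – the system, its coefficients, and the choice of income as a demand shifter are all illustrative assumptions, not drawn from the chapter – consider a market in which quantity q and price p are endogenous and income y is exogenous:

```latex
% Hypothetical structural model: q, p endogenous; y exogenous.
\begin{align*}
  \text{demand:}\quad q &= \alpha_0 + \alpha_1 p + \alpha_2 y + u,\\
  \text{supply:}\quad  q &= \beta_0 + \beta_1 p + v.
\end{align*}
% Solving for the endogenous variables in terms of y alone gives the
% reduced form, which can be estimated directly from the data:
\begin{align*}
  p &= \pi_{10} + \pi_{11}\, y + e_1,
  \qquad \pi_{11} = \frac{\alpha_2}{\beta_1 - \alpha_1},\\
  q &= \pi_{20} + \pi_{21}\, y + e_2,
  \qquad \pi_{21} = \frac{\beta_1 \alpha_2}{\beta_1 - \alpha_1}.
\end{align*}
```

The reduced-form coefficients pin down the supply slope, since $\beta_1 = \pi_{21}/\pi_{11}$, but $\alpha_1$ and $\alpha_2$ cannot both be recovered from $\pi_{11}$ and $\pi_{21}$: distinct structural demand curves share this reduced form, which is exactly the identification problem, while the estimable relation between the endogenous and exogenous variables survives.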

I have been discussing the relevance of the general scientific realism debate to economics. What about the constructive empiricist version of antirealism? (See O. Bueno, “Empiricism”, ch. 8 of this volume.) At first glance my points about the lack of formalizable theories and the lack of empirical support might seem to speak in favor of a view that says we should only believe the observable part of science. Unfortunately, constructive empiricism, which is also a general approach to scientific realism, is not of much help in thinking about economics. Some prominent philosophers of economics have argued for the same conclusion for different reasons than those I will give in what follows. Hausman (1998) and Mäki (1998) have argued that constructive empiricism gets little traction in economics because economics is not about unobservables. The entities of economics are things like commodities and firms, which are part of the observable furniture of the everyday world – economics does not go beyond commonsense realism. These arguments are unconvincing for several reasons. As Guala (2012) points out, Hausman and Mäki argue that economics does not introduce new kinds of things. Yet that is not the issue with constructive empiricism, and van Fraassen is explicit about this – the issue is about commitment to entities and their properties that are not observable in van Fraassen’s sense of being observable by the unaided senses. It may well be that parts of economics stick to things we would regard as part of the commonsense furniture of the world, but that is not good enough for van Fraassen. I will argue next that sticking seriously to van Fraassen’s notion of observable means talking about a very restricted part of the commonsense world. However, contra Hausman and Mäki, the fact is that economics does seem committed to theoretical entities that are in no way observable. Demand curves, aggregate income, velocity of money, and GDP seem likely candidates (see Hoover 2001), and many others could be mentioned. So it seems that economics does trade in unobservables that are very far from something the unaided senses can identify, even if our concepts of those objects have some sort of ties to concepts we apply to everyday economic phenomena that are observable in van Fraassen’s sense. The real problem with van Fraassen’s empiricism is that it is hard to see what in economics should count as observable in his sense. What things do economists see directly that would form the empirical subset of economic theories (putting aside the question whether we can talk of theories in the first place)? If we stick to the unaided-senses criterion, it is hard to see how to avoid the conclusion that the observables – the objects – are overwhelmingly marks on paper or on computer screens: reports, survey question answers, choices in experiments, and the like. The vast majority of these marks are numbers, but van Fraassen restricts observing to entities. This rules out observing, for example, that the GDP growth rate of a country is 3%, the observing of which depends on a host of inferences and calculations that van Fraassen eschews in his effort to comply with empiricist standards for what is worthy of belief. Economists believe and would say that many of those marks allow us to observe economic entities, such as prices and output. But those beliefs and that talk do not meet van Fraassen’s standards. What counts as observable for van Fraassen in this case thus just seems to be marks on paper and computer screens, and the constructive empiricist would be restricted to believing only that economics tells us that such marks exist.
The conclusion here is so drastic and it empties economics of so much empirical content that it might seem I must be misinterpreting van Fraassen, but I see no way out. Philosophers have generally been relatively lax in trying to tie down in practical terms what would count as observable in specific sciences and to that extent have probably been overly charitable to constructive empiricism. A possible position in the spirit of constructive empiricism, perhaps, would be to keep the demand for agnosticism about the theoretical but loosen the requirements for observability. There are obviously other notions of observability in the literature, such as Shapere’s notion of causal transfer of information or Quine’s conception of observation statements that are intersubjectively agreed upon. It would be an interesting project to see if the economists’ use of the notion

of observation (1) was generally consistent, (2) was consistent with some epistemically motivated notion, and (3) would make for some coherently motivated empiricist antirealist position. This has not yet been done, but it might well be feasible for certain strands of econometric work.

3 Scientific realism, economics, and unrealistic models
One compelling motivation for antirealism in economics concerns the use of unrealistic models. This does not motivate any kind of generalized antirealism, for two reasons: there is no principled reason that economics must always use unrealistic models (and in fact it does not), and unrealistic models can also tell us the way the world is (in Mäki’s terminology, scientific realism and how realistic scientific models are, are two separate issues), as we will see in what follows. An example of unrealistic models that arguably played a role in misunderstanding – and perhaps, though this is a more controversial claim, in leading to – the 2008 financial crisis is dynamic stochastic general equilibrium (DSGE) models, the currently dominant kind of macroeconomic model. DSGE models depend essentially on “representative agents”: they assume that all consumers and firms can be treated as if they were one agent. In other words, they assume that there is only one consumer and one firm. They also assume that financial markets are fully efficient. Yet it has been shown that aggregate consumption mirrors individual choice only under stringent assumptions (Kirman 2010). Examples like these can easily be found throughout the field. Earlier we mentioned abstract general equilibrium models. They typically assume:

1 There is perfect competition in both the commodity and factor markets.
2 Tastes and habits of consumers are given and constant.
3 Incomes of consumers are given and constant.
4 Factors of production are perfectly mobile between different occupations and places.
5 There are constant returns to scale.
6 All firms operate under identical cost conditions.
7 All units of a productive service are homogeneous.
8 There are no changes in the techniques of production.
9 There is full employment of labor and other resources.

So these models are not making assumptions that certain factors only play a small role and can be ignored, nor are they just transforming complicated relationships into somewhat more tractable simpler ones. They are starkly at odds with reality. This is at first glance a strong motivation for antirealism about economics. Note, though, that saying this motivates antirealism about economics as a whole would be an exaggeration, given the previous discussion arguing that economics is a diverse set of different models. One would have to provide a representative sample of work across these various aspects of economics to show that unrealistic assumptions were a constant element of the field. For example, applied supply-and-demand analysis of specific markets works with a well-identified set of causal factors, and it is not at all clear that any very strong unrealistic assumptions are being made (Kincaid 1996). Nonetheless, there is no denying that unrealistic models are widespread. Two caveats about pointing to unrealistic assumptions as an argument for antirealism about economics: unrealistic assumptions are obviously common across the sciences, yet that is not generally taken to be an argument for antirealism. That is presumably because those assumptions are dealt with in a systematic way that provides evidence that they are not confounding results. Yet in major parts of economics there seems to be no prospect of showing that their unrealistic

assumptions can be similarly dealt with (Kincaid 2017). What would it take to show that the assumption that there is only one consumer and one producer in an economy, or that there are markets for all goods at all times, does not confound results? The problem of unrealistic assumptions in economics seems to be several orders of magnitude deeper than that of idealizations in the natural sciences. (See A. Levy, “Modeling and realism: strange bedfellows?”, ch. 19 of this volume.) A second caveat is that models can serve many purposes other than telling us how the world is, and that is certainly true for economics; to the extent that this is the case, irrealism is no argument for antirealism. So models can:

• Be essential for tractability – needed to provide any way at all of systematically thinking about the phenomena.
• Be used to discover new possible phenomena.
• Be used to show how certain phenomena might come about – Schelling’s models of residential segregation are a popular example often cited in this regard (see, e.g., Ylikoski and Aydinonat 2014).
• Be used to get a clearer formulation of a verbal argument.
• Be used to identify robustness, that is, to show that with weakened or different assumptions results still hold (Kuorikoski et al. 2010).

Models certainly play all the above roles in economics. Insofar as they do, the realism debate as one about whether science tells us how the world works is not to the point. This is a point made by Mäki (2011) in his defense of “minimal realism” in economics. Of course, economics has greater pretensions than just telling us how the world might work; indeed, it also wants, for example, to give policy advice, which presupposes that it provides sufficient knowledge of the world for us to causally intervene. The irrealism of assumptions highlighted here is a pressing threat to that pretension. There are various routes economists and philosophers writing about economics have proposed to defend and assess unrealistic models. Among them are the following ideas and claims:

1 A common response to worry about unrealistic models from economists is: don’t worry, be happy – all models are false. Yet that does not solve the problem that some models are so unrealistic that they do not tell us anything about reality. Others are unrealistic but seem to tell us about reality, and we want to know how to distinguish the two cases.
2 Another common response from economists is that models, despite their unrealistic assumptions, provide “insight”. If this means that models are telling us about possibilities, as discussed already, that sounds reasonable enough but does not get at the realism issue – realism is a claim about the actual world. In general, we have to ask how we know when insight is nothing more than a warm and fuzzy feeling.
3 Morgan (2012) defends unrealistic models by claiming that economists provide narratives and use tacit knowledge to tie them to reality, just as scientists do in the natural sciences. While this is no doubt true in some sense, the open question is still when and how unrealistic economic models do so successfully. There are well-developed methods in the natural sciences for tying unrealistic models to reality (see, for example, Wilson 2008), but the question is whether the social sciences have any real parallels. Talk of providing narratives and using tacit knowledge is a promissory note that needs filling out, something Morgan only hints at. I will make some efforts in that direction in what follows.
4 Finally, an important trend argues that unrealistic economic models are nonetheless explanatory when they capture or isolate the essence of a causal mechanism.

This view has been argued for by Cartwright (1989), at length by Mäki (1998, 2011), and very recently by Rodrik (2015), in my view one of the more methodologically sophisticated economists. These approaches seem to me fundamentally on the right track. But the question is how and when we know that unrealistic models actually isolate causal mechanisms. How do we know that a causal process in a model is not disrupted in reality when other left-out factors are involved? These authors by and large do not address these questions.

I would argue that there are multiple ways scientists in general go about doing this – as noted – and the question is when economics finds such methods. For example, one route that economics sometimes successfully uses to show that unrealistic models capture causal mechanisms comes from the use of experiments or quasi-experimental methods. An important instance of the latter in economics is the use of “instrumental variables”. Instrumental variables are ways of trying to show that even when a causal model unrealistically assumes there are no confounders of a putative causal relationship, there still can be good evidence for the relation hypothesized by the model. If the model postulates that C causes E, an instrumental variable is one that causes C and has no causal effect on E except through C. It is well established in the causal modeling literature that instruments can find the true effects, even though the model falsely assumes there is no confounding. Basically, we are using the variation the instrument causes in the explanatory variable C independently of the unknown confounder to determine how that variation relates to changes in the effect. This is roughly an observational analogue to random assignment – the instrumental variable is assigning changes independently of the possible confounders. Economists often confuse this legitimate way of dealing with unrealistic models with another that only requires that some variable A is correlated with C and not with E, without evidence that the required causal relations hold. However, instrumental variables used properly are one way that economic models can get at real things while nonetheless being unrealistic.
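As a minimal sketch of this logic – the data-generating process, variable names, and coefficients below are all hypothetical, invented purely for illustration – one can simulate a confounded relationship and compare the naive regression estimate with the instrument-based (Wald) estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: U confounds C and E; Z affects E only through C.
U = rng.normal(size=n)                 # unobserved confounder
Z = rng.normal(size=n)                 # instrument
C = 0.8 * Z + 1.5 * U + rng.normal(size=n)
E = 2.0 * C + 3.0 * U + rng.normal(size=n)   # true effect of C on E is 2.0

naive = np.cov(C, E)[0, 1] / np.cov(C, C)[0, 0]  # regression slope, biased by U
iv = np.cov(Z, E)[0, 1] / np.cov(Z, C)[0, 1]     # ratio of instrument covariances

print(round(naive, 2), round(iv, 2))   # roughly 3.16 versus 2.0
```

The naive slope absorbs the confounder’s contribution, while the instrument-based ratio recovers the true coefficient even though the model, as the text puts it, “falsely assumes there is no confounding”.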
Experiments are another route to confirming important economic hypotheses despite their being derived from unrealistic assumptions. Let me give two examples. Economists have offered game theory models of bargaining behavior. For example, in a finite, two-person zero-sum game the Nash equilibrium first proved by von Neumann was the minimax value. Borel first proposed the solution but had no proof; von Neumann proved it was the Nash equilibrium. The game theory account of bargaining in such cases assumes that individuals know the payoffs and strategies and reason their way to the Nash best response strategy. Such a model is highly unrealistic. Yet in experiments Binmore (2007) finds that subjects reach the appropriate equilibrium. So the hypothesis about the expected phenomena is derived from highly unrealistic assumptions. Nonetheless there is a well-confirmed explanation of equilibrium behavior, namely that each response is the best reply to the other. Deviations from the equilibrium strategy are punished in terms of payoffs, grounding a causal mechanism that reinforces equilibrium behavior. Thus an explanation can be compelling despite the fact that we may have no good idea how agents actually find the equilibrium, and we know that they certainly do not use the strong reasoning assumptions the game theoretic analysis uses to identify equilibrium. Smith’s (2008) experimental work on auctions is another example. He explicitly makes the argument that the unrealistic assumptions he uses to identify an equilibrium are only tools for the analyst and not a theory of how subjects reach equilibrium. In this case the equilibrium persists because deviations from it mean excess supply or demand, inducing a return to the equilibrium price. How subjects reach equilibrium is again unknown, and surely they do not reach it by having full information or satisfying the other unrealistic assumptions the economists’ models use to derive equilibrium.
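To see what “each response is the best reply to the other” amounts to in the zero-sum case, here is a small sketch with an invented payoff matrix; the numbers are purely illustrative, and this particular game happens to have a pure-strategy saddle point (von Neumann’s theorem guarantees only that the two values coincide in mixed strategies for every finite zero-sum game):

```python
import numpy as np

# Hypothetical payoffs to the row player; the column player receives the negative.
A = np.array([
    [3, 1, 4],
    [1, 0, 2],
    [5, 2, 6],
])

row_maximin = A.min(axis=1).max()   # best payoff the row player can guarantee
col_minimax = A.max(axis=0).min()   # lowest cap the column player can enforce

# Here both equal 2 (row 2 against column 1): neither player gains by deviating
# unilaterally, so the pair is a Nash equilibrium and 2 is the game's value.
print(row_maximin, col_minimax)     # 2 2
```

Note that the computation identifies the equilibrium without saying anything about how flesh-and-blood subjects would find it, which is exactly the gap the text describes.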


These are examples of the kinds of strategies that one can use to argue for a realistic take on economics. Such arguments will have to be made for a particular modeling strategy on a case-by-case – or maybe type-by-type – basis.

4 Social constructivist arguments
Some social constructivists argue that scientific results are always the result of negotiations between actors with varying amounts of power and social networks. The radical social constructivist believes this is always the case and that talk of truth and evidence in science is mere rhetoric. This extreme skepticist position is not the only version of constructivism. A much more moderate version simply thinks that claims to scientific virtues are sometimes overblown rhetoric not met in practice, with disciplinary traditions, self-interest, and so on having as much to do with which outcomes are accepted as does rigorous testing against reality. This can be used for what Hacking (2000) calls “debunking” – showing that some parts of science are overblown by providing direct evidence that various social processes unconnected to evidence are driving the science. I think there is much room for these debunking exercises in economics, and it is an open and interesting question just where they have force. In the following I sketch some areas where the approach seems fruitful. All of these arguments get their hold because there is generally a gap between available evidence and the conclusions drawn. This gap exists in science in general, but the gap in economics is often quite wide. There is no inherent reason why it must be, and there is much economics that bridges the gap quite well – for example, the numerous studies of supply-and-demand relationships in specific markets. This applied empirical work does not get much glory in the profession, but it is a common mode of economic analysis. However, there are other parts of economics that are not nearly so empirically grounded, and the gap in this work from evidence to hypothesis is large enough to allow social factors to play an important role. The gap exists for commonly known reasons: for example, the reliance on purely observational data, the use of theoretical concepts whose empirical counterparts are unclear and indeterminate, and the problems of underdetermination represented by the identification problem discussed earlier. The problem is especially deep in econometric work, because many decisions have to be made where there is no consensus on exactly how to proceed. It is commonly recognized that different econometricians given the same data set can reach quite different conclusions. A major route to closing gaps between theory and evidence comes from replication of results. Hypotheses tested by different individuals under at least somewhat different conditions give us assurance that results are believable when tests are successful. When there is no replication, the worry is that factors other than the constraints of evidence are at work. However, replication in economics is sometimes a scarce commodity. A study by Duvendack and Reed (2015) finds the following to be true of economics:

• Only 27 of 333 journals surveyed regularly publish replications of econometric work.
• While some journals have policies saying that data should be shared, there is little evidence that this requirement is enforced.
• For those journals explicitly requiring that data be posted online, there is no requirement that the statistical code be included, but without the code it is often very difficult to figure out exactly how analyses were done.
• Of existing replication attempts from 1977–2014, 78% fail.

Apparently this key scientific virtue is missing from much work in economics. Not surprisingly, cynicism among economists themselves about the reliability of econometric studies is not unusual.


Another route to closing gaps between theory and evidence often offered by economists is, of course, rigorous statistical inference. The evidence, however, is such that rigorous standards are often not at work. Outside of experimental economics, economists pay almost no heed to considerations of statistical power. The topic is given at most a page or two in standard econometric texts; power calculations are nearly nonexistent in econometric work. As a result, when relationships are shown not to be statistically significant, it is never clear whether that is because they do not exist or because sample sizes are too small.
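The kind of calculation the field thereby skips is elementary. Here is a minimal sketch, under the simplifying (and purely illustrative) assumption of a two-sided z-test on a standardized effect size:

```python
from scipy.stats import norm

def power(effect, n, alpha=0.05):
    """Approximate power of a two-sided z-test: the probability of
    rejecting the null when the true standardized effect is `effect`."""
    z_crit = norm.ppf(1 - alpha / 2)
    se = 1 / n ** 0.5                  # standard error shrinks with sample size
    z = effect / se
    return norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)

for n in (50, 200, 1000):
    print(n, round(power(0.2, n), 2))  # 0.29, 0.81, 1.0
```

With 50 observations, a true effect of 0.2 would be detected less than a third of the time, so an insignificant result there is nearly uninformative – exactly the ambiguity just described.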
Another obstacle to rigorous statistical inference is publication bias – journals only publishing statistically significant findings and thus producing a published literature that is not representative of the actual findings of research. The bias will be in the direction of preexisting theoretical ideas about how things work – findings that reject those preexisting ideas will be buried. There is substantial empirical evidence that this happens in economics. There are statistical ways to look for evidence of publication bias, because fully reported results should display a characteristic distribution of significance values. That seems not to be the case in at least some significant parts of economic research (Paldam 2013). Finally, there is evidence that economics sometimes closes the evidence–hypothesis gap simply by importing normative values. Here I refer, for example, to a very careful and compelling study by Nelson (2014) of publication and confirmation bias reflecting gender stereotypes in the published literature on gender differences in risk aversion. It is standardly reported in the literature that gender differences in risk aversion have been well confirmed, with men being less risk averse than women. Nelson shows fairly conclusively that there is very little evidence for an essential difference between women and men in terms of risk attitudes. Actual studies find very small average differences, with significant evidence of overlap in terms of risk attitudes. Some studies show that women are more risk seeking. Some find no significant difference at all. Past literature and even authors’ own results are interpreted as supporting a substantial gender difference where the results show no such thing. Statistical tests for publication bias are positive: plots of the precision of estimates (1/standard error) do not show the symmetrical pattern that would occur if all results were published but instead a skewed funnel typical of results where insignificant results are not reported. So for this specific body of work there is reason to be an antirealist. How often, when, and where this sort of intrusion of values directly into the confirmation process occurs is entirely an empirical question. Finally, there is some serious work on the history of economics that might be taken to support a social constructivist antirealism. This work documents the interconnection between developments in economics and politics, government, societal institutions, and think tanks. It details disciplinary trends and social processes. Paradigm examples are the extensive corpus of Mirowski (e.g., 2003, 2013). Historians are hesitant to describe themselves as social constructionists. However, to the extent that their histories depict social processes in the development of economic doctrine that do not seem to promote practices based on evidence or truth, they can be taken as arguments for some form of social constructivist antirealism.

5 Conclusion
Standard arguments for and against scientific realism are hard to assess in economics simply because the field does not have the characteristics needed for those arguments to make sense. Constructive empiricism looks especially unhelpful in formulating antirealist arguments when looking at economics. Arguments over realism and antirealism in economics, it seems, are best regarded as local arguments about whether specific pieces of economics tell us the way the world (approximately)

378 Scientific realism and economics is. Unrealistic models are a serious reason for taking an antirealist stance about much economics, but blanket generalizations would be mistaken. Antirealist arguments of a broadly social con- structivist stripe certainly have some applicability in economics.

Acknowledgments Juha Saatsi made helpful comments that improved this chapter.

References
Binmore, K. (2007) Does Game Theory Work? The Bargaining Challenge, Cambridge, MA: MIT Press.
Cartwright, N. (1989) Nature’s Capacities and Their Measurement, Oxford: Oxford University Press.
Duvendack, P.-J. and Reed, R. (2015) “Replications in Economics: A Progress Report,” Economics Journal Watch 12(2), 164–191.
Guala, F. (2012) “Are Preferences for Real?” in A. Lehtinen, J. Kuorikoski and P. Ylikoski (eds.), Economics for Real, London: Routledge, pp. 137–156.
Hacking, I. (2000) The Social Construction of What? Cambridge: Harvard University Press.
Hausman, D. (1992) The Inexact and Separate Science of Economics, Cambridge: Cambridge University Press.
Hausman, D. (1998) “Problems with Realism in Economics,” Economics and Philosophy 14, 185–213.
Hoover, K. (2001) “Is Macroeconomics for Real?” in U. Mäki (ed.), The Economic Worldview, Cambridge: Cambridge University Press, pp. 225–246.
Kincaid, H. (1996) Philosophical Foundations of the Social Sciences, Cambridge: Cambridge University Press.
——— (2000) “Global Arguments and Local Realism about the Social Sciences,” Philosophy of Science 67, S667–S678.
——— (2008) “Structural Realism and the Social Sciences,” Philosophy of Science 75, 720–731.
——— (2017) “Unrealistic Models, Mechanisms, and the Social Sciences,” in H. Leitgeb, I. Niiniluoto, P. Seppälä and E. Sober (eds.), Logic, Methodology and Philosophy of Science: Proceedings of the 15th International Congress, College Publications, pp. 367–381.
Kirman, A. (2010) “The Economic Crisis Is a Crisis for Economic Theory,” CESifo Economic Studies 56(4), 498–535.
Kuorikoski, J., Lehtinen, A. and Marchionni, C. (2010) “Economic Modelling as Robustness Analysis,” British Journal for the Philosophy of Science 61, 541–567.
Mäki, U. (1998) “Aspects of Realism about Economics,” Theoria 13, 301–319.
——— (2011) “Scientific Realism as a Challenge to Economics,” Journal of Economic Methodology 18, 1–12.
Manski, C. (1999) Identification Problems in the Social Sciences, Cambridge: Harvard University Press.
Mirowski, P. (2003) Machine Dreams, Cambridge: Cambridge University Press.
——— (2013) Never Let a Serious Crisis Go to Waste, London: Verso.
Morgan, M. (2012) The World in the Model, Cambridge: Cambridge University Press.
Nelson, J. (2014) “The Power of Stereotyping and Confirmation Bias to Overwhelm Accurate Assessment: The Case of Economics, Gender, and Risk Aversion,” Journal of Economic Methodology 21(3), 211–231.
Paldam, M. (2013) “Regression Costs Fall, Mining Ratios Rise, Publication Bias Looms, and Techniques Get Fancier: Reflections on Some Trends in Empirical Macroeconomics,” Economics Journal Watch 10, 136–156.
Rodrik, D. (2015) Economics Rules: The Rights and Wrongs of the Dismal Science, New York: Norton.
Rosenberg, A. (1992) Economics: Mathematical Politics or Science of Diminishing Returns, Chicago: University of Chicago Press.
Smith, V. (2008) Rationality in Economics, Cambridge: Cambridge University Press.
Wilson, M. (2008) Wandering Significance, Oxford: Oxford University Press.
Ylikoski, P. and Aydinonat, E. (2014) “Understanding with Theoretical Models,” Journal of Economic Methodology 21(1), 19–36.


PART V Broader reflections

30 REALISM AND THEORIES OF TRUTH

Jamin Asay

1 Introduction
The notion of truth has never been far from the ongoing conversation about scientific realism. Scientific realism (and its opposition) is often defined explicitly in terms of truth, and sometimes it is argued that scientific realism (and its opposition) requires a commitment to particular conceptions about the nature of truth. It’s not difficult to appreciate why. The realist perspective maintains that science succeeds (or aims at succeeding) in correctly capturing the nature of reality, and to correctly describe reality just is to give a true theory about it. While it is fairly indisputable that there is some connection between truth and realism, it remains to be seen just how deep the relationship goes. In this chapter, I explore some of the prominent ways that scientific realism and the theory of truth have intersected, and evaluate the arguments that have been offered concerning their relationship. Two basic positions can be articulated when it comes to the relationship between the theory of truth and scientific realism. Those who favor neutrality believe that one’s position on the theory of truth is neutral with respect to one’s position on scientific realism, and vice versa. Even though realism might be defined in terms of truth, it’s a further (and false) claim that realism (or its opposition) must be defined in terms of particular theories of truth. Those who favor a partisan view believe that varieties of scientific realism and anti-realism are tied up in the nature of truth, such that the former are committed to particular perspectives on truth. Of course, whether the neutral or partisan position is correct depends on precisely how one understands what realism and truth amount to, and there is no shortage of options in this regard. Still, it will be worthwhile to explore why various theorists have or have not found views about truth lurking in the debate over scientific realism. I take up the neutral and partisan positions in turn in the following two sections. I begin by considering the case for neutrality, also offering alongside some basic remarks about the various theories of truth at issue. I then evaluate a variety of partisan positions, beginning with Arthur Fine’s natural ontological attitude. I then relate Fine’s views to those of major twentieth-century partisan figures such as Thomas Kuhn, Richard Rorty, and Hilary Putnam, and finally consider some more modest, contemporary partisan views. While I do not address the related notions of approximate truth and truthlikeness, they are taken up by G. Schurz, “Truthlikeness and approximate truth,” ch. 11 of this volume.


2 Neutrality
Let’s begin by examining some canonical statements about scientific realism. According to Stathis Psillos’s account, the realist “regards mature and predictively successful scientific theories as well-confirmed and approximately true of the world” (1999: xvii). The constructive empiricist Bas van Fraassen defines his realist opponent as one who subscribes to the thesis that “Science aims to give us, in its theories, a literally true story of what the world is like; and acceptance of a scientific theory involves the belief that it is true” (1980: 8, emphasis removed). Notice how both of these characterizations of scientific realism employ ‘true’. By presenting realism as the view that science is ultimately interested in truth, Psillos and van Fraassen suggest that realists view science as a guide to learning about reality. There are facts of the matter as to what reality is like, and the job of science is to discover and describe those facts. Anti-realist views are likewise often defined in terms of truth. Some modest forms of anti-realism maintain that science is not interested in the whole truth about reality, but just some restricted set of truths. Constructive empiricism maintains that science aims at empirical adequacy, that is, truly describing the observable aspects of reality (see, e.g., van Fraassen 1980; see also O. Bueno, “Empiricism”, ch. 8 of this volume). Entity realism maintains that science is a tool for discovering the truth about what entities exist, but not facts concerning what those entities are like (see, e.g., Cartwright 1983; see also M. Egg, “Entity realism”, ch. 10 of this volume). Structural realists hold that science is prepared to find the truth about the structure of reality, though not its nature (see e.g., Worrall 1989; see also I. Votsis, “Structural realism and its variants”, ch. 9 of this volume). More thoroughgoing forms of scientific anti-realism deliberately distance themselves from truth. Classic instrumentalism, for instance, is sometimes defined as the view that scientific theories are not even in the business of being true or that their value and purpose is in no way related to their truth (e.g., Smart 1968). By contrast, other strong forms of scientific anti-realism, perhaps under the spell of Kuhn (1970), deny that the notion of truth makes any sense at all, and so the practice and aims of science must be understood without reference to truth at all (cf. Rorty 1972). The central commitment of the neutral perspective is that the accounts of realism and anti-realism that employ truth in the manner described do not thereby saddle realism and its opposition with particular views as to what truth itself is. That is to say, the aforementioned statements of realism and anti-realism can be read as not taking an explicit stand as to whether ‘substantive’ theories of truth (such as the correspondence theory or coherence theory) or more ‘deflationary’ theories of truth (such as disquotationalism and prosententialism) are correct. Substantive theories of truth take the notion of truth to be a robust one worthy of philosophical analysis. Such views aim to define truth in terms of its correspondence with reality (e.g., Newman 2002) or in terms of a kind of coherence that obtains between all the members of a particular set of propositions or beliefs (e.g., Young 2001). Deflationary theorists (e.g., Horwich 1990) find such elaborate accounts misguided, as for them the principal role for truth to play is as a kind of expressive device.
The truth predicate enables us to reassert claims made by others (“What she said is true”) and make simple generalizations about many claims at once (“Everything he said last month is true”). According to deflationists, when one predicates truth of, say, the sentence ‘There are electrons’, one is not ascribing to the sentence some complex property in need of analysis, but rather just claiming once again that there are electrons. Important for our discussion is that these statements of realism can be neutral with respect to the debates between substantive and deflationary theorists. For example, suppose realism is as van Fraassen maintains and that the realist holds that in accepting a scientific theory one believes it to be true. A deflationist will point to this thesis as a paradigm instance of the utility of the truth predicate. For to say that a scientific theory is true is no more than to say that if the theory entails

that aardvarks anticipate avalanches, then aardvarks anticipate avalanches, and if the theory entails that boa constrictors bustle in Bozeman, then boa constrictors bustle in Bozeman, and so on. In calling a scientific theory true, one asserts all of the implications of the theory without having to assert each one individually. What one does not do is ascribe some particular property (such as ‘corresponding to the facts’) to the theory. Hence, what it is for a theory to be true is that, for all p, if the theory entails that p, then p. By rejecting substantive theories of truth, deflationists do not thereby deny that truth has some connection to scientific realism. Likewise, there is nothing in van Fraassen’s or Psillos’s statements of scientific realism to disturb a substantivist about truth such as the correspondence theorist. In fact, substantivists about truth can agree with deflationists about the logical and expressive functions of ‘true’. They simply go further, adding that what it is for particular implications of the theory to be true is for them to stand in a certain relation of correspondence, coherence, or what have you. Hence, the neutralist concludes, to define realism in terms of truth is not yet to assign a specific theory of truth to any version of scientific realism or anti-realism. (See also Devitt 1984 and Horwich 1996 for more neutralist arguments.) At least, core commitments of scientific realism offer no guidance in choosing between the various substantive theories of truth and deflationary theories. That said, the stronger view that does find partisan views of truth within perspectives on the realism debate has a long history, which is the focus of the subsequent sections.

3 Partisanship: the natural ontological attitude
In discussing the history of realism, Putnam writes, “That one could have a theory of truth which is neutral with respect to epistemological questions, and even with respect to the great metaphysical issue of realism versus idealism, would have seemed preposterous to a nineteenth-century philosopher” (1978: 9). Indeed, the shift from idealism toward realism in the work of philosophers such as Bertrand Russell and G. E. Moore in the beginning of the twentieth century was accompanied by their eventual adoption of the correspondence theory of truth. The idealism of their forerunners went hand in hand with the coherence theory of truth (e.g., Joachim 1906). That the debate over realism and idealism (in whatever domain) should have great implications for the theory of truth (and vice versa) is understandable when the theories of truth in question are loaded with metaphysical content and consequences. Not all views about the nature of truth are particularly metaphysical, however. Deflationary views of truth, for instance, offer positive views as to what the expressive functions of the truth predicate are but remain silent on broader metaphysical questions such as the tenability of realism. The same holds for primitivist theories of truth (e.g., Asay 2013a). Nevertheless, some philosophers remain convinced that truth plays an indispensable role when it comes to discerning the correct metaphysical interpretation of science. This attitude has been developed quite thoroughly by Arthur Fine (1984a, 1984b, 1986). Fine, for instance, goes so far as to claim that the scientific realist “adopts a standard, model-theoretic, correspondence theory of truth” (1984a: 52), whereas the scientific anti-realist adopts a pragmatic, instrumentalist, or conventionalist theory of truth (1984b: 97). According to Fine, there is a source of common ground between the scientific realist and anti-realist. He calls this the “natural ontological attitude” (NOA) and defines it as the acceptance of scientific theories and taking “them into one’s life as true” in just the way that one accepts the evidence of one’s senses (1984b: 95). Where realism and anti-realism differ is in what sort of theory they attach to ‘true’. The realist accepts scientific theories as being ‘correspondence true’, whereas the anti-realist accepts scientific theories as being ‘instrumentally true’, or something similar. Fine’s own preference is to reject both perspectives and just accept scientific theories as being true but without adding on top of that some theory as to what the nature of truth consists in.


Fine’s presentation of the debate has an appealing structure and theoretical simplicity. But it is not at all clear that he has accurately characterized the role that truth plays in the debates over scientific realism. One common reaction to Fine’s view is that he is really espousing a realist perspective, despite his claims to being above the debate and advocating ‘non-realism’. Consider, for instance, this passage:

When NOA counsels us to accept the results of science as true, I take it that we are to treat truth in the usual referential way, so that a sentence (or statement) is true just in case the entities referred to stand in the referred-to relations. Thus, NOA sanctions ordinary referential semantics and commits us, via truth, to the existence of the individuals, properties, relations, processes, and so forth referred to by the scientific statements that we accept as true. (1984b: 98)

It is easy for a reader of Fine to find a strong commitment to scientific realism in this passage (e.g., Musgrave 1989) as well as a strong commitment to metaphysical realism about contentious entities such as properties and relations. For one thing, some correspondence theorists of truth might recognize their own view in this statement of Fine’s. What more is there to a correspondence theory of truth than a view that explicates the relationship between statements and the world via relations of reference? (See, e.g., Lynch 2009.) Correspondence theories come in a variety of forms, but at least some who carry the banner of correspondence theory are attracted to views like those that Fine is happy to accept. Furthermore, in attributing NOA to all parties to the debate, Fine mischaracterizes many extant positions. Fine accepts that ‘Electrons exist’, being a consequence of contemporary physical theories, is true, and thus he is committed to the existence of electrons, to which that sentence refers. That commitment is supposed to be common ground between the realist and anti-realist (as well as Fine’s non-realist). Yet it’s hard to imagine many scientific anti-realists accepting this consequence, since the motivations for their view often lie precisely in being wary of ontological commitments to unobservable entities, such as electrons. Van Fraassen’s constructive empiricist, for instance, refuses to believe that ‘Electrons exist’ is true; agnosticism is the best stance regarding statements concerning the unobservable aspects of reality, even though such statements may be regarded as in fact being true or false. Other views that aim to reinterpret scientific theories so that they are not committed to unobservable entities will likewise resist NOA’s plea to accept the apparent referential commitments of scientific theories. Hence, it appears that Fine’s attempt to define the debate over scientific realism fails to reflect the actual debate between realists and anti-realists. What is especially relevant for our purposes is that the disputes between, say, constructive empiricists and realists do not seem to turn at all on what theory of truth is correct. The constructive empiricist, for instance, recommends agnosticism regarding the truth or falsity of the parts of scientific theories that concern the unobservable. They thereby reject NOA, which endorses the entire truth of accepted scientific theories. But their agnosticism is not a function of their theory of truth; whatever truth itself is, constructive empiricists stay neutral as to whether what scientific theories say about the unobservable is true. Constructive empiricists may maintain their stance, regardless of whether they believe in correspondence theories of truth (to which van Fraassen seems sympathetic early on: see 1980: 197), deflationary theories (to which van Fraassen is currently more sympathetic: see 2008: 249), or something else. It is their epistemology that drives their agnosticism, not their theory of truth. Fine’s defense of partisanship between truth and realism faces some severe difficulties. His view predicts that realists and anti-realists alike accept in full the findings from successful scientific

theorizing and that their differences emerge only after they engage in more philosophical debates over, for example, what the best theory of truth is. As Fine might put it, philosophers don't disagree about the science itself but rather about the best interpretation of that science. Fine, in turn, suggests offering no interpretation at all. In practice, this is not the debate that actually takes place. Theorists offer arguments about underdetermination (e.g., van Fraassen 1980), the pessimistic induction (e.g., Laudan 1981), the explanation of scientific success (e.g., Smart 1963), and so on in order to determine whether we should believe in the truth of successful scientific theories. Philosophers of science engage in arguments that assess directly the metaphysical and epistemological characteristics of scientific inquiry: does the history of science give us reason to distrust the reliability of science as a source of truth? Are the posits of scientific theories metaphysically problematic? Is it epistemologically justified to infer the existence of objects that cannot be perceived by the senses? These questions form the heart of the debate over scientific realism. Concerns about the nature of truth have not been at the forefront and are largely irrelevant to the arguments at hand. Why, then, does Fine put so much weight on the notion of truth in his account of scientific realism? The answer, I gather, involves the fact that Fine is responding to a philosophical tradition that has an inflated idea as to the kind of truth that realists have traditionally accepted. This conception of truth has little to do with the contemporary dialectic in the theory of truth, which often involves the debate between deflationary theories and contemporary correspondence theories. But it is a notion that has been of importance historically, especially in the mid-century metaphysical debates carried out by such seminal figures as Kuhn, Rorty, and Putnam.

4 Partisanship: Kuhn, Rorty, and Putnam
Kuhn's work on the history of scientific revolutions is often thought to have radical consequences for the very idea of truth, and for whether scientific theories are in the business of pursuing it. On Kuhn's view, the theories generated by scientists working in different paradigms are incommensurable; they speak, in effect, different languages. As a result, there is no sense to be made of progress from one scientific paradigm to another, at least by way of being measured by their attempts to capture a true, shared picture of reality. The very idea of a shared reality, a single world described more or less accurately by successive scientific theories, is something that Kuhn finds objectionable. Instead of claiming that scientists from different paradigms pursue their investigations with different worldviews, Kuhn prefers to say that "after a revolution scientists work in a different world" (1970: 135). (See also H. Sankey, "Kuhn, relativism and realism", ch. 6 of this volume.) The notion of truth plays very little role in The Structure of Scientific Revolutions, as Kuhn himself notes (1970: 170). This absence is expected, given Kuhn's skepticism about the role of truth in science. In the postscript to the second edition, Kuhn makes his fairly dismissive attitude explicit:

A scientific theory is usually felt to be better than its predecessors not only in the sense that it is a better instrument for discovering and solving puzzles but also because it is somehow a better representation of what nature is really like. One often hears that successive theories grow ever closer to, or approximate more and more closely to, the truth. Apparently generalizations like that refer not to the puzzle-solutions and the concrete predictions derived from a theory but rather to its ontology, to the match, that is, between the entities with which the theory populates nature and what is ‘really there’.

Perhaps there is some other way of salvaging the notion of ‘truth’ for application to whole theories, but this one will not do. There is, I think, no theory-independent way to reconstruct phrases like ‘really there’; the notion of a match between the ontology of a theory and its ‘real’ counterpart in nature now seems to me illusive in principle. (1970: 206)

What we can learn from this passage is how Kuhn understands his realist opponent. When a realist claims that a theory is true, what Kuhn hears is that the theory, in a theory-independent way, matches what is 'really there'. The scare quotes here are revealing. They suggest that the anti-Kuhnian uses those words in a way that Kuhn does not. Presumably, the realist has some inflated notion of 'really there' of which Kuhn is skeptical. Perhaps Kuhn's idea is that the realist thinks that there is some privileged description of the universe, and that a theory that fails to employ its terms fails to be true, despite whatever predictive successes it may enjoy. (Nowhere else in the book does Kuhn use the phrase 'theory-independent'.) If so, for Kuhn truth is essentially connected with some special vocabulary that provides the correct description of reality, whereas scientific theories are always "dependent" upon their own vocabularies, which may or may not correspond to the privileged one. To better appreciate the perspective Kuhn is endorsing, it will be worthwhile to consider the views of others who also approve of the general idea. Kuhn's claim seems to be that realism is committed to theories being 'really' true in a 'theory-independent' way. Such a commitment is too metaphysically loaded for Kuhn's tastes. Fine echoes this thought when he describes the realist as one who adds onto NOA "a desk-thumping, foot-stamping shout of 'Really!' [. . . T]he realist wants to explain the robust sense in which he takes these claims to truth or existence, namely, as claims about reality – what is really, really the case" (1984b: 97). It might seem juvenile to joke as to whether Fine intends the realist position to be one that takes scientific theories to be really true or one that takes scientific theories to be really, really true. But the joke has some force, since Fine crucially relies on the distinction between a theory being true and a theory being really true in order to separate his view from realism (much as Kuhn relies on distinguishing objects existing from their 'really' existing). The question is what the difference is supposed to be. Fine again points to the correspondence theory of truth as the culprit. Though Fine personally takes the correspondence theory to be empty of content – it is "an arresting foot-thump and, logically speaking, of no more force" (1984b: 97) – it at least provides the needed wedge between realism on the one hand and Kuhn and Fine on the other. The same dichotomy drawn by Kuhn and Fine also appears in the work of Richard Rorty (1972). He, too, draws a distinction between the way the realist understands truth and the world and the way that the realists' opponent understands those notions. When discussing the idea that the world determines what is true and false (as opposed to a coherentist view that says truth is a matter of coherence between beliefs, say), Rorty identifies an ambiguity. He writes, "All that 'determination' comes to is that our belief that snow is white is true because snow is white, that our beliefs about the stars are true because of the way the stars are laid out, and so on" (1972: 662). This set of claims is unimpeachable, much as (on Fine's view) the commitments of NOA are. But, Rorty says, this sense of determination is not enough for the realist: "What he wants is [. . .]
the notion of a world so ‘independent of our knowledge’ that it might, for all we know, prove to contain none of the things we have always thought we were talking about” (1972: 662–663). What is clear is that Rorty, like the others, portrays the realist as someone thirsting after something always elusive. For Rorty, what is distinctive about realist truth is that it could obtain in spite of everything we believe about the world being wrong. What appears to concern Rorty is the possibility (according to realism) that a scientific theory could manifest any number

of empirical and practical virtues and yet somehow fail to be true (perhaps because it fails to be couched in some privileged vocabulary). Common among Fine, Kuhn, and Rorty is the idea that essential to scientific realism is a doctrine about truth that maintains that it 'really' corresponds to the world, is 'independent' of theory, and that we could in principle be wrong about it. The difficulty with pressing this line, however, is that it risks making a straw man out of the realist. After all, notice that Fine accepts that theories are true, Kuhn admits that scientific theories aim at making correct predictions, and Rorty believes that the world determines what is true. These sound like the sorts of views that realists espouse: science aims at telling a true (and not merely useful) story about the world, and it is the world itself (and not our beliefs and practices) that decides which theories are true or false. In order to distance themselves from the realists, Kuhn, Rorty, and Fine insert extra content into the realist view – theories aren't just true, they're really true – but at the same time they seem to suggest that this additional content isn't really additional content at all. A stomp of the foot does not add cognitive content to an utterance. That's what Fine would say, but then it's unclear why Fine thinks it's a stomp of the foot that separates his philosophical view from the realist's. The concern, ultimately, is that either these supposedly non-realist or anti-realist stances adopted by Kuhn, Fine, and Rorty are just realist perspectives after all, or that these theorists have unfairly mischaracterized the realist position. Realists, it seems to me, have more than adequate responses to make against each of these critiques. The realist may claim that Fine's 'real truth' is nothing more than truth, and so Fine needs to do more to successfully distance himself from realism. Kuhn's idea of 'theory-independent truth' needs some analysis that doesn't render the idea innocuous. For example, 'Kuhn is the author of The Structure of Scientific Revolutions' is true, even though some theory T1 entails it, and some theory T2 entails its negation. Kuhn is the author of The Structure of Scientific Revolutions regardless of what any theory has to say about the matter. That is a sense of theory-independent truth the realist is quick to accept; if the realist is committed to some other notion (such as a privileged vocabulary that is indispensable for describing the 'real' nature of reality), it had better be identified and explained. As for Rorty, the realist sees nothing more than a sensible epistemological fallibilism where Rorty detects grand metaphysical substance. That we might be wrong about even our most successful theories remains a possibility, however remote. The sensible realist will claim that we're not in fact wrong about the theory of evolution by natural selection, for example, but that in principle we could be mistaken about it: the evidence against the theory could forever lie beyond our ken. This unfortunate possibility is far-fetched, but a modest fallibilist epistemology acknowledges it. Rorty is mistaken to see inflated metaphysics lurking behind it. Hilary Putnam, for his part, has articulated a similar mindset that attempts to insert a gulf between realists and anti-realists.
He writes, "What I believe is that there is a notion of truth, or, more humbly, of being 'right', which we use constantly and which is not at all the metaphysical realist's notion of a description which 'corresponds' to the noumenal facts" (1990: 40). First notice Putnam's reluctance to even speak about truth: it's more humble to speak of being right and avoid appeal to 'truth' at all. Given the contemporary context in the theory of truth, this supposed humility is uncalled for. To speak of something being true need not amount to anything more than making an assertion. When I assert that it's true that Kuhn authored The Structure of Scientific Revolutions, I'm only asserting that Kuhn authored The Structure of Scientific Revolutions. To think that the use of 'truth' requires any metaphysical apology is to assume from the outset that the notion is inflated. Second, we see again the common refrain that the realist is committed to a kind of truth that goes beyond what Putnam can accept in good conscience. Putnam invokes Kant's phenomena/noumena distinction, which is a clue as to what Kuhn, Fine, and Rorty may also suspect is distinctive about realism. Non-realists, they say, can accept that truth is correspondence to

reality in some sense, but not in the distinctly realist sense of truth as correspondence to noumenal reality (or the "World", as Fine [1986] writes with italics and a capital 'W'). I am skeptical that scientific realists rely on this Kantian distinction in developing their view (see also Musgrave 1989). The distinction plays no role in Psillos's defense of realism, and for all of van Fraassen's talk of 'saving the phenomena', The Scientific Image never once discusses the noumena. Perhaps, though, there is a way of articulating the basic thought around which Putnam and the others seem to be circling, drawing on the idea of a 'privileged language' from above. Imagine that there is some language – distinct from any extant natural language – that 'carves nature at the joints'. That is to say, all of the predicates of this language refer to the genuine, natural properties of the world (as opposed to gerrymandered properties like grue). The other features of the language – its quantifiers and connectives, for example – also map onto the genuine structure of the world. This language may be used to describe the world 'as it really is' in a way that our humble natural languages that are overly concerned with the more derivative parts of reality never could. This is the language of God, or the language of 'the book of the world'. Facts expressed in this language are the noumenal facts. Sure, it's true that Kuhn authored The Structure of Scientific Revolutions. But presumably, that claim reduces down to some complicated set of facts of microphysics that can be expressed using the privileged language: those are the really true facts, the ones about quarks and charms, not about men and monographs. That there is such a description of the world, and that science aims at it, is the distinctive commitment of realism. If this sort of picture is what the partisan opponents have in mind when they speak of 'real', 'theory-independent', or 'noumenal' truth, I again wonder on behalf of scientific realists as to whether this is a necessary component of the latter's view. To be sure, the account has its defenders, Ted Sider (2011) being a prominent recent example. Yet it is hardly necessary for articulating the kind of perspective on science that realists usually defend. The joint-carving view is a broader metaphysical perspective one might have, but it's one that reaches far beyond the interests of the scientific realism debates and extends into the canonical discussions about metaphysics, empiricism, transcendental idealism, and the like. These are perennial philosophical topics of great interest, but to see them as essential to capturing the basic perspective of the scientific realist is to inflate the view beyond what's warranted. The partisans, it seems, are guiltier of inflating scientific realism than are realists of inflating truth. Summing up, the defenders of the partisan perspective on truth and realism – at least those coming to the issue from the non-realist camp – assign to scientific realism a conception of truth that is thoroughly metaphysical and involves a commitment to views that speak to a host of grand philosophical questions such as the tenability of transcendental idealism and the like. The non-realists appear to take this tack because it allows them to embrace the truth of scientific theories while disavowing the realist position: yes, scientific theories are true of course. What means do we have of discovering the truth about the world better than science?
But whether they're 'really true' or describe 'the World' in all its noumenal glory – these are questions that either escape our grasp, are empty, or don't even make sense. What's revealing about this conception of the debate is that it's coming from the non-realist camp; as a result, we should be wary of thinking that they have correctly described their opponents' positions. When the only option you leave open to your opposition is that their view is meaningless, is empty, or entails skepticism, there is cause for re-evaluation, and concern that you have erected a straw man. In the face of this sort of realist retort – ordinary truth itself is the realist's standard for science, not "joint carving in the language of God noumenal truth" – the partisan opponents worry that realism is reduced to triviality or banality (cf. Rorty 1986: 354 and Putnam 1990: 32). Who would deny that scientific theories are true (even if they're not a source of God's-eye-perspective truth)? Well, plenty of those deeply engaged in the realism debate. Worrall (1989) doubts science's ability

to teach us the truth about the nature of light. Van Fraassen believes that scientific investigation gives us no evidence as to the existence of DNA molecules and electrons (1980). Laudan argues that the history of science shows that we ought to be skeptical about even the best of our current theories (1981). None of these views accepts that we should blithely commit to the truth of our best science; furthermore, they do this independently of offering philosophically loaded conceptions of truth. If committing to the truth of theories automatically involved such an inflated (or at least highly controversial) metaphysical worldview, then there would be independent reason to pause before accepting the truth of our best science. But what the actual practitioners in the philosophy of science demonstrate is that even if we commit to a fully deflationary conception of truth, there is still reason to be cautious about accepting the truth of scientific theories. Truth – even when brought back down to earth from noumenal heaven – is still a major commitment, and one to which many philosophers of science object. If the non-realist partisans are happy to commit to it nonetheless, that strongly suggests that they should be interpreted as reluctant realists.

5 Modest partisanship
The partisan views discussed earlier have all come from those who reject the label of scientific realism. In effect, I have argued that their criticisms are better directed at those who, like Sider, embrace a particularly loaded (i.e., joint-carving) broader metaphysical outlook. Defenders of scientific realism endorse a more modest thesis that need not take a stand on some of the grander philosophical questions of metaphysics. In this section, I address some more contemporary perspectives on truth and realism that don't infuse the issue with such imposing philosophical implications and yet still find a substantive role for truth to play in the understanding of science and its success. Philip Kitcher (2002) has defended what he calls a "modest correspondence theory of truth", which he more or less equates with the basic commitment of scientific realism (or at least his brand of realism). Kitcher's modest theory "doesn't suppose that there are entities, facts, to which true sentences correspond" (2002: 347). Nor, presumably, does Kitcher's realist employ the even grander metaphysical theses adumbrated by the partisans of the previous sections. Kitcher's positive account of truth maintains that truths are about objects that exist independently of those who make utterances concerning them, and that a causal theory of reference is required to explain how linguistic tokens represent mind-independent objects. It is the second of these commitments that Kitcher believes separates his modest realist and correspondence theorist from the deflationist. The adoption of this causal theory of reference is useful to the realist because it buttresses the classic realist argument that truth offers the best explanation of ordinary and scientific success, both practical and theoretical. Truth can explain success because a theory being true means that there are causal connections between how the world is and how the theory represents the world. Maps, by analogy, are successful when they stand in the right causal relationships to the part of the world that they claim to model. All told, Kitcher identifies scientific realism as the view that many of the well-established claims made by science are true in the sense offered by his modest correspondence theory of truth. There is reason to adopt realism in this sense because the kind of truth (and, importantly, reference) posited by the view is essential for explaining the many successes of mature science. Here we find a kind of partisanship – which theory of truth you maintain makes a difference as to what view on realism is most tenable – albeit one that is not so burdened with the metaphysical extravagances of the past. Still, Kitcher's presentation faces its own challenges. Notably, Kitcher defines deflationism about truth in terms of a thesis about reference: that all there is to say about reference are sentences of the form '"a" refers to a, if a exists'. This thesis is best thought of as a

deflationary view about reference, and so is separable from deflationary views about truth. It's true that Paul Horwich (perhaps the most ardent defender of deflationism) endorses both (1990: 121), but that is not to say that the two views are the same or that one entails the other. For example, one might adopt a causal story about how 'Kuhn' has come to refer to Kuhn and how 'The Structure of Scientific Revolutions' has come to refer to The Structure of Scientific Revolutions. The proponent of that account about reference could still maintain that there is no substantial property in virtue of which 'Kuhn is the author of The Structure of Scientific Revolutions' is true. There is nothing more to the sentence being true than Kuhn being the author of the book. The upshot for Kitcher is that while realism might be aided by a partisan view on reference, truth ultimately has nothing to do with it. Other modest takes on partisanship might, like Kitcher's, try to find some room for how the theory of truth can lead to progress in the discussion of scientific realism. Stephen Leeds (2007) considers (and rejects) some possible avenues down which realists might venture. His view, with which I am sympathetic, is that deflationary views are perfectly sufficient for the tasks scientific realists take up. To defend partisanship, theorists need to come up with some task for a substantive theory of truth to perform that deflationism cannot handle. See, for instance, Jarrett Leplin's argument for why a deflationary 'redundancy' theory can't capture the necessary explanatory import of truth needed for realism (1997: 17–19).

6 Conclusion
The central questions about the relationship between truth and realism are (1) does truth figure into the definition of scientific realism? and (2) are particular views about truth necessary to the debate over scientific realism? I have suggested a positive answer to the first question and presented a number of difficulties for positive answers to the second. Crucial to the realist position is that science succeeds (or aims to succeed) at discovering the (approximate) truth about reality, though it may be indifferent as to whether the correct theory of truth is offered by correspondence or deflationary theories. Whether realism is exhausted by its relationship to truth is another matter. I have argued elsewhere, for instance, that realists need to commit not just to the (approximate) truth of scientific theories but also to some sort of mind-independent reality of unobservable objects that grounds the truth of scientific theories (Asay 2013b). Otherwise, a phenomenalist view that accepts that scientific theories are true but made true exclusively by a realm of sense data would qualify as a realist theory. (Van Fraassen speaks to such concerns by requiring that theories be literally true, but I take the issue to be a metaphysical one, not a semantic one.) Given the sheer number of different theories that have been defended in the name of scientific realism, the correspondence theory of truth, and others, it is inevitable that some theorists will develop theories about realism and truth that are necessarily connected; Kitcher provides just one example. Nevertheless, it remains plausible that though truth and realism are intimately connected, realism need not adopt a stance on what exactly truth is. That may come as a shock to those with nineteenth-century sensibilities, but times have changed. And for the better: neutrality is the preferred view, methodologically speaking, as it allows us to develop views with little risk of begging the question against other, potentially quite different, views on distinct matters.

References
Asay, J. (2013a) The Primitivist Theory of Truth, Cambridge: Cambridge University Press.
——— (2013b) "Three Paradigms of Scientific Realism: A Truthmaking Account," International Studies in the Philosophy of Science 27, 1–21.

Cartwright, N. (1983) How the Laws of Physics Lie, Oxford: Clarendon Press.
Devitt, M. (1984) Realism and Truth, Princeton: Princeton University Press.
Fine, A. (1984a) "And Not Anti-Realism Either," Noûs 18, 51–65.
——— (1984b) "The Natural Ontological Attitude," in J. Leplin (ed.), Scientific Realism, Berkeley: University of California Press, pp. 83–107.
——— (1986) "Unnatural Attitudes: Realist and Instrumentalist Attachments to Science," Mind 95, 149–179.
Fraassen, B. C. van (1980) The Scientific Image, Oxford: Clarendon Press.
——— (2008) Scientific Representation: Paradoxes of Perspective, Oxford: Clarendon Press.
Horwich, P. (1990) Truth, Oxford: Basil Blackwell.
——— (1996) "Realism and Truth," Philosophical Perspectives 10, 187–197.
Joachim, H. H. (1906) The Nature of Truth, Oxford: Clarendon Press.
Kitcher, P. (2002) "On the Explanatory Role of Correspondence Truth," Philosophy and Phenomenological Research 64, 346–364.
Kuhn, T. S. (1970) The Structure of Scientific Revolutions (2nd ed.), Chicago: University of Chicago Press.
Laudan, L. (1981) "A Confutation of Convergent Realism," Philosophy of Science 48, 19–49.
Leeds, S. (2007) "Correspondence Truth and Scientific Realism," Synthese 159, 1–21.
Leplin, J. (1997) A Novel Defense of Scientific Realism, New York: Oxford University Press.
Lynch, M. P. (2009) Truth as One and Many, Oxford: Clarendon Press.
Musgrave, A. (1989) "NOA's Ark – Fine for Realism," Philosophical Quarterly 39, 383–398.
Newman, A. (2002) The Correspondence Theory of Truth: An Essay on the Metaphysics of Predication, Cambridge: Cambridge University Press.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
Putnam, H. (1978) Meaning and the Moral Sciences, Boston: Routledge and Kegan Paul.
——— (1990) "A Defense of Internal Realism," in J. Conant (ed.), Realism with a Human Face, Cambridge: Harvard University Press, pp. 30–42.
Rorty, R. (1972) "The World Well Lost," Journal of Philosophy 69, 649–665.
——— (1986) "Pragmatism, Davidson and Truth," in E. LePore (ed.), Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson, Oxford: Basil Blackwell, pp. 333–355.
Sider, T. (2011) Writing the Book of the World, Oxford: Clarendon Press.
Smart, J.J.C. (1963) Philosophy and Scientific Realism, London: Routledge and Kegan Paul.
——— (1968) Between Science and Philosophy, New York: Random House.
Worrall, J. (1989) "Structural Realism: The Best of Both Worlds?" Dialectica 43, 99–124.
Young, J. O. (2001) "A Defence of the Coherence Theory of Truth," Journal of Philosophical Research 26, 89–101.

31
REALISM AND METAPHYSICS

Steven French

1 Introduction
Scientific realism may be crudely characterised as insisting that science tells us how the world is, in both its observable and unobservable aspects (for a useful consideration of the different ways realism is defined, see Chakravartty 2015). An obvious question, then, is what does this insistence that 'science tells us how the world is' amount to? An equally obvious answer is that this involves simply reiterating the core features of the relevant theory, as it is stated or presented 'scientifically' (that is, in the relevant papers, conference presentations, textbooks, etc.). However, such an answer might seem unhelpful because of the feeling that what lies behind that demand to know 'how the world is' is the idea that the relevant knowledge must involve something more than what is expressed by simply repeating the relevant core features of the theory. So, for example, consider what physics tells us about how the world is with regard to space and time. If the realist were to simply present the core equations of General Relativity and declaim 'There! That is how the world is!', the reaction would be justifiably dismissive (certainly from other philosophers or lay-folk, at least!). First of all, the world cannot literally be like that, unless some form of radical Platonism is being adopted, according to which the world is understood as inherently mathematical. So we need to interpret these equations. But, second, that interpretation must go beyond the partial interpretation of the logical positivists and their ilk, whereby the equations are connected via some appropriate bridge principles, correspondence rules, or 'dictionary' to what is observed – at least as far as realism is concerned. Thus, third, and not surprisingly, realism should be committed to giving an answer to the question, "what does this 'science tells us how the world is' amount to?" And that should be articulated in terms of providing an interpretation of, in this case, Einstein's equations, in a way that goes beyond relating them to the observable and not only invokes such features as 'spacetime', 'mass-energy', and so on, but goes further, in some sense, by incorporating that 'something more' that I mentioned earlier. The question then is: how much further should the realist go in articulating these (unobservable) features in order to 'flesh out' what science is telling us about how the world is? Again, one response might be, 'As far as scientists are willing to go and no further'. However, on the one hand, that might not be very far at all, and again one might be left feeling unsatisfied, for the same reasons as before. On the other, some scientists go much further than others in articulating such features, where this articulation involves situating those features in some broad, and broadly understood,

metaphysical framework. And indeed, some realists – whether scientists or philosophers – would argue that they should go even further, rendering that metaphysical framework more precisely and drawing on available metaphysical schemes and devices in order to do so. The central question, then, that I shall consider in this chapter is: how much further should we go in articulating what science tells us about how the world is? Or, to put it another way and specifically in the realist context, to what extent should our realism be metaphysically informed? Again, one obvious answer might be 'only as much as it strictly needs to be'. Perhaps this might be motivated by a reaction to those anti-realists who urge us to resist the seductive temptation of metaphysically informed realism and say goodbye to metaphysics entirely (van Fraassen 1991: 480–482). This would put the realist at the 'shallow' end of the spectrum (Magnus 2012), with perhaps the shallowest form of realism adopting the unhelpful answer noted above: accepting that there are electrons, for example, but refusing to go any further and state what sorts of things electrons are – or indeed consider whether they should be regarded as things at all – beyond what physics tells us. Presumably such a realist would also likewise refuse to engage in metaphysical consideration of the nature of the properties, such as charge, possessed by electrons and other particles, or, indeed, even whether and in what sense they might be said to 'possess' them. On such a view, realism would become (almost) entirely epistemic and would involve only considerations of the grounds for believing theories (or the appropriate explanatory parts or features of them) to be true, of the appropriate theory of reference for the relevant terms (if deemed appropriate), of the appropriate responses to underdetermination and the pessimistic meta-induction, and so on. Perhaps some would be comfortable with such an epistemically rich, metaphysically poor form of the realist stance, but, first of all, it is hard to make out what 'truth' and 'reference' are supposed to amount to on such a view. If the correspondence theory of truth to which the realist typically adheres is supposed to relate propositions and states of affairs, then what is the nature of the latter? And if reference is supposed to take us from linguistic terms to entities, how are these to be conceived? Second, of course, even those realists who are happy to paddle in the shallow end typically find themselves talking about electrons as 'objects' or their properties as being 'possessed' by them and in general adopt a more or less 'naïve' and quite general metaphysics that allows them to answer these questions regarding truth and reference but which, clearly, deserves critical investigation. And this move into slightly deeper metaphysical waters might well be motivated by something along the lines of Chakravartty's thought that 'One cannot fully appreciate what it might mean to be a realist until one has a clear picture of what one is being invited to be a realist about' (Chakravartty 2007: 26). Describing electrons as 'objects' that 'possess' 'properties' (however all of those terms are understood) can be seen as going some way towards providing such a 'clear picture'. More generally, one can argue that such a picture can be filled in by drawing on an appropriate set of metaphysical notions, tools, and assorted devices, tailored, one would hope, to the theoretical context concerned.
But of course, some may feel, first, that the more metaphysical shading we include in the picture, the less clear it becomes and, second, that the set of tools offered by current metaphysics is simply not 'fit for purpose' when it comes to colouring in the picture offered by modern physics, leaving us with the choice of either coming up with some 'made-to-measure' devices of our own or retreating to the shallows. So, now in addition to the core question given above, namely how much deeper should one go from the very shallowest form of realism, we have the further issue: if one is going to venture further down the spectrum, as it were, can one draw on any of the tools that current metaphysics has to offer? I shall argue that given the above characterisation of realism, we should 'go deep', and that in response to the issue just mentioned, current metaphysics does have some tools that we can appropriate and deploy to obtain a 'clear picture' of what we are being asked to be realists about.

2 Gaining clarity by moving to the modal
Let me begin by considering in more detail this idea that realists need a 'clear picture' of what science is telling us. Obviously we should not take this too literally, as a demand that we have to come up with some kind of visualisable image of electrons or quarks or space-time or whatever. Although there are interesting issues to do with the role of visual imagery in model construction, and although some scientists have extolled the value of visualisation as a heuristic device, what is intended here, as already indicated, is something along the lines of providing some appropriate interpretation of the theory, which goes beyond simply adverting to the relevant formal machinery, with attendant minimal interpretation of the relevant elements in terms of 'electrons', 'quarks', and the like. So, consider our example of the electron again: in the classical context, at least, one might want to begin to put together a 'clear picture' of it not only as an object but as an individual object (however that individuality might be grounded), which possesses certain properties, such as charge, which in turn can be regarded as intrinsic (however that may be articulated) . . . and so on. Indeed, physicists themselves often find it necessary to construct at least the beginnings of such a clear picture or understanding in order to further advance their work. Thus Boltzmann, for example, in his famous work, Lectures on Gas Theory ([1896] 2011), which lays down the basic foundations of statistical mechanics, makes it clear that the gas atoms he was considering should be regarded as individuals; otherwise the combinatorial procedures he sets out would make no sense. And one can see the advantage of these kinds of moves, insofar as that understanding rests on a form of metaphysics that is essentially contiguous with that of everyday entities: gas atoms, electrons, and so on are individual objects possessing properties, just like chairs, trees, and people. Now, such an acknowledgment immediately invites the further question: what is it that grounds or underpins this individuality of the electron or the intrinsic nature of charge? Here one can appeal to a familiar metaphysical move: going modal. Thus we might follow the likes of the Scholastic philosopher Suarez and many metaphysicians since in reflecting on a possible world in which there is only the object concerned – an electron in this case – and concluding that since there is nothing in that world from which the object can be distinguished, individuality must be at least conceptually distinct from distinguishability. This may then motivate a view of individuality as grounded in some form of haecceity or primitive thisness, for example. Likewise, in considering what it is that renders a property intrinsic, we might reflect again on this possible world and conclude that intrinsicality is associated with those properties that may be possessed by such 'lonely' objects – that is, in the absence of anything else. Again, this may be as 'deep' as the realist wishes to go and she may feel that the picture (of gas atoms or electrons in the context of classical physics) that one ends up with – of individual objects possessing intrinsic properties – provides sufficient understanding for the purposes of fleshing out her realism.
The metaphysician might press her to go even further, insisting that her understanding is incomplete until, say, she explicates what she means by ‘property’ – is it an instantiated universal or trope-like particular, for example? In response the realist may throw her hands up in the air and declaim, ‘I have as clear a picture as I need for my purposes!’ At this point, perhaps, we can only urge that some kind of balance be struck between the demand for such a clear metaphysical picture and the fear of being entangled in metaphysical devices that appear to be too far adrift from any determination in the science with which we began. Indeed, such drift carries obvious epistemic risks. Consider the example mentioned: is there anything in electrodynamics, whether classical or quantum, that indicates whether ‘charge’ should be regarded as an instantiated universal or a trope? Arguably not. Here we come up against a form of ‘metaphysical underdetermination’, where the physics underdetermines the

metaphysics that we are appealing to in order to yield a 'clear picture', and there exist other such forms, even more acute (French 2011). The risk might also seem heightened if we turn to the history of science, which is typically portrayed by the critic of the realist as a history of discarded ontologies which then feeds into the infamous 'Pessimistic Meta-Induction'. Thus, caloric is thrown away in favour of molecular motion, even though the attendant theory was at least partly successful, and likewise the ether is abandoned as the 'carrier' of electromagnetic waves in favour of a field-theoretic conception (see P. Vickers, "Historical challenges to realism", ch. 4 of this volume for further discussion). If these unobservables are seen as entwined with certain forms of metaphysics, then one might be tempted to run an argument analogous to the Pessimistic Meta-Induction, but this time with the focus on that metaphysics, with the conclusion that given the historical record, whatever metaphysics is taken to be entwined with our current theories should also be understood as unworthy of our epistemic commitment. It goes without saying that care must be taken when it comes to these concerns. Just as one can argue that although certain unobservable elements are discarded during theory change, others – particularly those that play a relevant explanatory role – are retained, so one can argue that although certain 'manifestations' of forms of metaphysics are abandoned, along with that which they 'flesh out', other manifestations or carriers of those forms remain. Thus, we might lose caloric as a substance, but we retain material atoms and molecules, of course, and also energy, which has been argued to be substantival; or we lose the ether, again as an underlying substance for wave-like properties, but we retain a substantival metaphysics in the form of fields, understood either as substances themselves or as properties of a substantival space-time. Nevertheless, one can appreciate the sense of epistemic risk here. An obvious way to mitigate that risk would be to once again return to the shallow end of the realist pool. But that is perhaps too hasty. Alternatively, we might note that in such situations we have little choice but to adopt an attitude of 'epistemic humility' (Langton 2009): that is, given the above lack of determination, and since no observation, however broadly construed, can tell us whether charge, say, is an instantiated universal or a trope, we have to adopt a humble attitude towards these options, in the sense that we can never know which holds. This raises another question: just how much humble pie should the realist eat? Too little and the epistemic risks loom large. On the other hand, too much humility will of course attract the scorn of those who remain neutral or are undecided in the realist-antirealist debate, as well as that of the anti-realist, who would be tempted to remark that if the realist is going to be that epistemically humble, she might as well slide over into the anti-realist camp! Still, it is worth noting that even the anti-realist might be interested in the question of what it is to provide these 'clear pictures' via which we obtain understanding.
Thus the constructive empiricist’s (in)famous farewell to metaphysics (van Fraassen 1991) is mitigated somewhat by the admission that even though, on that view, interpretations of our theories can only ever tell us how the world could be, reflecting on such interpretations does give us more than simply reiterating the relevant physics. Of course, reflecting on alternative possibilities may indeed illuminate aspects of some actual or current situation – as in the case of our ‘lonely’ object from earlier, or in that of alternative histories, for example, which may help us understand how we arrived where we are today (see Radick 2008), or works of science fiction which by exploring future possibilities shed light on the impact of technology and the associated human response. However, there are limits to how much can be gained via such modal reflections. Consider the case of quantum physics, where even the constructive empiricist is happy to entertain a particular interpretation, but only, again, as a way the world could be. In this case, it might be argued that there is no actual or current situation to illuminate, at least not when it comes to the theoretical side of things. So one might

feel that it is not clear what it is that these alternative interpretations are illuminating for the anti-realist, beyond the observable situation, which of course is what the constructive empiricist is primarily concerned with. Exploring these issues will take us too far from the central concern of this chapter. Let me just note two things: first, when it comes to deciding how much metaphysics one should swallow as a realist or how deep one should go, a balance must be struck. Too little or too shallow, and you may be judged to be doing little more than regurgitating the relevant physics; too much, and you run the kind of epistemic risk associated with metaphysical underdetermination or end up eating too much humble pie! Fortunately, I believe that such a balance can be struck by pulling away from certain metaphysical devices in order to avoid underdetermination whilst deploying other such devices to take one towards a deeper form of realism (see French 2014). Secondly, in her consideration of the different interpretations of scientific theories (such as quantum mechanics) the anti-realist effectively deploys the same metaphysical manoeuvre as in the cases of individuality and intrinsicality, touched on earlier, namely the 'move to the modal'. (One can also find this device deployed in current discussions of the nature of scientific laws, for example; see Slater and Haufe 2009.) Here one acquires the appropriate 'clear picture', either directly in the case of different interpretations or indirectly as in the cases of individuality and intrinsicality, by shifting to a possible situation or world, the features of which are then supposed to shed light on the relevant features of this, the actual world. Unfortunately, however, when we update our discussion to include modern physics, manoeuvres such as this may prove to have only limited usefulness.

3 The dismissal of metaphysical devices
Thus, consider again the move to a world containing a lonely object and the claim that this helps underpin the distinction between individuality and distinguishability. Even in this venerable and apparently straightforward case one might immediately raise the following concern: surely there has to be something else in such a world, namely some form of spatio-temporal background? But as soon as we suppose that, the conceptual distinction between individuality and distinguishability collapses and the move loses its value. The point here is the same as that made by Hacking (1975) in his defence of another metaphysical device, Leibniz's Principle of the Identity of Indiscernibles. As he argues, certain standard counter-examples to the Principle (such as the infamous pair of iron globes) should be dismissed for failing to include such a background, the inclusion of which may offer an escape route for the Leibnizian. We'll return to Leibniz's Principle later, but the crucial point is, to paraphrase Hacking, that in the positing of possibilities, mere stipulation is not enough – one should incorporate appropriate physical features if such a move is to be relevant to the actual world. Of course, some might decry this as an unfortunate surrender to naturalism, insisting that the value of metaphysics lies in its purview of possibilities beyond the physical. To which one might respond, first, that if the great eye of modern metaphysics can scan over such vast swathes of possibilities, how is it that the schemas, views, and devices it comes up with are all so classical in the face of modern physics (French and McKenzie 2012), and second, that of course we should surrender to naturalism, since the name of the game here is to 'go deep' by deploying metaphysics in order to provide the clear picture that realism requires! One might also worry that the defence of Leibniz's metaphysical device implicitly assumes a substantival understanding of space-time itself, according to which the latter is indeed a kind of 'thing', by comparison to which the lonely object, for example, might be regarded as distinguishable. This is of course an obvious understanding to adopt, since taking a relationist account

in this context is clearly problematic. One might try to overcome this problem by adapting the standard relationist move when faced with a lack of material objects between which the relevant spatio-temporal relations can hold, namely bridging the ontological gap by positing possible objects. However, now we have possible objects within a possible world that was supposed to contain only one, and with the iteration of possibility one begins to lose one's grip on what is being supposed! Nevertheless, the overall point is a useful one: a possible world is not physically relevant unless it includes some spatio-temporal background, which in turn may require underpinning with its own 'clear picture'. Likewise, efforts to articulate what might be meant by an 'intrinsic' property by shifting one's gaze to such lonely worlds may also founder on the shoals of modern physics. Is charge such a property? Well, let's consider a world in which there is one lonely but positively charged proton. It would appear that we can certainly conceive of such a world, but is such a world of just one proton physically possible in the context of the so-called Standard Model of elementary particle physics? At the very least, it is not clear whether such a solution to the model even makes any sense (French and McKenzie, ibid.). Furthermore, given quantum field theory (our most fundamental framework for physics) and assuming a continuous space-time, the possession of a supposedly intrinsic property such as charge may depend on what else exists, thus undermining the loneliness on which the standard understanding of intrinsicality depends (French and McKenzie 2015). And one can extend this sort of analysis further. In current discussions of the 'metaphysics of science', capacities, dispositions, and powers feature prominently. In particular, properties such as charge are regarded, from this perspective, as essentially dispositional in the sense that the property can be characterised entirely in terms of the relevant stimulus – the bringing up of a 'test' charge, say – and the associated manifestation – the force experienced by that test charge or, if one wants something more observable, the acceleration it experiences. The relevant laws are then said to 'flow' from, or supervene upon, this metaphysical base, and the necessity of such laws is then nicely underpinned: if we move to a different possible world but with the same entities possessing the same properties, we will have the same dispositions and hence the same laws (see Bird 2007). Unfortunately, however, it is not clear whether this characterisation in terms of stimulus and manifestation is also appropriate in the context of modern physics. Within the classical framework, of course, we can easily envisage the appropriate (clear) picture underpinning this characterisation: anyone who has studied classical electromagnetic theory will be familiar with the idea of imagining a fixed charge and then bringing in a test charge from infinity (the stimulus), resulting in the expression of a force and associated acceleration. But in the modern context, the notion of isolated objects and hence the properties they possess is problematic, as we've already seen. Furthermore, the role of symmetries – so crucial in this context – appears to be difficult to accommodate within a dispositional account.
Consider: from the perspective of modern physics, properties like charge and spin 'drop out' of the relevant symmetry group, such as the Poincaré group, which captures certain (relativistic) space-time symmetries. On one view, such symmetries constrain the laws, and it is not clear how either this kind of constraint or the way the properties 'drop out' of the symmetry can be accommodated within the dispositional account (see Psillos 2006; Lange 2012). More generally, the idea of a distinct stimulus becomes problematic in this context, to the point that one might be tempted to conclude that this particular metaphysics should be abandoned altogether (French forthcoming). The limitations of contemporary metaphysics indicated here have already been noted in the literature (e.g. Ladyman and Ross 2007; Callender 2011). Thus, consider another metaphysical concept, that of the composition of objects: can it be that the composition of a table, say, into parts such as legs and a top, is really akin to that of the table into its 'constituent' particles or of

certain of those particles, such as protons and neutrons, say, into their constituents, namely quarks and gluons? Lange, for example, has indicated how composition is problematic in the relativistic context (2009), and Healey has argued that physics provides cases of multiple decompositions (2013).1 At best, standard accounts of parthood and of the 'arrangement' of such parts to yield a table are little more than thinly sketched placeholders for the detailed accounts based on the relevant physics; at worst, they too founder on said details, leading to claims that classical mereology is simply inapplicable to quantum objects. Indeed, Ladyman and Ross write,

It [the composition relation] is supposed to be the relation that holds between the parts of any whole but the wholes [typically considered] are hugely disparate and the composition relations studied by the special sciences are sui generis. We have no reason to believe that an abstract composition relation is anything other than an entrenched philosophical fetish. (Ladyman and Ross 2007: 21)

As a result of this and other critical appraisals, they conclude that,

contemporary analytic metaphysics, a professional activity engaged in by some extremely intelligent and morally serious people, fails to qualify as part of the enlightened pursuit of objective truth, and should be discontinued. (ibid.: vii)

This would seem to put the dampers, at the very least, on the proposal that realists should engage with contemporary metaphysics in order to articulate a clearer picture of what they are realists about. Indeed, it seems that such dismissive claims would encourage us all to stay paddling about in the shallow end of the realism pool. But this is too quick, as we’ll now see.

4 Rummaging in the toolbox
Let us look again at the issue of whether the metaphysics of composition is applicable to modern science. Does the disparity of the kinds of objects we typically find in science present an absolute block on deploying composition relations in this context? Hawley (2006) thinks not, drawing an analogy with different criteria of identity appropriate for different kinds of entities and arguing for an account of compositionality that is relative to the given context. Of course, this is precisely to concede Ladyman and Ross's point about abstract composition relations, but nevertheless we can see Hawley's relativisation of compositionality as re-tooling this particular metaphysical device in such a way as to make it deployable by the realist. To illustrate this 're-tooling' in more detail, consider again the Principle of Identity of Indiscernibles, often deployed in considerations of the individuality of objects because it allows us to ground that individuality in the relevant properties of the object concerned and avoid having to introduce haecceity or primitive thisness, as mentioned previously. Although, as already noted, Hacking tried to save it from the usual possible-world counter-examples by drawing on modern space-time physics, it has long been held to be violated by elementary particles in the quantum context (see French and Krause 2006). However, Saunders has effectively re-tooled the Principle by re-introducing Quine's notion of 'weak discernibility', according to which objects may be discerned via certain relations (Saunders 2003). This has allowed him to claim that certain kinds of quantum particles, namely fermions, at least, are 'weakly' discernible in this sense, and hence that a modified form of Leibniz's Principle still holds for such particles (there is some debate as to

This is then used to underpin the claim that such particles can still be regarded as objects with well-defined identity conditions, and thus in this respect, at least, the apparent challenge that quantum physics poses to the realist in this regard can be mitigated. In the current context, then, we can see Saunders’s move as helping to articulate the clear picture that Chakravartty has challenged the realist to provide. Again, here we have a case in which a previously discarded metaphysical device is adapted to become once more ‘fit for purpose’, and indeed one might be tempted to draw an analogy with a Lakatosian stance towards scientific research programmes: even the kinds of metaphysics that the likes of naturalistically inclined philosophers of science would be strongly tempted to dismiss may in fact be restored to functional usefulness. Of course, the re-tooled device may be quite different from the original, just as the Quinean form of the Identity of Indiscernibles differs from Leibniz’s, but the same can be said for theories within a research programme. The similarities are such that we can reasonably talk of different forms of the principle in this case rather than different principles. Likewise, advocates of dispositionalism have also reconfigured their views in attempts to accommodate the implications of modern physics. Thus Chakravartty has developed a relational form of dispositionalism that underpins his ‘semi-realism’, which in turn bears useful comparison with certain forms of ‘structural realism’ (2007, 2013). Anjum and Mumford, on the other hand, have focused on the notion of a stimulus and argued that this should be understood as the pragmatic designation of a contributory power, where such powers should be regarded in terms of sui generis tendencies (Anjum and Mumford 2011). Vetter goes even further by dropping the stimulus and manifestation characterisation entirely (2014, 2015) and argues that dispositional ascriptions should be understood as expressions of possibility, the localised counterpart of which is potentiality (2015: 23). This yields a conception of modality as ‘in the world’ which may actually be better suited to modern physics than the standard dispositionalist picture (French forthcoming). However, the bigger point, of which all of these cases represent just one aspect, is that contemporary metaphysics can provide an array of views, moves, approaches, and devices that the realist can appropriate and deploy in articulating this clear picture of what she believes (see French and McKenzie 2012). Thus, if she wants to maintain that quantum particles are objects with well-defined identity conditions, just like tables and chairs and so on, she can deploy Quine’s notion of weak discernibility in order to do so. Likewise, if her realism extends to modal features, she can appropriate some of this recent work on dispositionalism; or if she wants to retain a standard account of compositionality in this context, she might turn to Hawley’s work. Here we can see metaphysics as proffering a toolbox in which the realist can rummage around and select appropriate tools that she can then utilise in further articulating how the world is. And, of course, this ‘toolbox’ may be particularly useful for those realists who wish to move away from or beyond the standard forms of the realist stance.
Consider the ‘ontic’ structural realist for example, who argues that the standard metaphysics of objects possessing intrinsic properties and so forth, as sketched earlier, is simply incompatible with modern physics. (See I. Votsis, “Structural realism and its variants”, ch. 9 of this volume for a more general discussion of structural realism.) According to this view, such objects may be reduced to ‘thin’ formal placeholders within the relevant structure which does all the heavy realist lifting, or they may be eliminated entirely in favour of such structure. Here we have another case of the downplaying or even outright rejection of certain metaphysical entities and devices (objects and individuality, respectively), albeit some quite fundamental ones, as unsuitable, but then the structuralist has to face the demand for presenting an appropriate articulation of what she means by ‘structure’. Interestingly, the unwillingness to venture out of the shallows that many ‘object-oriented’ realists evince is typically regarded as unacceptable when it comes to this articulation, and so the structural realist may encounter a dismissive attitude when she simply adverts to those structures that are presented by modern physics (such as one finds associated with the Standard Model of elementary particle physics, for example, involving a rich array of symmetries and laws).


Fortunately, she can deepen her realism by going to the toolbox and drawing on such metaphysical tools as the relationship between determinables and determinates to help articulate her understanding of the connection between physical symmetries, such as those represented by the Poincaré group, and properties, such as spin or mass; or if she is an eliminativist about objects, she can pull out recent forms of truthmaker theory that can accommodate talk about such objects while removing them from one’s ontology (see French 2014). Furthermore, the specificity of such tools, or their relativisation to context as Hawley insists, is all the more necessary if we adopt a localised approach to realism as suggested by Saatsi (2017) and others. Here one eschews global defences of the realist stance and instead argues that the grounds for adopting realism may depend on the specific theory considered and the specific explanations invoked, with, as a concomitant move, similar specificity regarding the kinds of entities one is being a realist about. Thus one can still respect the tenor of the No Miracles Argument and the realist insistence that our best theories ‘latch onto reality’, while acknowledging that both the sense of that latching onto may differ from theory to theory and also which aspect of reality is latched onto. With regard to the Standard Model of modern physics, the realist might plausibly argue that what we are latching onto here is the structure of the world, but when it comes to current biological theories, it might be certain biological processes or lineages or, at the very least, object-like features that may be quite different from those invoked in the case of classical physics (see e.g. Clarke 2013). Not only might we want to invoke different epistemic moves in order to capture the different senses of ‘latching onto’, but we obviously need to draw on different and quite specific metaphysical devices to articulate what we mean by ‘structure’ or ‘process’ or ‘biological individual’ respectively. The point is that if we eschew what Saatsi calls ‘recipe realism’ and adopt an exemplar-driven realist stance, then going deeper will require particular pieces of metaphysics suitably tailored to the exemplars concerned. Indeed, rather than one toolbox, perhaps we will need many! Now, someone going deep in this way would hope that these toolboxes are big enough to contain the right tools for the right job, but what if they’re not? After all, one of the complaints about contemporary metaphysics, as we have noted, is that for all its vaunted claims about exploring the space of possibilities, it really ends up confined to quite a limited volume. Can we really expect metaphysicians, some of whom explicitly turn their backs on science and are busy non-naturalistically exploring their different possible worlds of gunk and star-shaped electrons (see McKenzie 2013), to come up with the goods and set out for us a further range of devices beyond what is already available? There are surely grounds for optimism.
Even the most armchair of metaphysicians has to consider how her ‘gunk’ relates to non-gunk or how her monism recovers diversity, and the issues involved can be mapped, at least to some extent, onto those faced by the realist in trying to explicate the nature of quantum fields, say, and the manner in which they relate to observable phenomena. Given the broad similarities, the kinds of devices and manoeuvres developed by metaphysicians in response to these issues may well be exportable beyond the limited domain in which they were constructed. So, as another realist-oriented example along these lines, consider so-called wave function realism. This is the view that follows Schrödinger in taking the wave function as the appropriate target of our ontological commitments when it comes to quantum mechanics. After all, it sits at the heart of the theory, with its evolution described by Schrödinger’s equation, and whatever interpretation one chooses – Bohmian, Everettian, whatever – the wave function is central.

However, as Schrödinger realised, to his chagrin, the wave function cannot be conceived of as describing a wave evolving in three-dimensional space (or situated in four-dimensional space-time). A wave function describing a two-particle system, for example, will require a six-dimensional space, and so it goes . . . (in general, the wave function of an N-particle system is defined over a 3N-dimensional configuration space), so that for any reasonably sized system we need a multi-dimensional space for the wave function to ‘sit’ in. Undaunted by this, some, such as Albert (1996; see also Albert and Ney 2013), have taken inspiration from Schrödinger’s intuition and argued that we should nevertheless adopt a realist stance towards the wave function, understood as a field in this multi-dimensional ‘configuration’ space. This ‘wave-function’ realism has attracted numerous objections, principal among which are those that point out that it leaves a massive metaphysical gap between what it is that we are supposed to place our ontological commitments in, namely the wave function in multi-dimensional space, and the nature of appearances according to which things appear to be situated in three-dimensional space or, taking account of relativity theory, four-dimensional space-time. Adopting a form of error-theory and insisting that the appearances are flat wrong seems a hard row to hoe. Alternatively, we can appropriate a version of metaphysical nihilism combined with a revised form of truthmaker theory. According to this, certain statements such as ‘a exists’ might be true according to some theory without a being an ontological commitment of that theory, because they are made true by something other than a (Cameron 2008). Thus we can accept statements about the everyday world of appearances in three/four dimensions as true without being committed to the existence of such appearances. What makes such statements true, according to wave function realism, are certain features of the distribution of the wave function, as a field, in multi-dimensional configuration space (French 2012). Of course, there is further work to be done in fleshing out the details within the framework of this device, but it does at least indicate again how a metaphysical manoeuvre may be put to service by the realist. And of course, one can always hope for mutually informed collaboration between metaphysicians and philosophers of science in order to design and construct new kinds of such tools as science advances. There would be a certain economy associated with basing such devices on already extant forms, as in the case of the Principle of Identity of Indiscernibles discussed earlier. Ideally (or idealistically), rather than philosophers of science having to appropriate metaphysical tools ‘off the peg’ as it were – and I’ll come back to this – they can work closely with metaphysicians to construct new ones. Here again the issue of whether quantum objects should be regarded as individuals might serve as a useful analogy: it was suggested early on in the ‘quantum revolution’ that this new physics implied that quantum particles should be thought of as ‘non-individuals’ in some sense, and various metaphors were put forward to articulate this sense, such as that of £ in a bank account (the idea being that you can say how many £ you have in your account without being able to identify each one; see Hesse 1963).
Subsequent work led to the development of non-standard forms of logic and set theory capable – or so it is claimed – of accommodating such ‘non-individual’ objects within a formal framework which can then in turn be expanded and developed in various directions (see Dalla Chiara, Giuntini, and Krause 1998; French and Krause 2006). Likewise, we might hope to see philosophers of science and metaphysicians working together to create new metaphysical frameworks in response to similar developments across the sciences.
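To give a flavour of one such framework, here is a compressed (and inevitably simplifying) gloss on the quasi-set theory presented in French and Krause (2006):

```latex
% For the 'micro' atoms (m-atoms) of a quasi-set, identity statements of
% the form x = y are not well formed; in their place stands a primitive
% indistinguishability relation:
x \equiv y
% A quasi-cardinal assigns a 'how many' to a quasi-set A without any
% associated ordering or labelling of its elements:
qc(A) = n
```

This is, in effect, the bank-account metaphor made formal: the theory supports claims about how many objects a collection contains while withholding any fact of the matter as to which object is which.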

As another example, and returning to the example of composition, consider Paul’s development of her ‘mereological bundle theory’, which draws on certain aspects of modern physics in rejecting a traditional ‘corpuscularist’ metaphysics, according to which the world should be seen as a mosaic of localised properties instantiated at spatiotemporal points (Paul 2012). Instead she proposes a ‘more flexible’ composition relation based on fusing properties together and which can accommodate the range of kinds of properties and relations one finds in physics, over and above the spatio-temporal ones. Thus she indicates (ibid.: 245) how the ‘wave function realism’ sketched earlier can be accommodated by her metaphysics in terms of fusions of field values in configuration space. And even structural realism, touched on previously, can be made to mesh with this account, if we take the former as implying that the fundamental categorical structure of the world is the category of relations and its fundamental nature involves a network of such relations (ibid.: 248). The realist can then draw, in turn, on this metaphysics to help articulate the relationship(s) between different levels of ontology or, if one is averse to levels talk, the constitution of entities more generally, from the most fundamental on up. Thus, we have here an example of a possible fruitful ‘to and fro’ between metaphysics and the philosophy of science, as the metaphysician draws on developments in the foundations of physics to help construct a certain kind of device or tool that the philosopher of science can then deploy to help further articulate her realism. The alternative is for philosophers of science to put together their own, ‘bespoke’ forms of metaphysics, but one might wonder how such a project is even to get off the ground. Of course, one can argue, there is no need to go quite this far – one can start with readily graspable notions such as ‘object’, ‘individual’, ‘structure’, and ‘property’ and the like and tweak them into shape, depending on the theory considered. But this is just a ‘go-it-alone’ version of the first option; why reinvent the wheel? Furthermore, and again thinking of metaphysics in terms of research programmes, one could argue that the tools and devices that metaphysicians have come up with have the virtue, at least, of having been ‘tested’, in a sense, against our everyday intuitions and ontological conceptions. This may have revealed the flaws and fault lines that can then be exploited or patched up when exported into the relevant physical domain. But of course, it may be objected that such exportation is impossible given the differences between everyday objects and those of physics. When it comes to some metaphysical devices, at least, it does appear that science rules them out – take the example of intrinsic properties, as indicated earlier – although even here, some form of Lakatosian ‘regeneration’ may be possible, as the case of the Principle of Identity of Indiscernibles reveals. And other manoeuvres, such as those involved in nihilism or monism or ‘mereological bundle theory’, by virtue of their general, ‘umbrella-like’ nature may still be adapted, as also suggested. The alternative, again, is to come up with something sui generis, with the risk of producing devices and moves of limited applicability or, worse, that are, bluntly, incomprehensible. Instead we should use the resources already made available and work with metaphysicians to construct a clear picture of how we, as realists, take the world to be.

5 Conclusion

What is the alternative to the appropriation of some form of metaphysics by the realist? She can step out of the pool altogether and thereby abandon her realism, perhaps joining the spare ranks of the anti-realists. Or she can remain in the ‘shallow’ end, insisting that she is indeed a realist about the unobservable features of the world that science presents but resolutely refusing to articulate that realism in any but the most ‘vanilla flavoured’ terms. And even the latter may fail her when it comes to fundamental physics, say. Consider, again, the so-called Standard Model, with its ‘zoo’ of elementary particles, its mediation of forces via bosons, including the Higgs, through which mass is acquired, all underpinned by quantum field theory. How is the realist to articulate her attitude towards this theory, which surely must be counted as among the best that physics has produced, given its record of predictions? As I said at the beginning, she could just point to the model or theory itself, presented via the relevant mathematics (group theory when it comes to the symmetries, for example), and state bluntly that that, as presented by the physics itself, is how she believes the world is.

However, as I have argued, that seems unsatisfactory, at least as far as realism is concerned. Instead, in order to provide the kinds of ‘clear picture’ that Chakravartty demands, the realist should wade deeper. Perhaps she might do so without metaphysical aids and try to articulate how the world is according to the Standard Model, say, in her own terms, deploying appropriate tools of her own devising. In that case she runs the risk of producing an account that lies outwith current philosophical frameworks and indeed may be barely comprehensible, if at all, to the rest of us. It would be better, I urge, to engage with extant metaphysics, draw on the tools it has already developed, and work with metaphysicians themselves to hone and sharpen them in various ways, so that they can be deployed more precisely to help us understand what it is that science is telling us. In this way, scientific realism can more fruitfully engage both with science itself and with the rest of philosophy, to everyone’s benefit.

Note

1 I am grateful to Fabio Ceravolo for drawing my attention to these papers through his own work, which critically analyses them.

References

Albert, D. Z. (1996) “Elementary Quantum Metaphysics,” in J. T. Cushing, A. Fine and S. Goldstein (eds.), Bohmian Mechanics and Quantum Theory: An Appraisal, Dordrecht: Kluwer, pp. 277–284.
Albert, D. Z. and Ney, A. (2013) The Wave Function: Essays on the Metaphysics of Quantum Mechanics, Oxford: Oxford University Press.
Anjum, R. L. and Mumford, S. (2011) Getting Causes from Powers, Oxford: Oxford University Press.
Bird, A. (2007) Nature’s Metaphysics: Laws and Properties, Oxford: Oxford University Press.
Boltzmann, L. [1896] (2011) Lectures on Gas Theory, New York: Dover (originally pub. J. A. Barth 1896).
Callender, C. (2011) “Philosophy of Science and Metaphysics,” in S. French and J. Saatsi (eds.), The Continuum Companion to the Philosophy of Science, London: Continuum, pp. 33–54.
Cameron, R. (2008) “Truthmakers and Ontological Commitment: Or, How to Deal with Complex Objects and Mathematical Ontology without Getting into Trouble,” Philosophical Studies 140, 1–18.
Chakravartty, A. (2007) A Metaphysics for Scientific Realism, Cambridge: Cambridge University Press.
——— (2013) “Realism in the Desert and in the Jungle: Reply to French, Ghins, and Psillos,” Erkenntnis 78, 39–58.
——— (2015) “Scientific Realism,” in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2015 ed.). URL: http://plato.stanford.edu/archives/fall2015/entries/scientific-realism/
Clarke, E. (2013) “The Multiple Realizability of Biological Individuals,” Journal of Philosophy 110, 413–435.
Dalla Chiara, M. L., Giuntini, R. and Krause, D. (1998) “Quasiset Theories for Microobjects: A Comparison,” in E. Castellani (ed.), Interpreting Bodies: Classical and Quantum Objects in Modern Physics, Princeton: Princeton University Press, pp. 142–152.
van Fraassen, B. (1991) Quantum Mechanics: An Empiricist View, Oxford: Oxford University Press.
French, S. (2011) “Metaphysical Underdetermination: Why Worry?” Synthese 180, 205–221.
——— (2012) “Whither Wave-Function Realism,” in D. Albert and A. Ney (eds.), The Wave Function, Oxford: Oxford University Press, pp. 76–90.
——— (2014) The Structure of the World: Its Metaphysics and Representation, Oxford: Oxford University Press.
——— (forthcoming) “Doing Away with Dispositions: Powers in the Context of Modern Physics,” in A. S. Meincke (ed.), Dispositionalism: Perspectives from Metaphysics and the Philosophy of Science, Springer.
French, S. and Krause, D. (2006) Identity in Physics, Oxford: Oxford University Press.
French, S. and McKenzie, K. (2012) “Thinking Outside the (Tool)Box: Towards a More Productive Engagement between Metaphysics and Philosophy of Physics,” The European Journal of Analytic Philosophy 8, 42–59.
——— (2015) “Rethinking Outside the Toolbox: Reflecting Again on the Relationship between Philosophy of Science and Metaphysics,” in T. Bigaj and C. Wuthrich (eds.), Metaphysics in Contemporary Physics, Poznan Studies in the Philosophy of the Sciences and the Humanities, Amsterdam: Rodopi, pp. 145–174.


Hacking, I. (1975) “The Identity of Indiscernibles,” Journal of Philosophy 72, 249–256.
Hawley, K. (2006) “Principles of Composition and Criteria of Identity,” Australasian Journal of Philosophy 84, 481–493.
Healey, R. (2013) “Physical Composition,” Studies in History and Philosophy of Science 44(1), 48–62.
Hesse, M. (1963) Models and Analogies in Science, London: Sheed and Ward. Reprint Notre Dame: University of Notre Dame Press, 1966.
Huggett, N. and Norton, J. (2014) “Weak Discernibility for Quanta, the Right Way,” British Journal for the Philosophy of Science 65, 39–58.
Ladyman, J. and Ross, D. (2007) Every Thing Must Go: Metaphysics Naturalized, Oxford: Oxford University Press.
Lange, M. (2009) “A Tale of Two Vectors,” Dialectica 63(4), 397–431.
——— (2012) “There Sweep Great General Principles Which All the Laws Seem to Follow,” in K. Bennett and D. W. Zimmerman (eds.), Oxford Studies in Metaphysics (vol. 7), Oxford: Oxford University Press, pp. 154–186.
Langton, R. (2009) “Ignorance and Intrinsicality,” talk given at Pacific APA, Vancouver.
Magnus, P. D. (2012) Scientific Enquiry and Natural Kinds: From Planets to Mallards, London: Palgrave Macmillan.
McKenzie, K. (2013) “Review of Basic Structures of Reality: Essays in Meta-Physics by Colin McGinn,” Mind 122, 813–816.
Paul, L. A. (2012) “Building the World from Its Fundamental Constituents,” Philosophical Studies 158, 221–256.
Psillos, S. (2006) “What Do Powers Do When They Are Not Manifested?” Philosophy and Phenomenological Research 72, 135–156.
Radick, G. (2008) “Focus Section: Counterfactuals and the Historian of Science,” Isis 99, 547–584.
Saatsi, J. (2017) “Replacing Recipe Realism,” Synthese 194, 3233–3244.
Saunders, S. (2003) “Physics and Leibniz’s Principles,” in K. Brading and E. Castellani (eds.), Symmetries in Physics, Oxford: Oxford University Press, pp. 289–307.
Slater, M. J. and Haufe, C. (2009) “Where No Mind Has Gone Before: Exploring Laws in Distant and Lonely Worlds,” International Studies in the Philosophy of Science 23(3), 265–276.
Vetter, B. (2014) “Dispositions without Conditionals,” Mind 123, 129–156.
——— (2015) Potentiality: From Dispositions to Modality, Oxford: Oxford University Press.

32 MATHEMATICAL REALISM AND NATURALISM

Mary Leng

1 Introduction: naturalism, methodological and ontological

‘Naturalism’ has been used to label many different philosophical doctrines, and as we will see, different strands of naturalism pull in different directions regarding the topic of mathematical realism. At the core of most naturalist philosophies is a version of methodological naturalism, which involves respect for the methods and results of the natural sciences. Methodological naturalists view the natural sciences as the result of our best efforts to understand the world as we experience it, holding that if, by the lights of our best scientific standards, we have reason to believe that P, then we do indeed have reason to believe that P. There are, naturalists hold, no separate philosophical reasons to believe outside of scientific reasons to believe. So while we can question, from within science, whether the scientific evidence really speaks in favour of a given hypothesis, we cannot coherently stand entirely outside of science and ask of something that we acknowledge to be confirmed by our best scientific standards whether it is really true. While science may of course be proved wrong, the only corrective to science comes from more science, not from some separate domain of more stringent philosophical reasoning. This is not to say that there is no role for philosophical thinking about science but rather that such thinking should be pursued as part of science, broadly construed, not as a separate discipline with its own supposedly higher standards. So understood, methodological naturalism provides a route to a basic form of scientific realism – belief in whatever claims are justified according to our best scientific standards. If we assume that what is justified by those standards is our best scientific theories, taken as a whole, then this entails a blanket realism about the claims of our scientific theories. But if we look at those theories more closely, we will see that they include many claims, not just about physical objects but also about mathematical objects. Take the Einstein field equations for general relativity, which state that Gµν = 8πTµν. Here, Gµν refers to the Einstein tensor, a mathematical object representing the curvature of spacetime, and Tµν is the stress-energy tensor, again a mathematical object, this time representing the distribution of mass/energy (8 and π are of course mathematical objects too – the familiar real numbers).
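Written out in display form, and glossing the Einstein tensor in the standard way (a presentational expansion of the inline statement above, in units where Newton’s gravitational constant and the speed of light are set to 1):

```latex
G_{\mu\nu} = 8\pi T_{\mu\nu},
\qquad
G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu}
% R_{\mu\nu}: Ricci curvature tensor; R: Ricci scalar;
% g_{\mu\nu}: metric tensor; T_{\mu\nu}: stress-energy tensor.
```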


What the equation tells us is how the curvature of spacetime relates to the distribution of mass/energy; it is in this sense about the physical world. But it is also about the mathematical objects Gµν, 8, π, and Tµν, at least in the sense that what it says about the physical world is just that the mathematical objects associated with the relevant features of that world are related by the equation ‘Gµν = 8πTµν’. If we take general relativity to be true – or even just approximately true – and take it that this amounts to believing the (approximate) truth of its equations, then it seems we have to believe not just in the curvature of spacetime by matter but that the spacetime is curved in such a way that the Einstein tensor and the stress-energy tensor that the theory associates with spacetime and mass/energy respectively are related as the field equations tell us. And it seems we cannot believe all this without believing in the relevant mathematical objects. To do otherwise would be, as Putnam has it,

like trying to maintain that God does not exist and angels do not exist while maintaining at the very same time that it is an objective fact that God has put an angel in charge of each star and the angels in charge of each of a pair of binary stars were always created at the same time! (Putnam 1975: 74)

This route from naturalism to realism about mathematical objects is most commonly associated with the work of W. V. Quine and Hilary Putnam. Quine characterises naturalism as “the recognition that it is within science itself, and not in some prior philosophy, that reality is to be identified and described” (Quine 1981a: 21) and asserts his confirmational holism in the claim that “our statements about the external world face the tribunal of sense experience not individually but only as a corporate body” (Quine 1951). Putnam (1971, 1975) stresses the indispensability of mathematics in stating the laws of our best scientific theories. The argument, with its three premises of naturalism, confirmational holism, and indispensability, has become known as the Quine-Putnam indispensability argument for mathematical realism (QPIA). Not all versions of mathematical realism involve realism about mathematical objects. Michael Dummett famously distinguishes between two kinds of realism – realism in truth-value and realism in ontology – approvingly citing a remark he attributes to Georg Kreisel to the effect that realism-in-truth-value is all that matters in debates over realism in the philosophy of mathematics: “The point is not the existence of mathematical objects, but the objectivity of mathematical truth” (Dummett 1978: 228). On a ‘standard’ construal of mathematical truth, it may be hard to see what this contrast amounts to: if one thinks that it is objectively true that, for example, there are infinitely many prime numbers, then surely the existence of mathematical objects follows from this claim? But there are ways of preserving the truth of mathematical claims without accepting the apparent ontological consequences that follow, for example by offering a non-standard semantics for mathematical claims, such as a position that holds that statements are true in mathematics if and only if they follow logically from standard mathematical axioms. Dummett’s own account of mathematics identifies truth in mathematics with provability and, as such, involves a rejection of an ontology of independently existing mathematical objects (and, indeed, the rejection of the use of classical logic for mathematical claims). In supporting mathematical realism via consideration of the role of mathematics in science, the QPIA speaks against this separation of ‘truth-in-mathematics’ from what we might call ‘truth-simpliciter’. Mathematical truths make their way into our theories alongside empirical truths and are, it is claimed, confirmed along with them. A semantic theory that identified truth in mathematics with mathematical provability but empirical truths with a corresponding ontology would have difficulty with making sense of the mixed mathematical-empirical claims that make up our scientific theories (though see Weir [2010] for a recent attempt to do just this). The QPIA holds that any attempt to draw such a distinction (between mathematical truths and others) is at any rate unmotivated. Furthermore, arguably, mathematical objects enter into our theories in the same way that physical objects do, as part of our overall best scientific attempts to describe, organize, and predict our experiences.


As such, we have as much reason to believe in those objects as in any of the objects posited by our scientific theories. There is, however, another strand of naturalism that is deeply concerned about these apparent ontological implications of methodological naturalism. This strand – known as ontological naturalism – takes it that science tells us ultimately about a world of physical, spatiotemporally located, causally efficacious entities and that any respectable ontology should therefore be reducible to an ontology of physical objects. But mathematical objects are standardly construed platonistically, as abstract objects (where ‘abstract’ is characterised negatively as nonspatiotemporal, acausal, mind and language independent). Such objects would seem to have no place in a physicalistic ontology. So much the worse for physicalism, we might think. If methodological naturalism tells us (via the QPIA) that ordinary scientific standards give us reason to believe in abstract mathematical objects as well as concrete physical objects, then it looks as though ontological naturalists have simply been mistaken in their presumption about what kinds of objects we can come to know through empirical scientific enquiry. This response on behalf of the indispensability theorist is, however, perhaps a little too sanguine. At the very least, there is a tension in the combination of naturalism with mathematical platonism. Empirical science, it seems, tells us that there are mathematical as well as physical objects. But whereas empirical science has a story to tell about how we, as spatiotemporally located, physical beings, could have developed theories that reliably reflect how things are with physical objects (via their causal imprint on the world of our experience), when it comes to abstract mathematical objects empirical science seems embarrassingly silent. So we have on the one hand a pressure – from the presence of mathematical posits in our theories alongside physical posits – to treat mathematical truth as truth simpliciter and to see mathematical objects as confirmed as existing alongside physical objects. But on the other hand, a puzzle raised for science and its philosophy, of explaining how the methods of enquiry of natural science could ever lead us as human enquirers to come to know how things are with the acausal, mind and language independent objects of our mathematical theories. This tension is most famously expressed in Paul Benacerraf’s seminal (1973) paper, ‘Mathematical Truth’, in which he argues that “accounts of truth that treat mathematical and nonmathematical discourse in relevantly similar ways do so at the cost of leaving it unintelligible how we can have any mathematical knowledge whatsoever” (403). Despite Quine’s own claims that mathematical objects are known in just the same way as we know about any of the objects posited by our scientific theories – via their successful role in the theories that organize our experiences – many philosophers of mathematics, influenced by Benacerraf’s worries, have considered this answer unsatisfactory. Particularly influential in this regard has been Hartry Field (1989), who presents the problem as one of explaining the reliability of our mathematical beliefs (given what we know about our belief-forming mechanisms and given the platonist account of mathematical objects as acausal abstracta). Philosophers who take seriously the Benacerraf-Field challenge are faced with a number of options.
On the realist side, they could attempt to naturalize mathematical ontology, providing a scientifically acceptable explanation of how we could come to know truths about the realm of the abstract or (despite the pressures to treat mathematical truth alongside empirical truth) develop a naturalistically acceptable realism-in-truth-value for mathematics which does not bring with it a platonistic ontology. Alternatively, it is mathematical realism that could give. For philosophers inclined to methodological naturalism, this would involve resisting the confirmational holist claim that what is justified according to our best scientific standards is our theories in their entirety and instead showing why our theoretical successes do not warrant belief in the mathematical objects posited by our theories.
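For reference in what follows, the argument can be set out schematically, using the three premises just identified; this is a reconstruction in the style of the secondary literature rather than a formulation given by Quine or Putnam themselves:

```latex
% The Quine-Putnam indispensability argument, schematically:
\begin{array}{ll}
\text{(P1)} & \text{Naturalism: we ought to believe what our best science tells us.}\\
\text{(P2)} & \text{Confirmational holism: our best theories are confirmed as wholes,}\\
            & \text{mathematical parts included.}\\
\text{(P3)} & \text{Indispensability: mathematics is indispensable to our best}\\
            & \text{scientific theories.}\\
\text{(C)}  & \text{Therefore, we ought to believe in mathematical objects.}
\end{array}
```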


2 Naturalizing mathematical ontology

Let us start with attempts to naturalise the ontology of mathematics. One of the most well-known attempts of this sort, from well within the Quinean indispensability theorist programme, is Penelope Maddy’s early work on set theoretic realism. Maddy (1990) argues that while the indispensability considerations give us reason to believe that we do have some knowledge of mathematical truths, they do not yet explain in naturalistically acceptable terms how that knowledge is possible. Maddy attempts to provide this by building on work in neuropsychology. Maddy starts with an observation of Gödel’s that “our ideas referring to physical objects contain constituents qualitatively different from sensations or mere combinations of sensations, e.g., the idea of object itself” (Gödel 1947: 484). Gödel sees this additional element as abstract and contributed by intuition and argues that if such a contribution from intuition is required for us to have knowledge of physical objects, then it should not be surprising if the same intuitive process can provide us with knowledge of mathematical objects. Balking at the unscientific nature of Gödel’s faculty of intuition, Maddy follows through with the methodological naturalist’s project by looking to what the science of perception has to say about what that extra element is that helps us to reach the idea of a physical object. Drawing on experimental work of Donald O. Hebb and Warren S. McCulloch, which suggests that our brains develop complex neural structures – object detectors – in response to repeated sensory experiences, Maddy suggests that the same could be true in the case of sets of physical objects. According to Hebb and McCulloch, our brains develop so as to allow us to interpret our fleeting sensory stimulations as perceptions of robust and continuing physical objects. If we suppose, as Maddy does, that sets of physical objects are located where their members are, then it is plausible that just as our brains develop to allow us to perceive physical objects, they may also develop to allow us to interpret our sensory stimulations as perceptions of sets of physical objects. In Maddy’s view, then, the missing ingredient in the Quinean naturalist picture is provided by the science of perception, which explains how we can perceive some mathematical objects – sets of physical objects – by taking these to be spatiotemporally located with their members. Of course most mathematical objects are not accessible by perception in this way. Pure sets (sets that have no non-sets in their transitive closures) have no physical location, and while impure sets are, in Maddy’s picture, physically located where the physical objects in their transitive closures are, we only have to go a little way up the hierarchy before it becomes implausible to think that we perceive these sets when we perceive the physical objects in their transitive closure. I may perceive the set of three eggs when I perceive the eggs in the fridge, and I may even perceive the set consisting of one of the eggs and the set of the other two. But do I perceive the eight-membered power set of the set of eggs? The 256-membered power set of that set? Surely not the ℵ0 members that appear at the first transfinite level in the hierarchy? These sets, Maddy argues, are not perceived but are inferred as theoretical objects via their indispensable role in our scientific theories. But here we have a potential difficulty.
In Maddy’s view our ability to know about and refer to mathematical objects comes via our interactions with samples of the kind, ‘set’ – in particular, our perceptual access to some sets of physical objects. This provides some basic knowledge of the kind, but further knowledge is via the role of sets in our empirical theories. This route to knowledge will seem plausible so long as the more complex sets are genuinely of a kind with the sets of our experience. But are they? Stewart Shapiro (1991) distinguishes the logical notion of set from the iterative notion of set, where logical sets are the sets of arbitrary combinations of members taken from a given basic domain (if our domain is physical objects, the logical sets would be the sets of physical objects). Granted that Maddy has provided a plausible story of how we might be able to have perceptual access to logical sets, still the sets required for mathematics and empirical science are the iterative sets of ZFU (the iterative hierarchy of Zermelo-Fraenkel set theory with the Axiom of Choice, or ZFC, with physical urelements at the ground level), which go far beyond this first stage.

If these are not of a kind with the logical sets, then it looks as though all we can really say about our knowledge of these sets is that they are known via their role in our best scientific theories. And if so, we are no further on from the original Quinean story. Similar concerns could be raised for more recent attempts to account for our knowledge of mathematical objects via our knowledge of mathematical structures. Structuralist approaches to mathematics come in a variety of stripes, but ante rem structuralists (such as Shapiro 1997; Resnik 1997) hold that mathematical objects are positions in abstract structures and that we can know about at least some structures via their physical instantiations (through our acknowledged ability to recognise patterns). Again, if this epistemological story is meant to show how we can grasp the kind, abstract structure, to which both simple physically instantiated structures and the more complex uninstantiated structures of most of mathematics (including the iterative hierarchy of ZFC) belong, then a difficulty arises in showing that the latter structures are genuinely ‘of a kind’ with the instantiated structures that we are able to recognise through our ability to abstract patterns from particular phenomena. In re structuralists hold that while there are structured systems (and two systems of objects can share the same structure), there are no abstract structures over and above those instantiations. So the fact that we can tell a scientific story involving, as Shapiro (1997: 110) puts it, “only natural processes amenable to ordinary scientific scrutiny” about our ability to recognize patterns/structures in physical systems should not necessarily be interpreted as providing an account of our knowledge of structures as abstract objects. Indeed, on Shapiro’s epistemological picture, our knowledge of the more complex uninstantiated structures of much of mathematics is via the axiom systems that work as implicit definitions of these structures – via our grasp of the logical possibility of those axioms (in Shapiro’s terminology, their coherence) and our grasp of their logical consequences. But this knowledge will only yield knowledge of abstract structures if we already have some reason for believing that there is some structure that our axioms truly describe, and here the best the realist structuralists can do, it seems, is to rely again on the indispensability considerations as providing reason for taking mathematical theories to be true (see MacBride (2008) for a detailed critique of Shapiro’s epistemology and Shapiro (2011) for a response).

3 Mathematical truth without mathematical objects

An alternative structuralist proposal, offered by Geoffrey Hellman (1989, 1996), questions the claim that the indispensability of mathematics justifies belief in the truth of our mathematical theories when construed as truths about mathematical objects. Instead, Hellman suggests, we can construe our physical theories in such a way that “rather than commitment to certain abstract objects receiving justification via their role in scientific practice, it is the claims of possibility of certain types of structures that are so justified” (Hellman 1989: 96–97). Hellman thus offers the alternative of a realism-in-truth-value without a realism in ontology, trading an ontology of abstracta for an ‘ideology’ of modal commitments (for another attempt to make such a trade, see Chihara 1990). In the case of our empirical scientific theories, Hellman takes it that our empirical claims are elliptic for claims about what would be the case were there mathematical objects as described in our axiomatic mathematical theories and not interfering with the way the material world actually is (the standard assumption of mathematical objects as abstract is meant to guarantee that they don’t interfere with how things are with material objects). Hellman considers a number of ways of developing this idea, including ways that attempt to minimize the modal existence assumptions (such as by using only mathematical objects that can be expressed in second-order real analysis (RA2), arguably interpretable in terms of possible physical structures).

But to capture all of contemporary science it seems that we need richer structures than these. By Hellman’s reckoning, the second-order Zermelo axioms together with an additional claim that there is only one limit ordinal (Hellman labels these axioms Z+), with physical objects as urelements, will do (though once one goes beyond RA2 and beyond the realm of mathematics that can be interpreted physically there’s no principled reason to stop at Z+ rather than, for example, full ZFU [ZFC + urelements]). So the real content of an empirical statement A linking mathematical to non-mathematical objects (such as, for example, our equation Gµν = 8πTµν) is interpreted as saying only that A follows logically from the assumption that, in addition to whatever material objects there really are, there are also sets as characterized by Z+ (or ZFU), whose addition to the material world leaves everything material as it is. And why should we, in doing science, be interested in what follows logically from this assumption? Because some of the consequences of this assumption are purely empirical, and if it’s right that the addition of sets doesn’t interfere with how things actually are materially, then if an empirical prediction about material objects is derived from our scientific laws (understood modal structurally), then we will have reason (as much reason as we have to believe our scientific laws reflect material reality) to expect that claim to be true.
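The shape of Hellman’s paraphrase can be indicated schematically; what follows is a rough sketch that suppresses the details of his second-order modal formalism:

```latex
% Hypothetical component: in any possible Z+ structure X built over the
% actual material urelements (and not interfering with them), A holds:
\Box\,\forall X\,\bigl[\, X \models \mathrm{Z}^{+} \;\rightarrow\; A \text{ holds relative to } X \,\bigr]
% Categorical component: such structures are logically possible
% (so the hypothetical claim is not vacuously true):
\Diamond\,\exists X\,\bigl[\, X \models \mathrm{Z}^{+} \,\bigr]
```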

Rejecting mathematical realism

Modal structuralism requires a non-face-value interpretation of scientific discourse, taking categorical claims apparently about the relation between mathematical objects and physical objects to be elliptic for hypothetical claims about what would be the case were there mathematical objects satisfying the axioms of our preferred mathematical theories, over and above whatever material objects there actually are. But if Hellman is right that what is confirmed in science is just the relevant modal alternatives to the categorical claims in which our scientific theories are expressed, this undermines the main motivation provided by the QPIA for providing an account of mathematics as a body of truths. Rather than providing a non-face-value interpretation of our scientific theories, an alternative interpretation of this situation would be to say that what is confirmed by the success of science is not our scientific theories per se (which are just as they appear – theories whose central claims concern the relations between mathematical and physical objects) but rather their nominalistic content, that is, the picture they paint of the material world, regardless of whether the mathematical objects they posit exist. This moves us to the last of the ontological naturalist approaches outlined earlier: rather than try to accommodate mathematical truths within empirical science by naturalising mathematical objects (Maddy, Shapiro) or denying their ontological consequences (Hellman), the final option is to question whether the truth of mathematical claims is really confirmed along with our empirical scientific theories. Perhaps the best-known attempt to deny that the mathematics used in science receives empirical confirmation from the success of our empirical theories is that of Hartry Field (1980). Field falls firmly in the scientific realist camp, holding with Quine that whatever posits are indispensable in formulating our best scientific theories are confirmed wholesale via the success of those theories. However, unlike Quine and Putnam, Field thinks that mathematical posits can be dispensed with in formulating our best scientific theories. Field’s dispensability programme involves showing that our ordinary (mathematically stated) scientific theories are conservative extensions of nominalistically acceptable alternative theories, produced by adding mathematics (in the form of set theory with physical objects as urelements) to those theories. The nominalistically acceptable theories to which mathematics is added are, Field argues, better theories than the mathematical versions, in that they allow for intrinsic explanations of phenomena that do not, for example, appeal to contingencies in the way that the mathematics is added, such as choice of coordinate system.
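The conservativeness claim at the heart of this programme can be stated schematically; this is a simplified rendering, leaving aside the refinements Field needs to state the claim exactly. For a mathematical theory M and a nominalistically stated theory N, M is conservative over N just in case, for every nominalistically statable claim A:

```latex
N + M \models A \;\Longrightarrow\; N \models A
% Adding the mathematics yields no new nominalistic consequences;
% it merely eases the derivation of consequences N already has.
```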

So it is those theories that we ought to believe (if we are scientific realists). And if we do believe these theories, then we have an excellent explanation of why we ought also to believe the predictions of our ordinary (mathematically stated) scientific theories even if we do not believe that there are any mathematical objects. Since those theories are conservative extensions of nominalistic theories that we do believe, any of their nominalistically stated consequences are already consequences of the nominalistic theories that they extend. So given that we believe the nominalistic theories (and hence all their consequences), we should also believe the nominalistic consequences we derive from those nominalistic theories with the help of added mathematical assumptions. In this respect mathematics, in Field’s picture, plays the role of what Carl G. Hempel (1945: 391) calls a “theoretical juice extractor”:

The techniques of mathematical and logical theory can produce no more juice of factual information than is contained in the assumptions to which they are applied; but they may produce a great deal more juice of this kind than might have been anticipated upon a first intuitive inspection of those assumptions which form the raw material for the extractor.

Field’s programme requires that one traverses what Mark Colyvan (2010) has called the ‘hard road’ of dispensing with mathematics: coming up with attractive nominalistically stated alternatives to our usual mathematically stated scientific theories. Field (1980) sketches a nominalisation of Newtonian gravitational theory, but it remains an open question whether this strategy can be extended to apply to contemporary physics, which takes a somewhat different form (Malament [1982] expresses clearly some concerns about the possibility of extending Field’s strategy to, for example, theories that trade in phase-spaces that codify all possible dynamical states for a physical system). Hellman’s approach to our scientific theories, on the other hand, paves the way for an intriguing alternative to Field’s ‘hard road’. Rather than straining to write down in purely non-mathematical terms what we take to be the truths about the physical world, why not make use of the hypothesis that there are mathematical objects in expressing our scientific theories but restrict our realism to the claim that the material world is as described in those theories (i.e., to the claim that those theories would be true were there a realm of non-interfering mathematical objects over and above whatever physical objects there actually are)? This is the strategy of ‘easy road’ nominalists (myself included [Leng 2010]; but see also Azzouni [2004], Balaguer [1998], Melia [2000], and Yablo [2002, 2005] for approaches along these lines). Such nominalists hold that, since mathematical objects are hypothesised not to interfere with physical objects (in the sense that the axioms generating our mathematical theories – such as ZFU with physical objects as urelements – are designed so as to be compatible with all contingent matters of fact), it is plausible to be realist not about our full – mathematically stated – scientific theories but only about their nominalistic content. We say that a theory that is correct in its nominalistic content is nominalistically adequate, so according to easy road nominalists, the empirical success of our theories confirms not their truth but only their nominalistic adequacy. The claim that a theory is nominalistically adequate has been cashed out in various more or less technical ways. For Balaguer (1998: 135), “The nominalistic content of a theory T is just that the physical world holds up its end of the ‘T bargain’, that is, does its part in making T true”. Rosen (2001) gives a modal characterization. For Rosen, the “concrete core of a world W is the largest wholly concrete part of W: the aggregate of all the concrete objects that exist in W”, and a theory S (usually describing mathematical and nonmathematical objects and their relations) is “nominalistically adequate iff the concrete core of the actual world is an exact intrinsic duplicate of the concrete core of some world at which S is true – that is, just in case things are in all concrete respects as if S is true” (75).

In my own (2010: 183) discussion of this notion, I argue that Balaguer’s notion can be captured by the idea that the claims of our theory are fictional in the fiction generated by taking the physical world as it actually is and adding to that as assumptions the axioms of ZFU. This is (with inessential details such as the precise choice of set theory aside) essentially equivalent to Hellman’s picture, according to which an empirical claim A is elliptical for the claim that A is a consequence of the assumption that there are structures satisfying the axioms of ZFU, which do not interfere with the way material objects actually are. The only real difference is that Hellman takes this to be an interpretation of what our scientific theories are really saying, whereas I take a face-value reading of those theories and argue that, while those theories are false, their success is explained by the fictionality of their claims (which, given the non-interference of the mathematical assumptions, implies that their material consequences will be true). This difference is, however, perhaps a matter more of taste than of substance.

4 Naturalism, realism, and scientific confirmation

Assuming (against Field’s laudable efforts to show the contrary) that our best scientific theories are mathematically stated, we have, then, on the table four approaches to accounting for their mathematical components within a broadly naturalist setting. These approaches are not meant to be exhaustive but indicative of a variety of routes one might take. The first two, Maddy’s and Shapiro’s, attempt to build up to an epistemology of mathematical objects from the epistemology of the everyday but ultimately, I have suggested, have little more to say about our knowledge of mathematical abstracta than the standard Quinean holistic picture (that we know them through their role in our scientific theories), for which the question remains: how could the empirical methods which lead us to develop the theories we have provide us with reliably correct beliefs about objects of that sort? On the other hand, both Hellman and the easy-roaders take it that there is an understanding of the role played by mathematics in our scientific theories such that empirical methods do not confirm the existence of mathematical objects but only that our theories accurately capture the consequences of the assumption that, along with all the material things there actually are, there are also non-interfering (read ‘abstract’) objects satisfying the axioms of our favourite set theory with physical objects as urelements (e.g., ZFU). Can anything be said to decide between these competing pictures of what is confirmed by the success of science (that is, its truth or its nominalistic adequacy)? In the analogous debate between scientific realism and constructive empiricism, realists try to point to aspects of scientific practice that can’t easily be understood from within the constructive empiricist’s worldview. In particular, realists will ask, can constructive empiricists in good conscience appeal to explanations of observable phenomena in terms of unobservables if they do not believe there are such things? Card-carrying constructive empiricists of course resist inference to the best explanation, holding that they can simultaneously accept the explanation that the presence of a charged particle caused the presence of the track in the cloud chamber and insist that they don’t believe in charged particles. But many nominalist philosophers of mathematics wish to resist what they see as the extremes of constructive empiricism and will agree with realists that unless we believe in the particle alluded to as cause in this explanation, this putative explanation has no explanatory force. But this, then, opens up the possibility for evidence that might similarly require us to be realist about more than just the nominalistic content of our scientific theories. For, if mathematical objects play an analogous explanatory role, then if we wish to make use of explanations that appeal to mathematical objects it looks as though we cannot restrict ourselves to belief in just the nominalistic content of our mathematical theories but must also believe in the mathematical objects appealed to in providing mathematical explanations of physical phenomena. (See also J. Saatsi, “Realism and the limits of explanatory reasoning”, ch. 16 of this volume.)


Thus a great deal of recent discussion of the indispensability argument for mathematical realism has revolved around the question of whether there are any genuine mathematical explanations of physical phenomena. A number of examples of purported mathematical explanations have been given, but perhaps the most influential (and most straightforward to grasp) is raised by Alan Baker (2005) and concerns the explanation of the period lengths of periodical Magicicada cicadas, North American insects that spend a long nymphal period underground before emerging as adults to breed every 13 or 17 years (13 in the warmer southern states and 17 in the cooler northern states). In explaining why the cicadas have these particular period lengths, biologists look to the evolutionary history of the cicadas and hypothesise that in their early history there were other periodical creatures (such as predators) that it would have been advantageous for the cicadas to avoid meeting. With this hypothesis in place, we can answer the question of why the cicadas settled on the particular period lengths they did: because 13 and 17 are prime numbers, and prime periods minimize overlaps (compare with period lengths of 16, which would overlap with periods of length 2, 4, and 8). Like the constructive empiricist, who wishes to say ‘The presence of a charged particle explains the appearance of the track in the cloud chamber, but I do not believe in charged particles’, Baker (2009: 627) charges that the easy-road nominalist who accepts such explanations will be faced with the ‘Moorean’ prospect of claiming, ‘The fact that 13 and 17 are prime numbers explains why the cicadas have evolved the period lengths they have, but I do not believe in prime numbers’.

There have been many ‘easy-road’ responses to purported examples of mathematical explanations of physical phenomena. A natural nominalist response would be to argue that, while mathematics may be indispensable to formulating some of these explanations, the explanatory work is nevertheless being done by the nominalistic content expressed by the explanations rather than the mathematics that is used to express this nominalistic content, so that these examples of mathematics playing an explanatory role do not present any further problem to the nominalist than is presented by the standard indispensability argument and can be answered in the same way (see Saatsi [2011] and Daly and Langford [2009] for proposals along these lines; Baker and Colyvan respond to the latter of these in their [2011]). In my own work (Leng 2012) I have argued that there are some mathematical explanations of physical phenomena where the mathematics is doing genuine explanatory work (over and above what is expressed in the nominalistic content of these explanations) but that these explanations, when properly understood, can be seen to function as structural explanations. Such explanations explain by showing a particular physical system to instantiate (or approximately instantiate) an (axiomatically described) mathematical structure. As such, they do not require belief in abstract mathematical objects but only in the approximate instantiation of mathematical axioms in physical systems. (See also J. Saatsi, “Realism and the limits of explanatory reasoning”, §4, ch. 16 of this volume.)
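The elementary arithmetic behind Baker’s example can be made explicit (a worked illustration of my own, not part of Baker’s or the biologists’ presentation): a cicada with an $n$-year cycle and a predator with a $p$-year cycle emerge together every $\mathrm{lcm}(n, p)$ years, and prime periods maximize this interval:

\[
\mathrm{lcm}(n, p) = \frac{n \cdot p}{\gcd(n, p)}, \qquad \text{so if } n \text{ is prime and } p < n, \text{ then } \gcd(n, p) = 1 \text{ and } \mathrm{lcm}(n, p) = np .
\]

Thus a 17-year cicada meets a 4-year predator only every $\mathrm{lcm}(17, 4) = 68$ years, whereas a 16-year cicada would meet it every $\mathrm{lcm}(16, 4) = 16$ years, that is, on every emergence.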

5 Naturalistic thin realism or arealism?

The debate over the explanatory role of mathematical objects in empirical science continues, but let us imagine for the moment that it is settled, and settled in favour of the ‘easy-road’ nominalists. Would that put paid to the attempt to derive mathematical realism from a naturalistic respect for science? Arguably, norms of economy require that we believe only in the objects that are required to account for the success of our scientific theories, so if that success can be explained without adopting realism about mathematical objects, then we ought not to be realists. However, intriguing recent work by Penelope Maddy (2011) has suggested that anti-realists may be too quick to draw this conclusion. Despite Maddy’s (1990) endorsement of the QPIA, by Naturalism in Mathematics (1997) Maddy was questioning the idea that mathematics receives confirmation from its role in our empirical theories. In her work since then Maddy has developed a naturalistic approach to mathematics that sees mathematics as a successful discipline in its own right, focussing on mathematical reasons to adopt particular axiom systems rather than reasons for believing mathematics stemming directly from the role of mathematics in empirical science (she is sceptical that there are any such reasons).

Maddy suggests a realist approach to mathematics that takes as its starting point the thought that the reasons appealed to within mathematics as reasons to adopt particular axioms just are reasons to believe those axioms. This starting point stems from Quinean naturalism, together with the idea that mathematics is itself a science and as such is deserving of the respect naturalism affords to the sciences. Since the reasons that mathematicians appeal to in support of adopting particular mathematical axioms seem to stem from internal mathematical considerations to develop mathematically deep theories, Maddy (2011: 82–83) suggests that we should conclude from this that mathematical objects just are the objects required by mathematically deep theories:

the objective ‘something more’ that our set-theoretic methods track is these underlying contours of mathematical depth. Of course the simple answer – they track sets – is also true, so what we’ve learned here is that what sets are, most fundamentally, is markers for these contours; what they are, most fundamentally, is maximally effective trackers of certain strains of mathematical fruitfulness.

It is interesting to note the degree of agreement here between Maddy’s recent realist contribution (which she labels ‘thin realism’, as against the ‘robust’ realism of more traditional platonism) and the nominalism of the easy-roaders. Both sides agree that mathematical theories do not receive confirmation directly from their role in science. And both sides can agree that within mathematics, when developing mathematical theories, mathematicians are guided by considerations of mathematical depth: we do not simply choose any old axioms but craft our axioms to describe mathematically interesting possibilities. Where they disagree is on whether to call the resulting mathematical theories true and their objects existent.

We might hope that Quinean naturalism can provide some help with this standoff over whether to use these terms. After all, Quine’s naturalism presents a proposal for how to use the words ‘truth’ and ‘existence’, taking our cue from natural science. Part of the motivation of Quine’s naturalism was as a response to Rudolf Carnap’s (1950) scepticism about the meaningfulness of ontological debates. On Carnap’s picture, we can only conduct an empirical enquiry after we have adopted conventions concerning the meanings of our terms (via linguistic frameworks), but once a framework has been adopted, what we traditionally take to be substantial ontological questions are actually trivial consequences of the framework’s meaning conventions. We cannot ask whether, for example, sets exist but only whether adopting a framework that includes set theory is practical for our purposes. Quine’s response is to argue that a practical reason to adopt a framework for scientific purposes, that is, as part of our best efforts to systematize our experience, just is a reason to believe that that framework has got things right, so the fact that some components of our theories start out as conventions governing the meanings of terms does not prevent them from receiving empirical confirmation. So while we recognise the possibility of multiple different frameworks with their own internal standards for what they count as ‘true’ and which objects they take to ‘exist’, our notions of truth and existence should be the notions at work in our best science.

Here, though, is where things get tricky. Recall that we are granting that mathematical theories don’t get confirmed through their role in empirical science. Maddy’s thin realist says, nevertheless, that we should take mathematical theories to be true since mathematics is itself a science, with its own internal standards of confirmation that we should, as naturalists, respect. On the other hand, the nominalist easy-roaders (in Maddy’s terms, arealists) hold that mathematics, though an extremely useful instrument in the sciences, is not itself a science and therefore not a body of truths. “So, which is it?” Maddy (2011: 102) asks. “Is pure mathematics just another inquiry among many or is it a different sort of thing that’s immensely helpful to the others?” Maddy’s surprising answer is that there is no objectively correct answer to this question. We have a notion of ‘exists’ that comes from empirical science, whose paradigmatic examples include medium-sized physical objects, their physical constituents, and objects entering into causal relations. In some respects the objects posited by our mathematical theories are like these objects – they have their properties independently of what we believe them to have; they function semantically as objects – but in other respects they are quite different – non-spatiotemporal, acausal, known through their presence in mathematically deep theories rather than through their leaving a causal trace on the world. In asking whether we should take it that our scientific methods have confirmed the existence of these objects, Maddy argues that our baseline understanding of truth and existence does not answer this question. Drawing on an example of Mark Wilson’s (2006), of the decision whether to count new non-paradigmatic examples of frozen water as ice or merely ‘ice-like’, Maddy (2011: 112) argues that this is simply a terminological decision:

Just as amorphous ice can be classified as ice or as ice-like, mathematics can be classified as science or as science-like – and nothing in the world makes one way of speaking right and the other wrong.

If Maddy is right, then the naturalistic approach to ontology (reading our ontology off our scientific theories) does not require mathematical realism or nominalism but rather shows the choice between the two positions to rest ultimately, to borrow Carnap’s (1950: 249) phrase, on “a matter of decision, rather than assertion”.

6 Conclusion

The QPIA offers a route from methodological naturalism to mathematical realism. But the presence of mathematical assumptions in our empirical theories may not be sufficient to support realism about either their truth value or their ontology. Nominalist alternatives suggest that we can account for the use of mathematics in science by understanding those theories as elaborating what follows from the assumption that there are, alongside all the physical objects in the actual world, ‘noninterfering’ mathematical objects satisfying the axioms of mathematical theories such as ZFU. But even with indispensability considerations neutralised, this does not yet settle the question of the status of mathematical realism in a naturalistic worldview. The naturalist’s mission is to take their notion of ‘existence’ from natural science. But if Maddy is right that this notion is not sufficiently specified to determine its own application in the case of mathematics, then the right naturalist conclusion might be that fictionalism, modal structuralism, and ‘thin’ realism are not in the end in competition but simply “alternative ways of expressing the very same account of the objective facts that underlie mathematical practice” (Maddy 2011: 112).

References

Azzouni, J. (2004) Deflating Existential Consequence: A Case for Nominalism, Oxford: Oxford University Press.
Baker, A. (2005) “Are There Genuine Mathematical Explanations of Physical Phenomena?” Mind 114, 223–238.
Baker, A. (2009) “Mathematical Explanation in Science,” British Journal for the Philosophy of Science 60, 611–633.
Baker, A. and Colyvan, M. (2011) “Indexing and Mathematical Explanation,” Philosophia Mathematica 19(3), 323–334.
Balaguer, M. (1998) Platonism and Anti-Platonism in Mathematics, Oxford: Oxford University Press.
Benacerraf, P. (1973) “Mathematical Truth,” Journal of Philosophy 70, 661–679. Reprinted in Benacerraf and Putnam (1983), pp. 403–420.
Benacerraf, P. and Putnam, H. (1983) Philosophy of Mathematics (2nd ed.), Cambridge: Cambridge University Press.
Carnap, R. (1950) “Empiricism, Semantics, and Ontology,” Revue Internationale de Philosophie 4, 20–40. Reprinted in Benacerraf and Putnam (1983), pp. 241–257.
Chihara, C. (1990) Constructibility and Mathematical Existence, Oxford: Oxford University Press.
Colyvan, M. (2010) “There Is No Easy Road to Nominalism,” Mind 119, 285–306.
Daly, C. and Langford, S. (2009) “Mathematical Explanation and Indispensability Arguments,” Philosophical Quarterly 59, 641–658.
Dummett, M. (1978) Truth and Other Enigmas, Cambridge, MA: Harvard University Press.
Field, H. (1980) Science without Numbers: A Defence of Nominalism, Princeton, NJ: Princeton University Press.
——— (1989) Realism, Mathematics and Modality, Oxford: Blackwell.
Gödel, K. (1947) “What Is Cantor’s Continuum Problem?” American Mathematical Monthly 54, 515–525. Revised and expanded version (1963), reprinted in Benacerraf and Putnam (1983), pp. 470–485.
Hellman, G. (1989) Mathematics without Numbers, Oxford: Clarendon Press.
——— (1996) “Structuralism without Structures,” Philosophia Mathematica 4, 100–123.
Hempel, C. G. (1945) “On the Nature of Mathematical Truth,” American Mathematical Monthly 52, 543–546. Reprinted in Benacerraf and Putnam (1983), pp. 377–393.
Kalderon, M. (ed.) (2005) Fictionalism in Metaphysics, Oxford: Oxford University Press.
Leng, M. (2010) Mathematics and Reality, Oxford: Oxford University Press.
——— (2012) “Taking It Easy: A Response to Colyvan,” Mind 121, 983–995.
MacBride, F. (2008) “Can Ante Rem Structuralism Solve the Access Problem?” Philosophical Quarterly 58(230), 155–164.
Maddy, P. (1990) Realism in Mathematics, Oxford: Clarendon Press.
——— (1997) Naturalism in Mathematics, Oxford: Clarendon Press.
——— (2011) Defending the Axioms, Oxford: Oxford University Press.
Malament, D. (1982) “Review of Hartry Field, Science without Numbers,” Journal of Philosophy 79, 523–534.
Melia, J. (2000) “Weaseling Away the Indispensability Argument,” Mind 109, 455–479.
Putnam, H. (1971) “Philosophy of Logic,” reprinted in his (1979), pp. 323–357.
——— (1975) “What Is Mathematical Truth?” reprinted in his (1979), pp. 60–78.
——— (1979) Mathematics, Matter and Method: Philosophical Papers (Vol. 1, 2nd ed.), Cambridge: Cambridge University Press.
Quine, W. V. (1951) “Two Dogmas of Empiricism,” reprinted in his (1961), pp. 20–46.
——— (1961) From a Logical Point of View, Cambridge, MA: Harvard University Press.
——— (1981) Theories and Things, Cambridge, MA: Harvard University Press.
——— (1981a) “Things and Their Place in Theories,” in Quine (1981), pp. 1–23.
Resnik, M. D. (1997) Mathematics as a Science of Patterns, Oxford: Clarendon Press.
Rosen, G. (2001) “Nominalism, Naturalism, Epistemic Relativism,” Philosophical Perspectives 15, 69–91.
Saatsi, J. (2011) “The Enhanced Indispensability Argument: Representational vs. Explanatory Role of Mathematics in Science,” British Journal for the Philosophy of Science 62, 143–154.
Shapiro, S. (1991) Foundations without Foundationalism, Oxford: Clarendon Press.
——— (1997) Philosophy of Mathematics: Structure and Ontology, Oxford: Oxford University Press.
——— (2011) “Epistemology of Mathematics: What Are the Questions? What Count as Answers?” Philosophical Quarterly 61(242), 130–150.
Weir, A. (2010) Truth through Proof: A Formalist Foundation for Mathematics, Oxford: Oxford University Press.
Wilson, M. (2006) Wandering Significance, Oxford: Oxford University Press.
Yablo, S. (2002) “Abstract Objects: A Case Study,” Philosophical Issues 12, 220–240.
——— (2005) “The Myth of the Seven,” in Kalderon (2005), pp. 88–115.

33 SCIENTIFIC REALISM AND EPISTEMOLOGY

Alexander Bird

1 Introduction

Here are some theses frequently endorsed by scientific realists:

R1 The theories of mature sciences are very frequently highly successful (where the success of a theory may be articulated in various ways, for example the theory passes severe tests, or it makes novel predictions that are confirmed by observation, or it provides a unified explanation of disparate phenomena, etc.).
R2 The theories of mature sciences are very frequently true or close to the truth. And so, frequently, the entities, often unobservable, posited by the theories of mature sciences exist.
R3 This success is not accidental. Our belief in theories (or in their approximate truth) is frequently justified and often amounts to knowledge.
R4 The reasoning process by which we come to believe the theories of mature sciences very often is an inference to the best explanation.
R5 The reason why we should believe R2 is R1. That is, the best explanation for the success of the theories of mature sciences is that those theories are very frequently true or nearly true.

Here are two theses endorsed by many anti-realists:

A1 The apparently well-confirmed theories of mature sciences are very frequently found, in the long run, to be false.
A2 We should expect current theories to be falsified in due course (by induction on A1).

And here are theses endorsed by some anti-realists:

A3 We cannot know that a theory involving commitment to unobservable entities is true or close to the truth.
A4 Inference to the best explanation is not a reliable means of inferring that a theory is true (or approximately true), at least if it involves commitment to unobservable entities.


We might distinguish between pessimistic anti-realists and optimistic anti-realists. The latter, while believing that theories, or at least theories that concern unobservables, cannot attain truth, nonetheless accept that they can attain something else worthwhile, for example empirical adequacy. The pessimistic anti-realists hold that science cannot attain even that. The pessimists will tend to emphasize A1 and A2, whereas the optimists will focus on A3 and A4. The optimists may reject R4, claiming instead that the reasoning processes of science are something other than Inference to the Best Explanation (IBE), something that can deliver a worthwhile outcome falling short of knowledge of the truth of theories. For example, a subjective Bayesian might claim that Bayesian conditionalization can give us rational credences; others may assert that enumerative induction can give us conclusions about laws concerning the phenomena that are increasingly close to the truth. The purpose of this chapter is to consider how these theses are informed by reflections in mainstream epistemology.
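For reference, the conditionalization rule just mentioned can be stated in a line (a standard textbook formulation rather than anything specific to this chapter): on learning evidence $e$, the agent’s new credence in a hypothesis $h$ is her old credence in $h$ conditional on $e$,

\[
P_{\mathrm{new}}(h) = P_{\mathrm{old}}(h \mid e) = \frac{P_{\mathrm{old}}(e \mid h)\, P_{\mathrm{old}}(h)}{P_{\mathrm{old}}(e)} .
\]

Nothing in the rule itself guarantees that the resulting credences concerning unobservables are true, which is presumably why the optimistic anti-realist can endorse it while rejecting R4.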

2 Inference to the best explanation – the problems

Inference to the Best Explanation (IBE, abduction) plays an important part in the realist’s argument for the four realist theses articulated earlier. R4 itself notes that IBE is often the means by which we come to believe in the (at least approximate) truth of scientific theories and the existence of the entities they posit. And so if one believes R2, one must also believe that IBE is frequently a good guide to the truth. Furthermore, R5 itself employs an inference to the best explanation in inferring a general truth about scientific theories (that most of them are nearly true) from a general observation about science (science is generally successful). Correspondingly, one flavour of anti-realism (a flavour that accepts the success of science) rejects the value of IBE as a guide to the truth, hence A4. (See J. Saatsi, “Realism and the limits of explanatory reasoning,” ch. 16 of this volume.)

IBE is an inference procedure that is not exclusive to science (Harman 1965). Prima facie we use IBE to make everyday inferences, for example, concerning where the fault in a motor car is, why there is a fresh hole in the lawn this morning, who murdered Lord Edgware, and how Leicester City won the league. Furthermore, IBE is seen by a number of mainstream epistemologists as providing the right approach to answering Cartesian scepticism (e.g. Vogel 1990). So the question of the reliability of IBE is not exclusively a concern for epistemological philosophy of science; the efficacy of IBE as a truth-tropic inference procedure is of general epistemological interest.

Lipton (2004) identifies a number of general problems for IBE. Hungerford’s objection states that the evaluation of ‘goodness’ in deciding which explanation is the best is too subjective to correlate with the truth.1 Voltaire’s objection states that IBE assumes that the actual world is the best possible world by explanatory standards; but why should we believe that? And Underconsideration raises the concern that we can only select the best of the explanations we have been able to think of, but we may not have considered the actual, true explanation.

Lipton provides his own responses to these problems. For example, he holds that Voltaire’s problem is just a version of Hume’s problem of induction and so does not present a special problem for IBE. That means that this problem should not be the basis of an argument for A4 by, for example, a constructive empiricist, who thinks that there are some good ampliative inferences in science (e.g. uses of enumerative induction) but that IBE is not among them. Lipton’s response to Underconsideration exploits the theory-dependence of the reasoning by which we rank hypotheses in order to select the best hypothesis. For example, coherence with well-established theories and background beliefs will be one factor in determining how good an explanation is. That process of ranking will not be reliable if the background theories/beliefs are false. So we can assume that if ranking is reliable those background theories/beliefs are often true. And frequently those background theories and beliefs will have been arrived at by IBE. So it must be that IBE does frequently deliver the truth (and a fortiori it cannot be the case that we frequently fail to consider the correct explanation). Lipton does acknowledge that this argument assumes that the ranking of competing explanations is reliable. However, he says, if we deny this, then again we are back with Hume’s problem of induction rather than with a special problem for IBE.

So Lipton’s response to the problems facing IBE is, in effect, that they are not special problems for IBE. Insofar as they are problems, they are also problems for any inductivist. If Lipton is right about that, then the position of the optimistic anti-realist is unstable. For that anti-realist must accept that there is some answer to Hume’s problem, or else they should not hold that science can yield any worthwhile substitute for truth – even knowledge of the empirical adequacy of theories requires the reliability of enumerative induction. If, however, there is an answer to Hume’s problem, and Lipton is right, then there is no epistemological reason to avoid IBE.

3 Hume’s problem

In the light of the thought that the problems of IBE reduce to Hume’s problem, we should look at the nature of the latter and the most important response to it. The starting point is the observation:

I1 The conclusion of an inductive argument is not deducible from its premises.

So there is no logical contradiction in asserting the premises of an inductive argument while denying its conclusion.2 So on what ground should we believe the conclusion of an inductive argument given its premises? One response is to appeal to the general reliability of induction:

I2 Inductive arguments generally lead to true conclusions from true premises.

But on what ground should we believe I2? It is a general claim, which, if it is known at all, is known on the basis of our finding that our inductive reasoning generally works. If it did not generally work – if it did not help us distinguish the safe from the dangerous on the basis of experience – then we would not be around to discuss this question. As Hume points out, this reasoning, from past experience of the success of induction to its general reliability, is itself an inductive argument. And so this attempt to justify our use of induction presupposes that induction is a good way to argue. Consequently that attempt is vitiated by circularity and so provides no justification. Hence our use of induction cannot lead to knowledge:

¬♢IK It is not possible to gain knowledge by using induction.

Do the objections raised against IBE reduce to Hume’s problem? Certainly something like Hume’s problem can arise for IBE. It is widely assumed that:

E1 The conclusion of an inference to the best explanation is not deducible from its premises.

So why should we believe the conclusion of an instance of IBE? As in the case of induction we might need to appeal to IBE’s general reliability:

E2 Inferences to the best explanation generally lead to true or nearly true conclusions from true premises.


To justify E2 we might appeal to the fact that IBE has proven reliable in the past – when we have been able to check the conclusions of many past instances of IBE, they have been found to be true. That argument is an appeal to enumerative induction, and so we have a direct reduction to Hume’s problem of induction. An alternative justification for E2 takes R2 and R4 as premises. Since our theories are mostly true or approximately true and the reasoning processes that deliver them are often instances of IBE, IBE must generally lead to true or nearly true conclusions from true premises. Why should we believe R2? R5 tells us that we should believe R2 because it is the best explanation of the success of science. So this justification of IBE employs an inference to the best explanation. Furthermore, someone might query the inference from the truth of theories to the reliability of the method that produced them. For even an unreliable method can produce true conclusions on occasion. Still, a better explanation of the truth of the many theories of mature sciences is that they are produced by a reliable method. Hence this route to establishing the reliability of IBE itself depends on at least one prior application of IBE. So the parallel objection will be raised as in Hume’s problem: that it uses the very inference procedure that it seeks to justify (Fine 1991: 82).

On the other hand, these are not the problems that Lipton identifies: Underconsideration and Voltaire’s objection. It is no part of Hume’s problem that we might have failed to consider the full range of possible inductive projections in addition to our preferred one. Voltaire’s objection asks for our reason for thinking that the actual world is the best by our explanatory standards. And that seems not to have a parallel in Hume’s problem. Yes, the inductivist looks as if she is assuming that the world is regular rather than irregular. But she does not seem to have to assume that it is the most regular world or the best world by some standard of regularity. So these problems do not have a parallel in Hume’s problem. Furthermore, the characteristic feature of Hume’s problem – the apparent circularity of using induction in justifying induction – does not arise in Underconsideration or Voltaire’s problem. Nor does it appear that these problems would disappear if we had a solution to Hume’s problem.

4 Externalism and reliabilism

What does a solution to Hume’s problem look like? The most significant kind of response draws on epistemological externalism. Internalism is the claim that whether a subject is justified in believing that p or knows that p supervenes on the internal states of the subject.3 Externalism is the denial of internalism. (One could be internalist about justification but externalist about knowledge. For convenience I will consider internalism about justification and about knowledge together.) One influential version of externalism is reliabilism (Armstrong 1973; Goldman 1975, 1979), of which the following are simple formulations:

S’s belief that p is justified iff S acquired the belief that p by a reliable method.
S knows that p iff S believes that p, S acquired the belief that p by a reliable method, and it is true that p.4

The crucial thing about reliabilism is that the reliability of the belief-forming method in question is an external condition. Two subjects may be internally alike, yet the method used by one to form a belief is reliable and that used by the other is not; the former therefore may be justified in their belief and have knowledge, while the latter is not. So whether you know the answer to an arithmetical problem by using a calculator will depend on whether the calculator is functioning reliably. And that will not supervene on your internal states.


One important motivation (but not the only one) for externalism is epistemological naturalism (Quine 1969; Papineau 1993; Kornblith 2002). The latter aims to understand the epistemic states of an organism (human or otherwise) in terms of its need to interact successfully with its environment. In particular we can explain, in terms of natural selection, why sentient organisms have systems, above all their senses, for gathering and processing information about the organism’s environment in a reliable way. The resulting states of the organism are states of knowledge. This perspective is externalist because this explanation appeals only to the reliability of the organism’s cognitive systems. This reliability may depend not only on the organism but also on its environment; furthermore, this explanation makes no reference to the organism’s ability to reflect on and justify its use of those systems. The naturalistically inclined scientific realist will argue that the belief-forming processes of science can be seen as extensions of or additions to the cognitive systems with which the human organism is born.

4.1 Reliabilism and Hume’s problem

The pertinence of this to Hume’s problem is as follows. In some worlds, the regular worlds where observed regularities tend to hold into the future, using induction will be a reliable method of forming beliefs concerning generalizations (Mellor 1991). In such worlds the users of induction will be justified and will often gain knowledge. So reliabilism implies that inductive knowledge is possible:

♢IK It is possible to gain knowledge by using induction.

This denies the sceptical conclusion of Hume’s problem, ¬♢IK.

One common response to externalist views such as reliabilism is: ‘how does one know that the externalist condition is met (for example, that the method in use is indeed reliable)?’ The first thing to note is that the very point of externalism is that the subject does not have to be able to answer this question in order for the subject to have knowledge. The subject may be ignorant concerning their belief-forming method – so long as it is reliable, the belief will be knowledge. So the raising of the question cannot be an objection to reliabilism or to externalism more generally. That is precisely where the reliabilist and the inductive sceptic part company. The sceptic holds that the subject must be able to provide a justification for the use of induction in order for it to deliver knowledge. The reliabilist denies this: it suffices for inductive knowledge that induction is reliable; it is not necessary that the subject also has a justified belief that it is reliable.

The question ‘how does one know that the external condition is met (for example, that the method in use is indeed reliable)?’ might be intended in a different way. It is all very well showing that inductive knowledge is possible. But is it actual? Do our inductive sciences deliver knowledge? The reliabilist has an answer to this question. Note first of all that a positive response establishes that inductive knowledge is actual:

@IK We actually do gain knowledge by using induction.

It is not necessary to show that @IK is true in order to refute inductive scepticism, ¬♢IK, since the latter denies that knowledge is possible, not merely that we do not actually have knowledge. Nonetheless, it is not unreasonable to want to know whether @IK is true. And the reliabilist can argue that @IK can be known to be true. (The reliabilist may parenthetically remind us that her position implies that she is under no obligation to do this as far as refuting scepticism is concerned.) As Hume envisages, one examines the track record of inductive reasoning. Given its past success, one can argue that inductive reasoning is reliable, and so that when a subject does use induction to gain a true belief she does thereby have knowledge. Is this not circular, as Hume complained, aiming to establish the reliability of induction by using induction? The reliabilist notes that the subject does not assume that induction is reliable when using it. She may not even have considered the matter. So this is not like using a proposition as a premise in an argument in order to establish the truth of that very proposition – which is circular. Some externalists accordingly distinguish the latter, premise-circularity, from the circularity of the argument to establish @IK, which uses induction as a method or inference rule: rule-circularity (Braithwaite 1953; van Cleve 1984; Psillos 1999). Externalism denies that rule-circularity is a vicious kind of circularity. If induction is reliable, then it can be used to generate knowledge; and one of the things it can generate knowledge about is the reliability of induction.

4.2 Reliabilism and IBE

Do reliabilism and the reliabilist solution to Hume’s problem help with the problems of IBE? First, reliabilism tells us that if IBE is a reliable means of generating true beliefs, then it will give us knowledge. Do the sceptical worries addressed so far give us reason to think that IBE cannot be reliable? Hungerford’s objection says that our standards of explanatory goodness are too subjective to be a reliable guide to the truth. If that objection is correct, then it looks as if IBE is not in fact reliable. Is there any possible world where IBE, with subjective goodness, is reliable? That’s unclear; at least it would be an odd sort of world where the subjective standards of that world were a good guide to the truth. Voltaire’s objection says that for IBE to lead us to the truth, our world would have to be the best possible world by explanatory standards – and that seems implausible. Implausible but not impossible. Some world is the best by explanatory standards, and in that world IBE is reliable (and perhaps it is sufficiently reliable in some other nearly best worlds). Underconsideration says that we cannot be sure that we have thought of the correct explanation among those we are ranking. The reliabilist’s initial response here will be that if scientists (or others) are good at generating potential explanations, so that they reliably do include the correct explanation among those they consider, then IBE’s reliability will not be compromised on this count. Reliabilism requires only that our practice of IBE is reliable in this respect. Reliabilism does not require that for knowledge we must also have an argument to show that we always consider the correct explanation.

However, some have argued that there are additional reasons to think that our practice of IBE is not reliable in this respect, since in the past we have clearly failed to consider potential explanations that at a later date we do consider and then in fact take to be true. For example, Newton did not consider anything like Einstein’s theory of space, time, and gravity when devising his own explanation of celestial and terrestrial motion. Stanford (2006) gives further examples of more recent theories and of greater theoretical differences between established theories and their later, previously unconceived successors. (See also K. Stanford, “Unconceived alternatives and the Strategy of Historical Ostension,” ch. 17 of this volume.)

So the best one can say is that there is some world where IBE is reliable, in which case in that world IBE will lead to knowledge. That refutes the sceptic who espouses abductive scepticism:

¬♢AK It is not possible to gain knowledge by using inference to the best explanation.

Hence:

♢AK It is possible to gain knowledge by using inference to the best explanation.


On the other hand, the three objections do seem to cast doubt on the actuality of abductive knowledge:

@AK We actually gain knowledge by using inference to the best explanation.

(@AK entails E2 or something close to it.) While it is possible that we inhabit an explanatorily optimal world, Voltaire’s objection maintained that this just looks to be unlikely. Why should we have that good fortune when there are many worlds where the best explanations are often not true? Turning to Underconsideration, it is possible that scientists are always able to think up a range of possible explanations that includes the actual explanation. But one might doubt that they actually always do so. For, as mentioned, there is evidence that when looking for explanations of the evidence, scientists have not always been good at generating hypotheses that later scientists come to regard as correct. So these problems do not establish ¬♢AK, but they would establish a weaker form of abductive scepticism:

¬@AK We do not actually gain knowledge by using inference to the best explanation.

To reject the argument for ¬@AK it is not necessary to establish the truth of @AK. One might be able to undermine the reasons for thinking that @AK is implausible without having to gather evidence that shows @AK to be true. Since ¬@AK is a contingent claim and Underconsideration is supported in part by empirical data, it is likely that arguments to undermine support for ¬@AK will themselves use empirical data. They may well do that while falling short of providing evidence and argument that would show @AK to be true.

5 Observation and evidence

So far I have considered blanket scepticism with regard to IBE (and enumerative induction). But one might think that IBE can be reliable in some circumstances but not in others. This might allow for an optimistic scepticism: science, by IBE, can achieve some worthwhile epistemic goals but not some kinds of knowledge. Van Fraassen’s constructive empiricism (1980) comes close to saying something like this.5 IBE may be good for giving us knowledge about theories that wholly concern observable features of the world. On the other hand, when IBE is applied to theories with content concerning the unobservable, we cannot be sure of its reliability. The underdetermination argument seeks to establish that any attempt to theorize about the unobservable cannot lead to substantive knowledge about the unobservable (whether by IBE or some other route).

It is not immediately obvious why the observable/unobservable boundary should also mark a boundary between the reliability of IBE and its unreliability. According to the Underdetermination argument, this is because our evidence concerns the observable. For any such set of evidence there will be many competing theories that entail the very same evidence. So that evidence cannot provide a reason for holding one of those theories to be true and rejecting the others as false. This argument makes two assumptions:

S There is no more to evidential support than that the evidence be entailed by the hypothesis in question.

and:

E = O All our evidence is observational.
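Schematically, the two assumptions combine as follows (my own formalization of the argument as just stated, not notation used in this chapter): for rival theories $h_1$ and $h_2$ and any body of evidence $e$,

\[
(h_1 \vDash e) \wedge (h_2 \vDash e) \;\Longrightarrow\; \mathrm{supp}(e, h_1) = \mathrm{supp}(e, h_2) \quad \text{(by S)} .
\]

Since, by E = O, all our evidence is observational, and a theory about unobservables has rivals entailing the same observational consequences, no evidence can favour one such rival over the others.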


The standard response to this familiar underdetermination argument is to reject S. The hypothetico-deductive model of confirmation may endorse S, but that model is discredited. A central advantage of IBE is that it can allow for degrees of evidential support. Two hypotheses might both entail and explain the same evidence, but one will explain that evidence better than the other does. For example, one of the ways in which one explanation can be better than another is that it is simpler or more elegant than the other. So the evidence may support one hypothesis more than it supports the other, because the former is simpler or provides a more elegant explanation than the latter.

The underdetermination argument has affinities to the problem of induction, insofar as the latter starts by pointing out that our evidence, which concerns the past, is consistent with numerous ways in which the future can develop, including ways in which past patterns break down. One difference between the two arguments is that the inductive justification of induction looks plausible, initially at least. Some of our inductive predictions can be verified, and so we can see whether induction has a good track record. On the other hand, the unobservable does not become observable.6 So we cannot verify our past inferences concerning the unobservable. We are able to assess the track record of IBE when the latter generates conclusions that themselves wholly concern the observable. But we cannot assess its track record regarding unobservables. Unlike the inductive justification of induction, we do not have an evidence base for the inductive justification of IBE as applied to unobservables.

A reliabilist may complain, as we have seen, that regarding both inductive scepticism and the Underdetermination argument, the demand for a justification is misplaced. For a method (induction, IBE) to deliver knowledge, the subject does not have to know about the reliability of the method employed; it is sufficient that the method be reliable. However, the reliabilist also thinks that an inductive justification of induction is possible, even if it is not necessary (for inductive knowledge). It is (prima facie) an asymmetry between the two cases that there is not an inductive justification of IBE as applied to unobservables.

While S is the standard target of responses to the underdetermination argument, rather less attention has been given to E = O. This premise is an assumption of van Fraassen’s constructive empiricism as well as of the underdetermination argument. It is widely accepted by epistemologists and philosophers of science, often for reasons stemming from a broadly empiricist general epistemology (cf. Maher 1996; Schurz 2014: 23). Nonetheless, it can be rejected. Which propositions are evidence propositions? According to Williamson (1997) and Bird (2016), one’s evidence propositions are just what one knows. In the light of this, the premise E = O is asserting that all one’s knowledge is observational. That proposition, however, is precisely what the underdetermination argument seeks to establish. So the realist ought not concede E = O. Furthermore, scientists’ use of the term ‘evidence’ does not restrict it to the observational – at least not in any sense of ‘observational’ that is helpful to the anti-realist (Bogen and Woodward 1988).
For example, when scientists at CERN report evidence regarding theories relating to the standard model of particle physics, the reports concern the behaviour of sub-atomic particles in certain conditions. They say things such as, ‘Indirect evidence for the Higgs coupling to the top quark, an up-type quark and the heaviest elementary particle known to date, is implied by an overall agreement of the gluon-gluon fusion production channel cross-section with the standard model prediction’ (The CMS Collaboration 2014). Here the evidence is the value of an abstract theoretical quantity (the cross-section) relating to a sequence of particle interactions and decay events. For van Fraassen and other anti-realists, what is observable is what is perceptible (without artificial aids). So what the scientists regard as evidence is not observational in van Fraassen’s sense. Now, it is also true that the same scientists are comfortable with talking about observing gluon-gluon fusion. In that sense what they say about evidence does not contradict E = O. In which case neither evidence nor observation has any close relation to perception. If sub-atomic particles are observable, then A3 loses its interest – there isn’t much left to be anti-realist about.

6 On scepticism and realism – what can the arguments show?

6.1 Sceptical arguments and realism

The Cartesian sceptic argues that even if in fact a subject’s perceptual beliefs are true, they cannot amount to knowledge, because the subject cannot distinguish them from corresponding false beliefs (since the evil demon could have implanted erroneous but seemingly veridical experiences). The point of current interest is that this is an a priori argument that seeks to establish a necessary conclusion:

¬♢PK Perceptual knowledge is not possible.

The reliabilist response regards the argument as invalid, since according to the reliabilist the sceptic uses a mistaken (internalist) conception of knowledge. So the first thing that the reliabilist establishes is:

The sceptic’s argument for ¬♢PK is invalid.

This does not itself show that ¬♢PK is false – perhaps there are other, valid arguments for that conclusion. However, the reliabilist hopes to do better. There are possible worlds where perceptual belief formation is reliable, and so, according to her analysis, subjects in those worlds do have perceptual knowledge. Hence:

♢PK Perceptual knowledge is possible (¬♢PK is false).

This refutes the sceptic. But it does not establish:

@PK Perceptual knowledge is actual.

@PK is clearly a contingent claim that could only be established by a posteriori means. Establishing @PK is not something that could be achieved by purely a priori, philosophical reasoning. So although there can be prima facie reasonable purely philosophical (a priori) arguments for ¬♢PK and ♢PK, there is no such argument for @PK. If one were to try to establish @PK, one’s empirical, a posteriori investigation would most likely need to use perception. So that attempt would involve using perception to try to show that perception is actually reliable and so actually produces knowledge. While sceptics will complain that this is circular, we have already seen that the reliabilist has an answer – that this would be rule circularity and not premise circularity and so not a case of begging the question.

The position as regards the debate between the anti-realist and realist over IBE is largely the same, with a slight difference. As we saw, the strong abductive sceptic maintains:

¬♢AK It is not possible to gain knowledge by using inference to the best explanation.


A reliabilist may reject the argument for ¬♢AK by rejecting its internalist assumptions. The reliabilist will argue that ¬♢AK must be false because there is some world where IBE is reliable. The reliabilist thereby takes herself to have established, by a priori means, the negation of ¬♢AK:

♢AK It is possible to gain knowledge by using inference to the best explanation.

The claims just considered concern the possibility of knowledge from IBE. We also considered the opposing claims, weak abductive scepticism and abductive possibilism, concerning the actuality of knowledge from IBE:

¬@AK We do not actually gain knowledge by using inference to the best explanation.

and:

@AK We actually gain knowledge by using inference to the best explanation.

Both of these are contingent propositions, and we cannot expect purely philosophical (a priori) arguments to establish either. We saw that Voltaire’s objection and Underconsideration gave grounds that non-deductively support ¬@AK and so tell against @AK. A scientific realist might want to do two things in response. First, she may wish to resist these arguments for ¬@AK and against @AK. This is a defensive move; if successful, the conclusion is that realism has not been refuted. Secondly, she may wish to establish @AK as true. However, note that these aims are not only distinct; the second is far more demanding than the first. The first is just a matter of showing that the anti-realist’s argument for ¬@AK is defective. The weak abductive sceptic thinks that it is implausible that we gain knowledge from IBE. The realist need only show that, despite the anti-realist’s argument, there is some plausible way, consistent with what we know about the world, that IBE delivers knowledge. It is quite another thing for the realist to show that this way is actual, that we do in fact gain knowledge with IBE.

More generally, there is a considerable difference between the modest realist and the ambitious realist. The modest realist’s aim is to resist anti-realist arguments for propositions such as A2–A4 that reject claims to scientific knowledge or the truth/verisimilitude of scientific theories. The ambitious realist wants to argue in favour of propositions such as R2–R3, asserting that we do get to the truth or close to it with our theories and that we do generate scientific knowledge.

6.2 Ambitious realism

The ambitious realist tries to establish R2 and R3 via R5 – the best explanation of the success of the mature sciences is the (near) truth of their theories. This is the No Miracles Argument (Putnam 1975: 73). A more sophisticated version of the argument is developed by Boyd (1981: 617–618) and Psillos (1999: 78–81). This argues that our theory-laden methods are highly successful in experiments, tests, and other applications. So these methods are instrumentally reliable. And the best explanation of this reliability is the truth of the theories with which those methods are laden (the truth of the causal claims that underlie the reliability of the methods).

What would it take for the ambitious realist to succeed? This is a contentious question. From one perspective the task is enormous. R2 is a contingent claim. Although it is simply stated, it indirectly makes a very considerable claim about the world. For the theories of the mature sciences individually make substantive contingent claims. To say that such a theory is true is to repeat or endorse the claims it makes. And to say that all theories of a mature science are true is therefore to repeat or endorse very many substantive claims about the world. (This is only slightly reduced in force by downsizing realism to say that most theories are at least close to the truth.) How could one make such a claim without at least reviewing and endorsing the detailed evidence and arguments of the scientists?

On the other hand, it might be reasonable to endorse someone’s claims without examining their basis in detail, if one has a reason for thinking that they are generally reliable – if, for example, we know that they are the sort of person who only ever asserts what they have strong reason to believe. In the case of scientific realism, we might think that there is something in common among the theories of the mature sciences that makes them reliable. That might be, perhaps, the scientific method. One might, for example, think of the scientific method as a set of rules that tells one, given any hypothesis, what evidence one needs in order for the hypothesis to be likely to be true. So if a scientist conscientiously applies the scientific method and acquires the appropriate evidence, then her theories are likely to be true.

But this looks to be too simplistic a picture of the way that science works. It does not seem to allow for IBE. According to IBE, how well evidence e supports hypothesis h depends on what the rival hypotheses to h are. It is implausible that those rivals could always be generated methodically. So it is difficult to see how rules could tell us what evidence supports h in such a case. Furthermore, whether e supports h will typically depend on the truth of various background theories. This might be accommodated by the rules of method. But then it would not be possible to tell whether a scientist was using the method in a reliable way without investigating the truth of her background theories.

So it is doubtful whether there is anything like the scientific method that will do what the ambitious scientific realist wants of it, namely for it to be such that (i) it is highly general across science, (ii) one can tell without extensive scientific investigation that its use will tend to lead to the truth, and (iii) one can tell fairly easily that it is in fact being used in a reliable way across a lot of science. It does not look as if anything fulfils these desiderata. IBE itself does not. For the standards of explanatory goodness are not fixed and uniform across time and across scientific disciplines (cf. Kuhn 1977). What counts as a simple or elegant explanation in physics will look very different from a simple explanation in physiology. And it will differ between Aristotelian physics and quantum physics. And any application of IBE also depends on background assumptions, whose truth may vary from field to field. So we should not think that there is a single thing, inference to the best explanation, that is the same in its applications across all science. The use of IBE might be reliable in one field but not in another.

In summary, for the ambitious realist’s claim about science in general to succeed, she must either consider all the first-order arguments of the scientists and endorse their claims, or she must find some relevant feature that is common to the mature sciences and which she has reason to think would render them reliable.
The first is too ambitious, and in any case it does not seem to be a job for a philosopher – assessing first-order science is a scientist’s job. And the second is chimerical.

6.3 The pessimistic induction

The anti-realist ought not take too much heart from this failure of ambitious realism. For her view, if founded on the pessimistic (meta-)induction (PMI), may suffer from a parallel problem. From the premise, A1, that past, well-confirmed theories of the mature sciences are frequently refuted, the PMI makes an inference, A2, to the falsity of their current theories.7 However, as Goodman’s (1954) new riddle of induction reminds us, we should only endorse inductive reasoning of this sort if the properties involved in the induction are projectible.8 Is ‘well-confirmed theory of a mature science’ a projectible property? It is not obvious that it is. For one thing, the sciences change and develop – a very mature science is not the same as a recently matured science. Likewise, standards of confirmation vary and tend to become more stringent over time. So the well-confirmed theories of current very well matured sciences may not be sufficiently similar to the failed theories that make up the evidence for the pessimistic induction.

Just as the ambitious realist hoped to trade on the virtues of the scientific method (or something similar) as the common basis for the success of science, the anti-realist might hope that the vices of something like the scientific method will underpin the pessimistic induction. If there were a single scientific method in use throughout (mature) science, and unchanging over time, then one might reasonably infer from the failure of previous theories generated or confirmed by it to the probable failure of current and future theories.

While there might not be anything like the scientific method that is common to all mature science, past and present, the anti-realist may wish to base her argument on something weaker than a method but which nonetheless might be thought to explain the (alleged) consistent failure of science. This brings us back to the problems of Inference to the Best Explanation. If IBE were in widespread use among the mature sciences for the justification of their theories, then epistemic inadequacies in IBE would explain past failures and predict future ones. The modest realist can draw upon the very same argument just posed against the ambitious realist: the relevant features of IBE are not constant across all sciences and all times. Standards of explanatory goodness used to rank competing hypotheses will vary. Likewise, scientists’ ability to generate competing hypotheses may differ between fields and eras. Just as this variation and change give reason to reject any proposal that IBE is reliable across all science, so they undermine the claim that it is unreliable across all science.

7 Conclusion

In this essay I have touched on the following themes:

• The centrality of IBE to debates concerning realism.
• Internalism versus externalism in epistemology.
• Coming to know the reliability of induction and IBE using induction and IBE.
• Whether the observable/unobservable distinction marks a significant epistemological boundary.
• The dialectic of the realism versus anti-realism debate.
• Modest versus ambitious realism.

I conclude by tying these together.

At the heart of many debates surrounding scientific realism we tend to find Inference to the Best Explanation. For example, IBE is held to be an ampliative form of inference – the conclusions of an inference to the best explanation are not deducible from its premises.9 The standard reason given for this assumption is that our evidence concerns the observable while theories often refer to the unobservable. Any such inference must be ampliative. Indeed, the value of IBE, on this view, is precisely that it allows us to make inferences about the existence of unobservable atoms and molecules, for example, on the basis of what we observe in our experiments. Because such an inference is ampliative, the evidence will underdetermine the choice of theory. But must our evidence always be observational? While philosophers of science often take that for granted, some non-empiricist epistemologists deny that claim. Rejecting that claim also removes a premise from the underdetermination argument.

Inference to the best explanation is central to the discussion of scientific realism for two reasons. First, because IBE characterizes the reasoning of much of science, at least of the theoretically

more interesting parts. And, secondly, because the No Miracles Argument itself employs an inference to the best explanation. The anti-realist therefore has a number of options:

• Argue that there are certain flaws in IBE that render it unable or unlikely to deliver the truth (such as Hungerford’s objection, Voltaire’s objection, and Underconsideration).
• Argue that realists’ use of IBE in the No Miracles Argument is viciously circular.
• Argue that the poor track record of science (shown by the eventual refutation of theories) gives us reason to expect the future failure of science.

The third of these, the Pessimistic Meta-Induction, does not mention IBE. Nonetheless, it can support the attack on IBE in two related ways. First, it provides evidence that IBE is unreliable. On numerous occasions it has led to false conclusions. And if it is unreliable, it cannot lead to knowledge, even on those occasions when the scientific theory in question is in fact true. Secondly, when successful theories are refuted they are frequently replaced by theories that had not been previously conceived of. That suggests that one of the sources of unreliability is Underconsideration.

At this point it is useful to consider the aims of the various participants in this debate. The anti-realist is a sceptic, though the degree and kind of scepticism varies. The most extreme anti-realist will (like the Cartesian sceptic) claim that scientific knowledge is not possible. To achieve knowledge by using IBE would require having a justification for IBE. But no justification is available that does not beg the question. For example, to provide such a justification one might try to use the No Miracles Argument to show that IBE does lead to the truth. But this, the extreme anti-realist will say, is viciously circular, for this NMA itself employs IBE: this is justifying IBE by using IBE. We can think of Voltaire’s objection and Underconsideration as objections from the extreme anti-realist along these lines. The anti-realist will say that in order to use IBE to gain knowledge, one must show that the actual world is the explanatorily best of possible worlds and show that we do conceive of the true explanation among those potential explanations we actively consider. But how could we show that without begging the question? Such a justification of IBE is not possible, and so knowledge from IBE is not possible either.

Now the epistemological internalism–externalism debate becomes relevant. According to a typical externalist position, such as reliabilism, it is not necessary that the knowing subject should be able to justify the belief-forming methods she uses. If the method is in fact reliable, that will suffice for knowledge. Since IBE could be reliable in a propitious world, a subject using IBE in such a world gets to know. The externalist view of knowledge thus claims to refute the extreme sceptic: knowledge from IBE is possible after all. Furthermore – and this is an additional point, not required for the former point – one of the things IBE can lead to knowledge about is the reliability of IBE itself. That reasoning is not viciously circular, since it is rule-circular not premise-circular. Extreme anti-realists are epistemological internalists; they say that scientific knowledge is not possible.

A less extreme version of anti-realism will argue that even if knowledge from IBE is possible, there are good reasons for thinking that it is not actual. The moderate anti-realist does not need to be an internalist. Earlier I said that Voltaire’s objection and Underconsideration are understood as extreme anti-realist arguments. However, they may also be used by the moderate anti-realist in a way that is consistent with externalism (just as the PMI is consistent with externalism). Because it is a more moderate form of scepticism, moderate anti-realism is less easy for a realist to refute than extreme anti-realism – an appeal to epistemological externalism will not suffice.
The moderate anti-realist will argue then that the conditions that would make scientific knowledge possible are not in fact found in the actual world. Their argument will thus have a

contingent element to it. How should the realist respond? Here we distinguish between the ambitious and the modest realist. The ambitious realist wants to show that the relevant conditions are actual – we therefore do have scientific knowledge when we use IBE. A modest realist aims only to undermine the anti-realist’s arguments. That can be done by arguing that certain conditions that would allow for knowledge are not in fact ruled out by the anti-realist’s arguments or by other contingent evidence. The modest realist can argue for this conclusion without also arguing for the stronger claim that such conditions actually obtain.

One reason for preferring modest realism is the worry that ambitious realism is overly ambitious. It aims, in effect, to endorse most of the findings of the mature sciences. That could reasonably be achieved only by identifying some belief-forming method common to the mature sciences and showing that method to be highly reliable. But is there such a method? Even if IBE is common to much science, it does not seem to be sufficiently method-like that we can be confident that its uses in disparate (mature) sciences amount to the same thing, such that we can endorse its reliability in all or most of those sciences. If that is correct, there isn’t a global second-order argument for realism (cf. Magnus and Callender 2004). The most the realist can hope for is to defeat each global argument from the anti-realist. Still, the same lack of a common belief-forming method will also undermine the reasonableness of the Pessimistic Meta-Induction. Maybe what we are left with is principally the first-order examination of the scientists’ methods and arguments; but that looks like more of a job for a scientist than for a philosopher.

Notes
1 I will not be discussing Hungerford’s objection in this chapter at any length.
2 This is not the same as saying – as is frequently said – that it is possible for the premises of an inductive argument to be true and the conclusion false. For the problem of induction remains even if the inductive conclusions are (a posteriori) necessary truths (e.g. all water is composed of H₂O).
3 This is not the only way of defining internalism (Pappas 2014). A slightly different one maintains that justification depends only on mental states. Mental states (for many externalists, e.g. Williamson 1995) may themselves be conditions external to the subject, though internalists will typically have an internalist conception of all mental states. Another characterization of the central internalist claim is: justification depends only on what is directly accessible (by introspection or reflection) to the subject.
4 Reliabilism in this form (and others) faces objections. The most obvious objection is that since the two reliabilist claims together entail that knowledge is justified true belief, this reliabilism does not escape the Gettier counter-examples.
5 Strictly, constructive empiricism is principally a view about the goal of science (that goal is empirical adequacy, not truth). However, it is difficult to see why this is a plausible goal without some degree of scepticism about the unobservable. Van Fraassen (1989) seems to adopt a stronger scepticism about IBE.
6 According to views such as van Fraassen’s, where observation is unaided perception.
7 For discussion see Poincaré (1943: 160) and Laudan (1981). See also P. Vickers, “Historical challenges to realism,” ch. 4 of this volume.
8 Goodman and subsequent authors have focussed on the consequent property green or grue in ‘all emeralds are green/grue’, but similar concerns arise for the base or antecedent property, emerald in this case.
9 But see Bird (2005) for a non-ampliative account of IBE. The problems discussed so far would be rather less pressing if IBE were not ampliative.

References
Armstrong, D. M. (1973) Belief, Truth, and Knowledge, Cambridge: Cambridge University Press.
Bird, A. (2005) “Abductive Knowledge and Holmesian Inference,” in T. S. Gendler and J. Hawthorne (eds.), Oxford Studies in Epistemology, Oxford: Oxford University Press, pp. 1–31.
——— (2016) “Evidence and Inference,” Philosophy and Phenomenological Research. doi:10.1111/phpr.12311
Bogen, J. and Woodward, J. (1988) “Saving the Phenomena,” Philosophical Review 97, 302–352.


Boyd, R. (1981) “Scientific Realism and Naturalistic Epistemology,” in P. D. Asquith and T. Nickles (eds.), PSA 1980 (Vol. 2), East Lansing, MI: Philosophy of Science Association, pp. 613–662.
Braithwaite, R. B. (1953) Scientific Explanation, Cambridge: Cambridge University Press.
Cleve, J. van (1984) “Reliability, Justification, and the Problem of Induction,” Midwest Studies in Philosophy 9, 555–567.
The CMS Collaboration (2014) “Evidence for the Direct Decay of the 125 GeV Higgs Boson to Fermions,” Nature Physics 10, 557–560.
Fine, A. (1991) “Piecemeal Realism,” Philosophical Studies 61, 79–96.
Fraassen, B. van (1980) The Scientific Image, Oxford: Oxford University Press.
——— (1989) Laws and Symmetry, Oxford: Oxford University Press.
Goldman, A. (1975) “Innate Knowledge,” in S. P. Stich (ed.), Innate Ideas, Berkeley, CA: University of California Press, pp. 111–120.
——— (1979) “What Is Justified Belief?” in G. Pappas (ed.), Justification and Knowledge, Dordrecht: Reidel, pp. 1–23.
Goodman, N. (1954) Fact, Fiction, and Forecast, London: Athlone Press.
Harman, G. H. (1965) “The Inference to the Best Explanation,” Philosophical Review 74, 88–95.
Kornblith, H. (2002) Knowledge and Its Place in Nature, New York: Oxford University Press.
Kuhn, T. S. (1977) “Objectivity, Value Judgment, and Theory Choice,” in The Essential Tension, Chicago, IL: University of Chicago Press, pp. 320–339.
Laudan, L. (1981) “A Confutation of Convergent Realism,” Philosophy of Science 48, 19–48.
Lipton, P. (2004) Inference to the Best Explanation (2nd ed.), London: Routledge.
Magnus, P. D. and Callender, C. (2004) “Realist Ennui and the Base Rate Fallacy,” Philosophy of Science 71, 320–338.
Maher, P. (1996) “Subjective and Objective Confirmation,” Philosophy of Science 65, 149–174.
Mellor, D. H. (1991) “The Warrant of Induction,” in Matters of Metaphysics, Cambridge: Cambridge University Press.
Papineau, D. (1993) Philosophical Naturalism, Oxford: Blackwell.
Pappas, G. (2014) “Internalist vs. Externalist Conceptions of Epistemic Justification,” in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2014 ed.). URL: https://plato.stanford.edu/archives/fall2014/entries/justep-intext/
Poincaré, H. (1943) La Science et l’Hypothèse, Paris: Edition Flammarion. First published 1902; references to English language edition Dover, New York 1952.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
Putnam, H. (1975) Mathematics, Matter, and Method: Philosophical Papers (Vol. 1), Cambridge: Cambridge University Press.
Quine, W. V. (1969) “Epistemology Naturalized,” in Ontological Relativity and Other Essays, New York, NY: Columbia University Press, pp. 69–90.
Schurz, G. (2014) Philosophy of Science: A Unified Approach, New York, NY: Routledge.
Stanford, P. K. (2006) Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, New York: Oxford University Press.
Vogel, J. (1990) “Cartesian Skepticism and Inference to the Best Explanation,” The Journal of Philosophy 87, 658–666.
Williamson, T. (1995) “Is Knowing a State of Mind?” Mind 104, 533–565.
——— (1997) “Knowledge as Evidence,” Mind 106, 1–25.

34 NATURAL KINDS FOR THE SCIENTIFIC REALIST

Matthew H. Slater

1 Introduction

To a first approximation, natural kinds are the objects of successful scientific classification. When physicists announced the discovery of the Higgs boson, for example, they were announcing their discovery of a natural kind of thing (or phenomenon) whose properties were roughly as predicted by physical theory. This was a notable confirmatory success of the dominant model of particle physics: the world seemed to be structured as the theory said it was.

Examples like this abound in the natural sciences. We often apparently discover new kinds of things or new facts about known kinds, and we sometimes improve on our classifications by discovering that our concepts and categories did not correspond to the structure of reality as we thought it was. Thus, for Scientific Realists, the notion of a natural kind can play an important role in the philosophical picture of how our scientific theories relate to an objective, mind-independent world.

2 Naturalness

The concept of a natural kind has a long and contentious pedigree, but the basic idea can be illustrated by thinking about some contrasts. The first contrast is between individual things and kinds of things. Here’s a panda; there’s another panda. What do they have in common? Many things, perhaps; but the most salient response is that they are both of the same kind – they’re both pandas. We thus organize a lot of our thought and talk by dividing things up into different conceptual categories. The sciences in particular are rife with such efforts to divide the world into categories – more on this soon. But the sciences are not alone in this.

Here we encounter a second contrast. Consider the classic films Psycho and The Silence of the Lambs; like our pandas, these individuals can be placed in a few common categories: they are both ‘thrillers’, they are both R-Rated movies, and so on. But there appears to be an important difference between the kinds thrillers and pandas. Only the latter is a natural kind – a category with some objective existence – the former appears to be a by-product of our more or less arbitrary way of dividing up the world.

There are a number of ways we might try to draw this second distinction. One strategy might use scientific attention. ‘Thriller’ does not name a scientific category; ‘panda’ does. On further reflection, however, this idea is problematic. Are we talking about actual or merely possible

scientific attention? The former is clearly too conservative – at least for a scientific realist – in that we often take science to be in the business of uncovering preexisting divisions in nature. Pandas were a natural kind (if they in fact are a natural kind) before anyone had seen a panda. The latter strategy of seeing naturalness as dependent on what science could find interesting is more tempting, but it is difficult to work out precisely. Are we really so sure that future psychologists might not formulate scientific theories that involved thrillers as important categories (say, in their capability of producing certain physiological/emotional responses in us)?1

Another suggestion would be to focus on the sense that thrillers are artifacts: they’re things that people make, not items in the ‘natural world’. Whereas in science we speak comfortably of discovering new kinds of things – pandas, electrons, typhoid – it’s at least pretty strained to think of ourselves (or humanity) as ‘discovering’ thrillers. We invented them. But here too the distinction is not as clean as paradigm cases suggest. Some chemical elements and materials – such as the element technetium or the patented material Kevlar® – are human artifacts in the sense that they do not exist in nature but by our intervention. Yet despite needing to be produced in a laboratory, they seem just as much to be natural kinds as gold or hydrogen in the role they play in science (LaPorte 2004; Hacking 2007a: 204).2

What is the role that natural kinds play in science, then? This is a bigger question that we will need to address piecemeal over this chapter, but starting on it raises a third gloss on what makes a kind natural. Scientific theories often purport to describe general features of the world. This is most obvious when the sciences are at their most ambitious: uncovering laws of nature. For example, the law that electrons have a charge of -1.6 × 10⁻¹⁹ coulombs is a description not just of a sample of electrons studied in a lab in Chicago but of all electrons – of the kind electron. Given further laws governing the ways that charges relate to forces, a physical theory enables understanding and prediction of how electrons will interact with other kinds of particles. There is thus a straightforward sense in which many scientific theories are about kinds of things – or processes, events, properties, and so on more generally – by being in the business of producing a certain kind of generalized knowledge. And this is one of the reasons why we are inclined to think of technetium as a natural kind, despite the fact that it is a human artifact: we can determine laws of nature that govern how samples of technetium behave such that when a different quantity of it is produced for the first time in another lab, we rightly bring to the table some robust expectations for how it will behave. While this gloss on what it is to be a natural (rather than socially constructed) kind might work for physical kinds, it is contentious whether natural kinds should be thought of as the subject matter of natural laws (Kitcher 1984: 315–316; Lange 2000: 210).
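To see the generality at issue in a worked instance – an illustrative aside using elementary electrostatics, not an argument drawn from this chapter – combine that charge with Coulomb’s law, which gives the repulsive force between any two electrons a distance r apart:

\[
F \;=\; \frac{1}{4\pi\varepsilon_0}\,\frac{q_e^2}{r^2}, \qquad q_e = -1.6 \times 10^{-19}\ \text{C}.
\]

Two electrons a nanometre apart, for instance, repel one another with a force of roughly 2.3 × 10⁻¹⁰ N – and this holds for every pair of electrons whatsoever, which is precisely the sense in which the law concerns the kind rather than any particular sample.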
As we will see, there are other ways of identifying the role of natural kinds in science (or in systematic inquiry more generally) that do not require the tight connection with laws – something that will come as welcome news to those who wish to view species as natural kinds and who yet doubt that biological taxa are governed by distinctively biological laws (Beatty 1995; Rosenberg 2001; cf. Lange 2004). Ditto for many of the special sciences – even social sciences – in which laws play little role.

However we work this out precisely, the idea of natural kinds’ ‘mind-independence’ will loom large. And here we can draw a useful distinction between causal independence from human activity and conceptual independence which may help resolve some of the puzzles about naturalness. The existence of technetium or Kevlar or perhaps of social kinds (like races and refugees) is a causal artifact of human activity. These kinds are, to use a once-trendy phrase, social constructions, as are the systems of classification we use to sort them into categories. But once made, even the social world has an existence that is to some degree independent of our thought and talk. So says the Realist – to whom we now turn.


3 Scientific realism and realism about kinds

The core of Scientific Realism is a general and commonsensical thesis: there’s the world and then there’s us; the features of the world enjoy a substantial independence from our thought and talk about them. In other words, we don’t make the world what it is by merely forming beliefs about it.3 Of course, as we discussed, we can certainly influence the world – causally, by altering it, or just in the sense that human beliefs are part of the world. Peter Godfrey-Smith encapsulates this thought in his sophisticated version of Realism:

Common-sense Realism Naturalized: We all inhabit a common reality, which has a structure that exists independently of what people think and say about it, except insofar as reality is comprised of thoughts, theories, and other symbols, and except insofar as reality is dependent on thoughts, theories, and other symbols in ways that might be uncovered by science. (2003: 176)

This amounts to the thesis that science has a potential subject matter. The world – including those aspects of reality that are comprised of or influenced by our thoughts – exists for us to investigate; moreover, the thesis that the world has a repeatable structure is part of what makes it tractable for science to investigate. The success of science, in this sense, might be thought to depend on there being natural kinds for our scientific theories to latch onto.

So far, so good. But the Scientific Realist will need to go further. The degree to which this reality is independent of our thought and talk raises the possibility that the descriptions stemming from these investigations may be mistaken. What stance should we take on this matter? The Scientific Realist is optimistic, suggesting that it is a reasonable goal of science to uncover facts about reality. This optimism is generally founded in the sense that recent scientific theories have enabled an extraordinary degree of success at predicting and controlling a world that we did not make. On this view, mature scientific theories can reasonably be taken to provide accurate descriptions of reality. Of course, this is not to say that science is infallible or even that it is in a position to report on the whole of reality (whatever that would mean). Scientific Realists must temper their optimism with the knowledge that plenty of scientific theories have fallen by the wayside – a fact to which we will return in the next section.

Thus, we may think of Scientific Realism as comprising two main tenets: the metaphysical thesis about the existence of a largely independent reality with a tractable structure discussed earlier and an epistemic thesis concerning the proper attitude we should have concerning the sciences’ access to this reality. To these, a third, semantic thesis is sometimes added: that mature scientific theories consist, in part, in literal descriptions of the world. As Psillos puts it, the Scientific Realist

takes scientific theories at face-value, seeing them as truth-conditioned descriptions of their intended domain, both observable and unobservable. Hence, they are capable of being true or false. Theoretical assertions are not reducible to claims about the behaviour of observables, nor are they merely instrumental devices for establishing connections between observables. The theoretical terms featuring in theories have putative factual reference. So, if scientific theories are true, the unobservable entities they posit populate the world. (1999: xvii)4

The metaphysical and semantic components of Scientific Realism establish a prima facie connection with the concept of natural kinds. Taking scientific theories at face value involves accepting

their claims about classes of things as true descriptions of reality. So to return to our previous example, when a mature and successful theory refers to the properties of electrons, the Realist accepts that there really is this kind of thing, independent from our theorizing, with properties described by the theory. Indeed, in Psillos’s telling, the metaphysical thesis focuses on natural kinds, asserting “that the world has a definite and mind-independent natural-kind structure” (ibid.).

Enthusiasts of natural kinds have appropriated a famous metaphor from Plato that captures this idea (Phaedrus, 265e). When scientists are engaged in the business of discovering and characterizing different kinds of things, we might think of them on the model of butchers parceling up an animal. Good butchers, it is said, keep their knives sharp by cutting only at the joints, the physiological discontinuities. Reality, in this metaphor, is like the animal under the knife. It features natural discontinuities along which our best scientific theories may ‘carve’; in doing so, they truly describe the world. By contrast, when the sciences have invoked categories such as ‘aether’, ‘phlogiston’, or ‘caloric’ – substances that did not in fact correspond to any natural discontinuities – they erred. They missed the natural discontinuities, sawing with difficulty through ‘reality’s bones’.

Plato’s metaphor and talk of the “structure of reality” plainly evoke realism about natural kinds. And it seems natural for a realist about natural kinds to accept the broad metaphysical and semantic commitments associated with Scientific Realism.5 Must they accept the epistemic commitment? This is not as clear. One might, after all, accept that reality has some independent natural kind structure without evincing much optimism that we are very good at uncovering it. Of course, it may seem strange to adopt anything more than agnosticism about the reality of natural kinds if one doubted that science routinely traded in accurate descriptions of reality in both its observable and unobservable aspects; but it is a logical possibility.

What about the reverse direction? Do the metaphysical and semantic commitments of Scientific Realism demand that we accept Realism about natural kinds? This issue is more complex, turning on nuances of our conception of the Scientific Realist’s theses and our precise understanding of what natural kinds are. We will explore some of these issues in more detail in a later section, but for now we can illustrate at least the localized possibility of a Scientific Realist disavowing natural kinds by considering the debates over the metaphysical status of biological species.

I used the example of pandas previously to illustrate the concept of a natural kind. But an influential account of species took root in the 1970s that rejected the thesis that species were natural kinds at all. Reacting, in part, against the idea that natural kinds could be given precise membership conditions – that there was an essence to being, say, a panda – the biologist Michael Ghiselin and philosopher David Hull proposed that species were instead individuals – particular, concrete objects.
Similarity, they say, is a red herring when it comes to systematics; and natural kinds are all about similarity.6 Rather than possessing certain intrinsic properties that made some organisms pandas, what they shared was simply a connection to a particular chunk of the tree of life (Ghiselin 1974; Hull 1978). And insofar as the tree of life has an existence and topology independent of our classificatory whims, the species-as-individuals metaphysics would appear to be fully compatible with the commonsense realism described already. It would seem at first, then, that the advocate of species-as-individuals can accept all three tenets of Scientific Realism without committing herself to natural kinds. Much of what science does is concerned with learning about particular things rather than kinds of things. It remains to be seen, however, whether such commitment can be avoided more broadly. After all, in the course of doing the biology and paleontology needed to discern the structure of the history of life, it is plausible that reference to natural kinds of at least the physical or chemical sort will be common. But however this may be, it illustrates that there is at least some independence between realism about natural kinds and Scientific Realism.

437 Matthew H. Slater

4 A challenge to Scientific Realism and realist responses

One of the central arguments against scientific realism begins from the observation that our present scientific theories are built upon the ruins of yesterday’s. As Larry Laudan puts it in an influential article, “what the history of science offers us is a plethora of theories which were both successful and (so far as we can judge) non-referential with respect to many of their central explanatory concepts” (1981: 33). Interesting for our purposes is the fact that many of these concepts purport to refer to kinds of stuff. Laudan mentions the humoral theory of medicine, effluvial theory of static electricity, phlogiston, caloric, and the optical aether (ibid.). The invited conclusion of this argument – what has come to be called the ‘pessimistic induction’ – is that we should not be as impressed as the Realist suggests about the success of our current scientific theories. After all, previous investigators might have offered the same argument for scientific realism concerning caloric and so forth. This appears to give us reason for doubting that we are in any better epistemic situation; we might even suspect that our theories will be just one more stratum in history’s deposit of ruined theories. (See P. Vickers, “Historical challenges to realism,” ch. 4 of this volume.)

Notice that this argument focuses on the epistemic tenet of Scientific Realism, leaving unscathed the metaphysical tenet. An anti-realist of Laudan’s stripe need not deny that we live in a common reality largely independent of our thoughts; she simply argues that history gives us reason to doubt the ability of our sciences to provide us with accurate descriptions of the fundamental aspects of this reality. She holds that the Scientific Realist is inappropriately optimistic.7

However, this pessimism does have consequences for the semantic tenet of Scientific Realism. We have already cited substance terms like ‘caloric’ and ‘phlogiston’ as foils for natural kinds. But unlike terms like ‘gourmet multigrain cracker’, which presumably no one has ever supposed referred to a natural kind, phlogiston was treated as a scientifically important kind of stuff – a putative natural kind. So it looks like the anti-realist’s doubt about the status of our scientific theories as literally true descriptions of the world leads to a corresponding doubt that our present natural-kind terms carve nature at the joints.

There is much to say about this argument, of course; but in the present context, it is worth considering two sorts of responses. One response involves seeking continuity through reference to natural kinds. We turn to this response in the next section, using it as a springboard for discussing the metaphysics of natural kinds. A second response seeks continuity in the mathematical structures of theories as they change, possibly at the cost of relinquishing reference to natural kinds and other unobservables in order to preserve a kind of continuity through scientific change. Consider, for example, the transition from wave optical theories that postulated a physical medium – the luminiferous aether – through which light travelled to Maxwell’s electromagnetic theory, which dispensed with this medium. On the one hand, this was a rather significant change. The theory of light was no longer about how a certain stuff (aether) moved; indeed, it repudiated the existence of the stuff altogether.
But on the other hand, the mathematical formalism describing how light behaved remained remarkably constant. Similar comments can be applied to thermodynamics and caloric (Kitcher 1992).

This strategy – Structural Realism – was introduced by John Worrall (1989) and has since been developed in various flavors.8 In the epistemic flavor, generally attributed to Worrall, Structural Realism concedes to the anti-realist that optimism in the sciences’ ability to tell us about the underlying nature of reality is misplaced. Worrall thus rejects ‘standard scientific realism’. But again, this epistemic stance implies nothing about what natural kinds in fact exist. Ontic Structural Realism, on the other hand, represents a radical metaphysical claim with epistemic consequences: that “realists should believe only in structures described by our best theories

because structure is all there is to reality. . . . The usual talk of objects, they say, is misguided, and engenders fatal metaphysical difficulties. Ontic structuralists are happy to speak of objects, but only as a façon de parler, not to be taken literally” (Chakravartty 2007: 70–71).

At first glance it might seem as though the prospects for a metaphysics of natural kinds might fare no better than objects. After all, how can there be kinds of things without things? And indeed, Ladyman and Ross, advocates of Ontic Structural Realism, suggest that we should “give up the idea that science finds its metaphysical significance in telling us what sorts of things there are” (2007: 93). But it may be possible nevertheless to reconceive an account of natural kinds to accord with structuralist scruples.9 Chakravartty is more optimistic that Structural Realism may be wedded to an ontological picture that includes natural kinds as clusters of causal properties. In this, the realist is merely attempting to make sense of what seems to be “an important feature of what realists take to be a mind-independent reality”: that properties are not “randomly distributed across space-time”; they are “sociable” in systematic ways (2007: 170). As we’ll see, this sort of ‘cluster’ approach has been an important way of thinking about kinds in scientific domains that do not seem to admit of as much law-governed order as physics and chemistry.

5 Realism concerning natural kinds

Let us take a step back. What is so useful about ‘cutting nature at the joints’, supposing that there are any such joints? Why should we care? One response is that doing so is simply part and parcel of doing science. This may have been what Hempel had in mind when he wrote of the two basic functions of the ‘vocabulary of science’: first, “to permit an adequate description of the things and events that are the objects of scientific investigation”; and second, “to permit the establishment of general laws or theories by means of which particular events may be explained and predicted and thus scientifically understood; for to understand a phenomenon scientifically is to show that it occurs in accordance with general laws or theoretical principles” (1965: 139). So it is in that way, as Ladyman and Ross put it, that “the concepts of ‘(natural) kind’ and ‘law’ are yoked together, on the general supposition that natural kinds are what laws are true of” (2007: 290).

Another response has it that accommodating our inferential and explanatory practices to the ‘causal structure of the world’ – in part by attuning our categories to the natural kinds – is crucial to our success in these practices (Boyd 1991; Kornblith 1993; Sankey 1997; Godfrey-Smith 2011). An especially deflationary version of this idea commits to little else about natural kinds, electing to treat them more or less as epistemic categories (we will consider this approach to natural kinds in the final section). But the concept of a natural kind reentered philosophical discussion in the 20th century via a ‘middle-of-the-road’ strategy that packaged ideas about the metaphysics of natural kinds – what reality’s ‘joints’ are – together with a developing theory of how to refer to them. Here I will only sketch the highlights of this oft-told (contested) story as they bear on the connection to Scientific Realism (for details, see, e.g., Hacking 1991).

Let’s start with reference. How do proper names refer to their objects? On one account – Descriptivism – reference is secured by some unique description associated with a name. So, for example, the name ‘Aristotle’ might be associated with the description ‘Plato’s most distinguished pupil’. But as Kripke pointed out, Descriptivism has odd consequences when we consider certain modal questions about Aristotle: for example, could Aristotle have studied sculpture instead of philosophy? At first blush, the obvious answer is ‘yes’; but if we define Aristotle as Plato’s most distinguished (philosophical) pupil, we seem forced to treat it as a trivial, linguistic truth that Aristotle studied philosophy – something that could not have been otherwise. That’s quite implausible; of all people, Aristotle could have pursued all manner of careers.


To accommodate these and other intuitions about how names work in merely possible or epistemically imperfect situations, Kripke pressed an alternative view that used descriptions incidentally (if at all). Reference, he argued, was better treated as a direct matter (Kripke 1980). Someone pointed to Aristotle and said something like (and I translate) “Let’s call him Aristotle” and thereby named a certain baby ‘Aristotle’. That usage has been transmitted down to us by a causal chain in such a way that we could be completely mistaken about our descriptions of Aristotle and still successfully refer to that person. In contrast, if names refer exclusively via descriptions, errors in description become errors (or outright failures) in reference.

We can see this same dynamic play out in the case of natural-kind terms. How do they refer to natural kinds? Descriptivism is perhaps more tempting in this domain, as characterizing the kinds of things in the world is a major part of what science does. But it also seems plausible that we can successfully refer to a certain kind of stuff (say) before we begin any kind of systematic attempt to characterize it, in more or less the same manner that we can start using a proper name like ‘Aristotle’ to name a screaming child whose philosophical pedigree and acumen are as yet unknown. Moreover, even when we have a rough and ready description associated with a kind of stuff – for example, that water is clear, potable, a good solvent, that stuff that flows in streams and rivers, and so on – Putnam argues that we can conjure intuitions that this description may not apply to all (liquid) water and might in fact apply to a kind of stuff that is chemically completely different from actual water.10 What matters is whether a given sample is of the same kind as the stuff that was initially ‘baptized’ as water: “A term refers to something if it stands in the right relation (causal continuity in the case of proper names; sameness of ‘nature’ in the case of kinds terms) to these [gestured-at] things” (Putnam 1983: 73). This is the Causal Theory of Reference. Its intuitive appeal rests in part on the sensibility of a distinction between knowing that there is something to which a term refers (by dint of pointing at it and naming it) and understanding the nature or essence of that something (Psillos 1999: 273).

Drawing this distinction and allowing an underlying metaphysics of sameness of ‘nature’ or kind to do much of the work in allowing us to refer to new instances or samples of the same stuff or kinds brings us back to the first of the two responses to Laudan’s pessimistic induction argument distinguished earlier. The problem the Scientific Realist faced, recall, was to make sense of the dismal history of empirically successful scientific theories that were nevertheless incorrect or referentially unsuccessful. But if natural-kind terms find their reference in accordance with the Causal Theory, it looks like we can make sense of changing (improving) descriptions of the same natural kinds through even major scientific changes. Psillos summarizes this line of thought this way:

The causal theory lends credence to the claim that even though past scientists had partially or fully incorrect beliefs about the properties of some causal agent, their investigations were continuous with investigations of subsequent scientists, since their common aim has been to identify the nature of the same causal agent. . . . Insofar as successor theories afford better descriptions of the same causal agent and its relations with other causal agents, one can conclude that science has improved our understanding of the world. And insofar as successor theories are more truthlike than their predecessors in their descriptions of the nature of these causal agents, one can argue that science has achieved a better approximation to the objective causal structure of the world. (1999: 273–274)

This points to a further way of making sense of the Realist idea that we inhabit a common, objective reality that we can refer to and describe in a variety of ways. The great evolutionary biologist Ernst Mayr was fond of citing evidence of convergence in classificatory systems across

cultures as justification for the reality of biological species, writing, “Nothing convinced me so fully of the reality of species as the observation . . . that the Stone Age natives in the mountains of New Guinea recognize as species exactly the same entities of nature as a western scientist” (1987: 146). Even if naturalists working in different traditions offer different descriptions of these taxa, we can make sense of our referring to the same kinds.

Unfortunately, there are a number of difficulties and unanswered questions for this strategy. One class of difficulties concerns the methodology employed in motivating the Causal Theory in the first place. Putnam relies fairly heavily on intuitions about difficult-to-imagine scenarios – for example, that ‘water’ would not refer to a substance with a different underlying structure which played an identical superficial, causal role on ‘Twin Earth’ (see Putnam 1975). But are we really so sure that this is the right thing to say (Mellor 1977; Dupré 1993)? There are also difficulties getting the Causal Theory to work right. Pointing is ambiguous, even when it’s obvious which object I am intending to name. Suppose we are birding with Mayr and he points to a particular bird, naming it; has he named a species, a genus, a sub-species, a variety, an order?11

Setting aside these questions, there is also a question of whether the Causal Theory enables a plausible and satisfactory rejoinder to the Pessimistic Induction. It might seem like cold comfort for the Scientific Realist if all that remained constant through radical theory change was the fact that we continued to talk about the same kinds, despite being at times radically wrong about their nature (Stanford 2003). This would offer little in the way of assurance, after all, that our current theoretical understanding of these kinds is sound – presumably something that a Realist wants. It can also seem like the Causal Theory of reference is implausibly strong. Consider ‘phlogiston’, introduced by the causal description of that gas (whatever it is) causally involved in combustion. In contrast with the development of theory surrounding, for example, water, electrons, and such, its introduction was entangled with a conspicuous amount of mistaken chemical theory. Granted, Joseph Priestley and the other chemists investigating the phenomena of combustion were causally related to oxygen; as Psillos puts it, “they breathed it and it was causally involved in the experiments they made to investigate combustion”. But the properties that oxygen in fact has were not causally associated with the theory associated with phlogiston (Psillos 1999: 280–281). To get things right, argues Psillos, we must draw upon theoretical descriptions of what sorts of properties are relevant in fixing the reference of a particular term. Exactly what parts of a scientific theory should determine this in general (without backwards-looking ad hocery) remains a matter of continued discussion (Stanford 2003; McLeish 2006).

Let us turn now to the metaphysics of natural kinds and its connection to theories of reference for natural-kind terms. One way of thinking about Putnam and Kripke’s approach to natural-kind terms (cf. Hacking 2007b) is as a revival of the Lockean distinction between nominal and real essence (Locke 1689/1975).
Nominal essences are the descriptions associated with a term (which may be inconstant or about which we might just be wrong); the real essence is the nature of the kind itself – the properties which determine what it is to be a thing of that kind. What is it that makes this glass of water the same stuff as the stuff flowing in that clear stream? Putnam’s answer (picking up on a quick suggestion of Quine 1969): the fact that they share the same underlying structure – something that modern chemistry is in a position to report on. Moreover, it is this structure, presumably, that ‘binds together’ the properties figuring in water’s nominal essence. This suggests a metaphysics of natural kinds focusing on microstructural or otherwise fundamental features of reality. This view came to be called Scientific Essentialism (Bealer 1987) and is sometimes thought to provide a “metaphysic of scientific realism” (Ellis 2001: 2). There are a number of difficulties with this view that we cannot properly survey here, except to point out that one of the major problems facing the Essentialist is the difficulty in applying the sorts of strict, microstructural

approaches to real essences that apparently worked for physical and chemical examples, to biological kinds such as species, as these seemed to defy simple individuation on the basis of a shared ‘genetic structure’ as Putnam posited (see, e.g., Wilson 1999: 190).12 In the next section, we will briefly examine accounts of natural kinds that take a more flexible approach designed to reflect their deployment across a wider range of scientific disciplines.

6 Cluster approaches to natural kinds

One response to the difficulties in application of the essentialist approach simply drops the ambition of construing biological or social categories as natural kinds at all. As mentioned earlier, realists about species have a ready alternative: treat species taxa as individuals. But this strategy is not workable for other biological examples, such as cells and biological macromolecules. Yet we have ample reason for wanting to think of all of these categories as natural kinds in virtue of their contribution to our inferential and explanatory practices (Slater 2009, 2013b).

Accordingly, non-essentialist approaches to natural kinds have been popular among philosophers of biology. Rather than seeking properties strictly shared by all and only the members of a given kind, one might follow Boyd (1991, 1999) in thinking of natural kinds instead as clusters of properties that tend to co-occur (Chakravartty 2007: 170). The allowance for imperfection in clustering is a key element in how this Homeostatic Property Cluster (HPC) account of natural kinds can be seen to apply to the messy and exception-ridden biological world (Wilson, Barker, and Brigandt 2007).

The issues surrounding the metaphysics of HPC kinds get complex fast, however. In the case of essentialist kinds, it was relatively easy to think of natural kinds as certain kinds of intrinsic universals: for example, the property of having the molecular structure represented by the chemical formula ‘H₂O’. It’s less obvious what sorts of things clusters are or what sorts of properties can constitute them; nor is it clear that the Scientific Realist needs to alight on a specific metaphysical approach here in order to explain the success of science. To some extent, such agnosticism played a role in the early formulation of the HPC theory. A key idea for Boyd is what he calls the Accommodation Thesis: “the theory of natural kinds is about how schemes of classification contribute to the formulation and identification of projectible hypotheses. . . . [W]hat the representation of phenomena in terms of natural kinds makes possible, is the accommodation of inferential practices to relevant causal structures” (1999: 147). According to Boyd, these causal structures are the ‘homeostatic mechanisms’ that maintain the coherence of a cluster of properties.

At first glance, anyway, a Scientific Realist should find much to like in Boyd’s account. For we have at least the makings of a robust account of how scientific investigation is related to the structure and patterns in the world without a significant amount of airy metaphysics. But once again, there are open questions about how much metaphysics is enough here. Some philosophers of science and metaphysicians are inclined to push further into the metaphysics of causal properties (see Chakravartty 2007; Hawley and Bird 2011). Others have gone in the opposite direction, downplaying the precise metaphysics of natural kinds and emphasizing instead their epistemic role(s) in scientific inquiry. Khalidi, for example, adopts what he calls a Simple Causal Theory of natural kinds because he finds “that the causal roles of natural kinds are more variegated and diverse than envisaged in Boyd’s HPC theory” (2013: 122; see also Craver 2009); seeing natural kinds instead as ‘nodes in causal networks’ allows us to unify a primarily epistemic approach to kinds with an appropriately metaphysical conception in virtue of which their epistemic utility is realized. P. D. Magnus, by contrast, proposes that a category is a natural kind for a given domain just in case that category “allows the scientific enquiry into [that domain] to achieve inductive and explanatory success” and no other category would do as well (Magnus 2012: 48).


The relativity to domains or disciplines13 raises some thorny issues for the Realist. For it is less clear in this case that our scientific theories refer to a shared reality largely independent of our thought and talk. Ditto for accounts that make room for pluralism concerning the ways we divide up reality. Granted, such pluralists sometimes describe their views as “pluralistic realism” or “promiscuous realism” (Dupré 1983, 1993; Kitcher 1984), but one might doubt the coherence of this conjunction of ideas. How is it, one might ask, that the world “can be divided up into kinds in numerous different ways, and the results are all equally real” (Hull 1999: 25)? In what sense would such incompatible sets of divisions correspond to ‘the’ structure of the objective world (cf. Waters 2017)? Scientific Realists of a more robustly metaphysical bent face the challenge here of connecting complex and sometimes pluralistic classificatory practices with an ontology that affords the mind-independence of the referents of the resulting categories. But even pluralists who place less emphasis on the metaphysical details of an account of natural kinds need an account of how their pluralism can avoid becoming an ‘anything goes’ picture of scientific classification (Kitcher 1987).

These and other issues may convey the sense in which the subject of natural kinds is an important and fascinating nexus at which metaphysical and epistemological issues concerning science come together. What to say about these issues, how to approach them, and how to see them as connected to the debate about Scientific Realism remain in serious discussion.

Further reading

Metaphysically oriented discussions of natural kinds may be found in the work of such Scientific Essentialists as B. Ellis, Scientific Essentialism (Cambridge: Cambridge University Press, 2001) and A. Bird, Nature’s Metaphysics: Laws and Properties (Oxford: Oxford University Press, 2007). A classic work on natural kinds and the unity of science that critically discusses essentialism is J. Dupré’s The Disorder of Things (Cambridge: Harvard University Press, 1993); for more on the Causal Theory of Reference and the role of natural kinds in theory change, see J. LaPorte’s Natural Kinds and Conceptual Change (Cambridge: Cambridge University Press, 2004). Other fine, recent book-length discussions of natural kinds that attend to scientific realism to different degrees are P. D. Magnus’s Scientific Enquiry and Natural Kinds: From Planets to Mallards (London: Palgrave Macmillan, 2012) and M. A. Khalidi’s Natural Categories and Human Kinds (Cambridge: Cambridge University Press, 2013).

Notes
1 Indeed, as I think about it, I can’t say I’m confident that they haven’t. Perhaps this just means that thriller doesn’t in fact represent a good contrast with panda. But even for more obvious contrasts – say gourmet crackers or the items currently sitting on my desk – anyone with even a meagerly developed philosophical imagination could probably cook up some far out scenario in which an obscure and perhaps misguided science takes a shine to even such obviously subjective or gerrymandered categories.
2 One response to this observation might simply be to drop the ‘natural’ modifier and follow philosophers like Whewell (1840) or Mill (1872) in speaking of ‘real kinds’ or simply ‘Kinds’. For helpful histories, see Hacking (1991) and Magnus (2014). Though a question would still, of course, remain of how to distinguish such Kinds from miscellaneous categories.
3 This view (or something like it) has also been called Metaphysical Realism and has been somewhat difficult to characterize precisely; see Putnam’s many writings (e.g., his 1981) and Khlentzos (2011) for discussion of the controversies surrounding it, which I will largely ignore in this chapter.
4 Some might prefer a more lenient ‘accuracy’ condition in order to make room for idealization and other standards of success in the sciences’ representation of reality (see, e.g., Godfrey-Smith 2003: 176–177; Elgin 2004). I set this matter aside in the present discussion.


5 Laudan makes the connection to the semantic dimensions of Scientific Realism this way: “To have a genuinely referring theory is to have a theory which ‘cuts the world at its joints’, a theory which postulates entities of a kind that really exist” (1981: 24).
6 Quine (1969) thought that this is what would make natural kinds ultimately dispensable in general, as vague notions of overall similarity would ultimately give way to more specific notions from the relevant sciences or be eliminated altogether. The philosophical collective wisdom has not followed him to this extreme.
7 As usually understood, the pessimistic induction generalizes from the history of past theories to our own present-day theories. In Kyle Stanford’s (2006) ‘Problem of Unconceived Alternatives’, by contrast, the induction ranges over past theorists, pointing to the conclusion that we are no better than they were in being able to conceive of all relevant alternative explanations of natural phenomena between which to choose. See also K. Stanford, “Unconceived alternatives and the strategy of ostension,” ch. 17 of this volume.
8 For an overview, see Ladyman’s (2014) entry in the Stanford Encyclopedia of Philosophy and I. Votsis, “Structural realism and its variants,” ch. 7 of this volume.
9 See Ladyman and Ross (2007: 291) for some suggestive discussion on this front.
10 The famous example of ‘Twin Earth’ is discussed in Putnam (1975); for more on this thought experiment and discussion of some of the specific contentions Putnam makes about the definition of water, see P. Needham, “Scientific realism and chemistry,” ch. 27 of this volume.
11 For discussion of these sorts of difficulties, see Devitt and Sterelny (1987: 72–75); Stanford and Kitcher (2000); and LaPorte (2004).
12 For contrasting views, see Wilkerson (1995) and Devitt (2008); Slater (2013a) discusses alternative metaphysics of species that either abandon or modify essentialism about natural kinds.
13 Something that was also a feature of Boyd’s approach: “the naturalness of a natural kind . . . is discipline relative” (1999: 148). See also Slater (2015) for an ‘adjectival approach’ to natural kinds that attempts to unify cluster accounts like Boyd’s with traditional essentialist accounts.

References
Bealer, G. (1987) “The Philosophical Limits of Scientific Essentialism,” Philosophical Perspectives 1, 289–365.
Beatty, J. (1995) “The Evolutionary Contingency Thesis,” in G. Wolters and J. G. Lennox (eds.), Concepts, Theories, and Rationality in the Biological Sciences, Pittsburgh: University of Pittsburgh Press, pp. 45–81.
Bird, A. (2007) Nature’s Metaphysics: Laws and Properties, Oxford: Oxford University Press.
Boyd, R. (1991) “Realism, Anti-Foundationalism and the Enthusiasm for Natural Kinds,” Philosophical Studies 61, 127–148.
——— (1999) “Homeostasis, Species, and Higher Taxa,” in R. A. Wilson (ed.), Species: New Interdisciplinary Essays, Cambridge: MIT Press, pp. 141–185.
Chakravartty, A. (2007) A Metaphysics for Scientific Realism, Cambridge: Cambridge University Press.
Craver, C. F. (2009) “Mechanisms and Natural Kinds,” Philosophical Psychology 22(5), 575–594.
Devitt, M. (2008) “Resurrecting Biological Essentialism,” Philosophy of Science 75(3), 344–382.
Devitt, M. and Sterelny, K. (1987) Language and Reality, Cambridge: MIT Press.
Dupré, J. (1983) “The Disunity of Science,” Mind 92(367), 321–346.
——— (1993) The Disorder of Things, Cambridge: Harvard University Press.
Elgin, C. Z. (2004) “True Enough,” Philosophical Issues 14, 113–131.
Ellis, B. (2001) Scientific Essentialism, Cambridge: Cambridge University Press.
Ghiselin, M. (1974) “A Radical Solution to the Species Problem,” Systematic Zoology 23, 536–544.
Godfrey-Smith, P. (2003) Theory and Reality, Chicago: University of Chicago Press.
——— (2011) “Induction, Samples, and Kinds,” in J. K. Campbell, M. O’Rourke and M. H. Slater (eds.), Carving Nature at Its Joints, Cambridge, MA: MIT Press, pp. 33–52.
Hacking, I. (1991) “A Tradition of Natural Kinds,” Philosophical Studies 61, 109–126.
——— (2007a) “Natural Kinds: Rosy Dawn, Scholastic Twilight,” in A. O’Hear (ed.), Philosophy of Science: Royal Institute of Philosophy Supplement (Vol. 61), Cambridge: Cambridge University Press, pp. 203–239.
——— (2007b) “Putnam’s Theory of Natural Kinds and Their Names Is Not the Same as Kripke’s,” Principia 11(1), 1–24.
Hawley, K. and Bird, A. (2011) “What Are Natural Kinds?” Philosophical Perspectives 25, 205–221.

444 Natural kinds for the scientific realist

Hempel, C. G. (1965) “Fundamentals of Taxonomy,” reprinted in his Aspects of Scientific Explanation: And Other Essays in the Philosophy of Science, New York: The Free Press, pp. 137–154.
Hull, D. L. (1978) “A Matter of Individuality,” Philosophy of Science 45, 335–360.
——— (1999) “On the Plurality of Species: Questioning the Party Line,” in R. A. Wilson (ed.), Species: New Interdisciplinary Essays, Cambridge: MIT Press, pp. 23–48.
Khalidi, M. A. (2013) Natural Categories and Human Kinds, Cambridge: Cambridge University Press.
Khlentzos, D. (2011) “Challenges to Metaphysical Realism,” in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2011 ed.). URL: http://plato.stanford.edu/archives/spr2011/entries/realism-sem-challenge
Kitcher, P. (1984) “Species,” Philosophy of Science 51, 308–333.
——— (1987) “Ghostly Whispers: Mayr, Ghiselin, and the ‘Philosophers’ on the Ontological Status of Species,” Biology and Philosophy 2(2), 184–192.
——— (1992) The Advancement of Science, Oxford: Oxford University Press.
Kornblith, H. (1993) Inductive Inference and Its Natural Ground, Cambridge: MIT Press.
Kripke, S. (1980) Naming and Necessity, Cambridge: Harvard University Press.
Ladyman, J. (2014) “Structural Realism,” in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2014 ed.). URL: http://plato.stanford.edu/archives/spr2014/entries/structural-realism
Ladyman, J. and Ross, D. (2007) Every Thing Must Go: Metaphysics Naturalized, Oxford: Oxford University Press.
Lange, M. (2000) Natural Laws in Scientific Practice, New York: Oxford University Press.
——— (2004) “The Autonomy of Functional Biology: A Reply to Rosenberg,” Biology and Philosophy 19, 93–109.
LaPorte, J. (2004) Natural Kinds and Conceptual Change, Cambridge: Cambridge University Press.
Laudan, L. (1981) “A Confutation of Convergent Realism,” Philosophy of Science 48, 19–49.
Locke, J. ([1689] 1975) Essay Concerning Human Understanding (P. H. Nidditch, ed.), Oxford: Oxford University Press.
Magnus, P. D. (2012) Scientific Enquiry and Natural Kinds: From Planets to Mallards, London: Palgrave-Macmillan.
——— (2014) “No Grist for Mill on Natural Kinds,” Journal for the History of Analytic Philosophy 2(4), 1–15.
Mayr, E. (1987) “The Ontological Status of Species: Scientific Progress and Philosophical Terminology,” Biology and Philosophy 2, 145–166.
McLeish, C. (2006) “Realism Bit by Bit: Part II. Disjunctive Partial Reference,” Studies in the History and Philosophy of Science 37(2), 171–190.
Mellor, D. H. (1977) “Natural Kinds,” British Journal for the Philosophy of Science 28, 299–312.
Mill, J. S. (1872) A System of Logic (8th ed.), London: Longman, Green, and Co. Reprinted by Spottiswoode, Ballantyne & Co., 1965.
Psillos, S. (1999) Scientific Realism: How Science Tracks Truth, London: Routledge.
Putnam, H. (1975) “The Meaning of ‘Meaning’,” reprinted in his Mind, Language and Reality: Philosophical Papers (Vol. 2), Cambridge: Cambridge University Press.
——— (1981) Reason, Truth and History, Cambridge: Cambridge University Press.
——— (1983) “Reference and Truth,” reprinted in his Realism and Reason: Philosophical Papers (Vol. 3), Cambridge: Cambridge University Press, pp. 69–86.
Quine, W.V.O. (1969) “Natural Kinds,” reprinted in his Ontological Relativity and Other Essays, London: Columbia University Press, pp. 114–138.
Rosenberg, A. (2001) “How Is Biological Explanation Possible?” British Journal for the Philosophy of Science 52, 735–760.
Sankey, H. (1997) “Induction and Natural Kinds,” Principia 1, 239–254.
Slater, M. H. (2009) “Macromolecular Pluralism,” Philosophy of Science 76(5), 851–863.
——— (2013a) Are Species Real? London: Palgrave-Macmillan.
——— (2013b) “Cell Types as Natural Kinds,” Biological Theory 7(2), 170–179.
——— (2015) “Natural Kindness,” The British Journal for the Philosophy of Science 66(2), 375–411.
Stanford, P. K. (2003) “Pyrrhic Victories for Scientific Realism,” The Journal of Philosophy 100(11), 553–572.
——— (2006) Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, New York: Oxford University Press.
Stanford, P. K. and Kitcher, P. (2000) “Refining the Causal Theory of Reference for Natural Kind Terms,” Philosophical Studies 97(1), 99–129.
Waters, C. K. (2017) “No General Structure,” in M. H. Slater and Z. Yudell (eds.), Metaphysics and the Philosophy of Science: New Essays, New York: Oxford University Press, pp. 81–107.
Whewell, W. (1840) Philosophy of the Inductive Sciences, London: Parker.
Wilkerson, T. E. (1995) Natural Kinds, Brookfield: Ashgate Publishing Company.
Wilson, R. A. (ed.) (1999) Species: New Interdisciplinary Essays, Cambridge: MIT Press.
Wilson, R. A., Barker, M. J. and Brigandt, I. (2007) “When Traditional Essentialism Fails: Biological Natural Kinds,” Philosophical Topics 35(1–2), 189–215.
Worrall, J. (1989) “Structural Realism: The Best of Both Worlds?” Dialectica 43, 99–124.

INDEX

abductive inference 40–42 Backus, George 340 abstract equilibrium theory 371 Bacon, Francis 188 accumulation of knowledge 188 Bader, R. 354 Achinstein, Peter 151, 157–158, 161, 206 Bain, J. 110, 114 active realism 184 Baker, A. 203, 415 Aguirre, A. 313, 315 Balaguer, M. 413–414 aim at truth in science 238–241 Barnes, Barry 261, 268 Albert, D. Z. 403 Barnes, Eric 38, 41 Alspector-Kelly, M. 231 Bayesianism 53–54, 65 ambitious realism 428–429 Bell, John 295 ammonoids 321–322; being realist about Belot, G. 69, 342 323–324; realism about tendency toward Benacerraf, Paul 409 greater ornamentation in 327–329; realism Bergson, Henri 8 about trend toward greater ornamentation in Berkeley, George 181 324–326 Berlin Group 9 Anjum, I. 401 Berzelius, Jöns J. 352 antirealism 44–45, 157, 438–439; accounts of Biedenharn, L. C. 56 scientific progress 196–197; cognitive science Big Bang see primordial cosmology and 357–367; economics and 374–377; Binmore, K. 376 epistemic stances 227–230; instrumentalism as Bird, Alexander 194–195, 426 89; local response to historical 157–160; neural Black, Joseph 349 computations and 359–360; perennial debate Blackburn, Simon 221, 230, 365 225–227; see also entity realism; unconceived black holes 122 alternatives Block, N. 358 applied general equilibrium theory 371 Blondlot, P.-R. 195 arealism, naturalistic 415–417 Bloor, David 261–262, 264–272 Aristotle 80, 345, 348–351, 354, 440 Bohr, Niels 55, 183 artificial viscosity 252–253 Boltzmann, Ludwig 185, 396 Asay, J. 72 Borel, Emile 376 atomic theory 346–347; Dalton’s 217, 219, 347, Born Rule 295–297 351–354; Perrin and 154–156 Boucher, S. 227 auctions 376 Bouguer, Pierre 335–336 Aufbau 10, 14 Boulton, Matthew 180 Avogadro’s number 155–157, 159, 351–352 Boyd, Richard 23–24, 67, 85, 146, 152; on Azhar, F. 316 abductive reasoning 40–41; ambitious realism Azzouni, Jody 102–103 and 428; HPC theory 442

Boyle, Robert 193, 243–244, 346 conjunction-of-parts 137–141 Boyle-Mariotte law 193 conservationist pluralism 183–184 Boyle’s law 243–244 constructive empiricism 85, 97, 220, 384, 386; Brandon, R. N. 322, 327–329 economics and 373–374; local response to 157; Bueno, O. 110, 115 observation and evidence and 425–426; see also Bueno, Otávio 227 empiricism Bullen, K. 341 Contemporary Selective Scientific Realism (CSSR) Burge, T. 359 50–51; challenges to 51–52; selective realist strategy 54–56 Callender, C. 43–44, 49, 154, 160–161 Contessa, G. 92 Cannizzaro, Stanislao 352–353 contexts of use 172 Carnap, Rudolf 7–8, 13, 17, 31, 100, 416; on continuity in scientific realism 24–25 instrumentalism 92; Kaila on 14; on the conventional decision 12 Ramsey-sentence 21–22; Reichenbach on 10; convergent realism 27–28, 197 rejection of metaphysics 97 Copernican Revolution 165–166, 169, 188 Carnap-sentence 22 correspondence theory of truth 78–79, 385; Carroll, S. 313 modest 391–392; see also truth Cartesian epistemology 263–264 cosmology see primordial cosmology Cartwright, Nancy 120, 126–127, 129–130, 206, Craig, William 21–22 376; on physics laws 178; on robustness of causal Craig’s theorem 21–22, 61 inference 128 Crick, Francis 239 Cassirer, Ernst 108 crystalline spheres 172–173 Catastrophists 214–215 Cusanus, Nicholas 189 causal dependence 363 Cuvier, Georges 214 causal inference, robustness of 128–129 causal realism 129 da Costa, N. 109–110 Cavendish, Henry 335, 351 Dalton, John 39, 217, 219, 345, 346–347, 351–354 Cevolani, G. 138–140, 195 Darwin, Charles 193, 201, 292 Chakravartty, Anjan 43–44, 117, 130, 214, 245, 304, Davidson, Donald 80 405; on dispositionalism 401; on perspectivism Dawid, Richard 285, 288 169; on scientific realism 84; on semirealism debunking 377 194; sociability-based pluralism 173–174; on deflationary theory 384, 385 underdetermination 68 Demopoulos, W. 115–116 Chalmers, A. 346–347 Dennett, D. C. 326, 358 Chang, H. 50, 54, 56, 351 Der logische Aufbau der Welt 8–9 chemistry 354; Dalton’s atomism in 217, 219, 347, Descartes, René 188 351–354; introduction to 345–348; “water” descriptivism 439 preserved its extension 348–351 Devitt, Michael 121, 214, 363 Churchland, P. M. 358 diachronic perspectivism 166–168 Clarke, Steven 129 direct representation, modeling as 241 Cleland, C. 322 disjunction-of-possibilities 137–141 CMB see primordial cosmology divide et impera strategy 29–30; see also pessimistic cognitive science: and dependence of the parts versus induction the whole on minds 365–367; dependence on DNA model 239 second-order mental states in 365; dependence “Does the Sociology of Science Discredit on the enquirer versus the subject in 364–365; Science?” 263 introduction to 357; kinds of realism and Dorling, J. 53 357–360; puzzle about mind independence and Douglas, Heather 197 362–364; solutions to the puzzle in 364–367; why Douven, I. 246 care about mind independence and 360–362 Duhem, P.M.M. 91, 156, 188, 196, 212–213, 353 coherence theory of reality 182 Duhem-Quine Thesis 62–64 Collins, Harry 261, 268 Dummett, Michael 323–324 Colyvan, Mark 203, 413 Dupré, John 173–174, 182 common core of theory 125 Duvendack, P.-J. 377 conceptual pluralism 193 dynamic stochastic general equilibrium models confirmational holism 207 (DSGE) 374 confirmation theory 53–54 Dziewonski, A. 341

Earman, J. 62 evolution, theory of 193, 201 earth sciences: earth models 340–341; introduction “Existential Hypotheses” 12 to 333–334; realism and 342; seismological era expected truthlikeness 142–143 336–340; before seismology 334–336 Experience and Prediction 10 economics: awkward fit of general realism debates experimental realism 179–180 in 370–374; introduction to 369–370; scientific explanationist defense of realism 20, 206–208 realism, and unrealistic models 374–377; social explanations 125–126; realist commitments of constructivist arguments in 377–378 206–208; redundancy of 126–128 Egg, M. 214 explanatory reasoning: context on 201–203; Ehrenhaft, Felix 265–267, 272 explanations’ realist commitments and 206–208; Einstein, Albert 78–80, 156, 173, 193, 407–408 IBE and realist arguments and 203–206; electrons 123–124 introduction to 200–201 Elgin, C. Z. 86 externalism and reliabilism 422–425, 431 eliminativism 111–112 external world metaphysics 8–9 Ellis, Brian 121 Empirical Equivalence Thesis (EET) 60–65 Fahrbach, L. 53 empirical realism 7; Feigl’s semantic/pragmatic Falkenburg, B. 281 realism 12–14, 17; Kaila’s invariantism 14–16; fallibilism 189–194, 195 Reichenbach’s probabilistic realism 9–11, 17; false models 257–259 three varieties of 9–16 Feigl, Herbert 12–14, 17, 21 empirical reality 9 Festa, R. 138–140 empiricism: constructive 85, 97, 157, 220, 384, Feyerabend, Paul 79, 91, 121, 226 386, 425–426; introduction to 96–98; levels of Field, Hartry 203, 409, 412–413 observation 102–103; logical 7–8, 15, 17, 229; final theory claims 287–289 roles of the observable/unobservable divide, and Fine, Arthur 184, 214, 323, 388–389; natural its troubles, in 98–102 ontological attitude (NOA) 230, 385–387 Encyclopaedia Britannica 87 Fitzpatrick, Simon 154, 158 Entailment Thesis (ET) 60, 65–68 Fodor, J. A. 358–359 entity realism 43, 194; high-energy physics versus Frank, Philipp 13 281–282; manipulability as necessary for reality Fraser, D. 62 and 121–123; manipulability as sufficient French, S. 109–110, 113; on eliminativism for reality and 123–125; from manipulation 111–112; on radicalism 111 to explanation 125–126; redundancy of Fresnel, Augustin-Jean 39, 43, 292 explanations 126–128; robustness of causal Friedman, M. 115–116 inference and 128–129; from theories to entities Frost-Arnold, Greg 38, 204–205 in 120–121; today 129–130 epistemic and ontic variants in structural realism Galilean strategy 44, 157, 160, 206, 242–243, 256 110–112 Galileo 39, 157, 172, 188 epistemic phase of semantic realism 20–21 Gall, Josef 262 epistemic relativism 74–77 game theory 370–371, 376, 378 epistemic stances 227–230; evaluating 231–233; gauge symmetry 280 voluntarism as about beliefs and 233–235 Gelfert, Axel 43, 124 epistemology and scientific realism 430–432; Gemes, K. 138 externalism and reliabilism 422–425; Hume’s general equilibrium theory 370–371 problem 421–422; inference to the best Geoffroy, E. F. 345, 350 explanation (IBE) 125–126, 129, 143–145, germ-plasm 217–218, 222 202–206, 246, 420–421; introduction to Gettier paradox 195 419–420; observation and evidence 425–427; Ghiselin, Michael 437 pessimistic induction 92, 151, 197, 212, 223, Gibbs, J. W. 345, 347 292, 429–430; scepticism 427–430 Giere, Ron 164, 166–168, 245 Esfeld, M. 
112 Gilbert, Freeman 340 Essay Towards a New Theory of Vision, An 181 Gillan, Mark 253 eternal inflation 311–312 global versus local arguments 358; case of Perrin Eulerian geometry 208 and 154–156; discussion of 160–161; global Evans, Denis 253–254 approach and 152–153; going local and evidence and observation 425–427 153–154; introduction to 151; local realist evidential truthlikeness 142–143 approach to Perrin in 156–160

Glymour, C. 66 Hoffman, R. 354 Gödel, K. 410 Homeostatic Property Cluster (HPC) 442 Godfrey-Smith, P. 214, 436 Hones, M. J. 282 God’s eye view 170–171, 390 Howson, Colin 39, 53 Goldstein, S. 299 “How to Make Our Ideas Clear” 230 Goodman, Nelson 181 Hoyningen-Huene, Paul 81–82 Gould, S. J. 325 Hull, David 437 Gouvêa, D. 329 Hume’s problem 421–422; reliabilism and 423–424 gravity 335–336 humoral theory of medicine 50 Greene, M. T. 333 Hungerford’s objection 420 Gross, Alan 122–124 group structural realism 283–285 idealization: correctable 242–243; non-correctable Guala, F. 373 243–244 Gutenberg-Richter model 340 incommensurability 79–82 indirect representation, modeling as 239–241 Hacking, Ian 43, 206, 266; charge against constructive indispensability argument for realism 14 empiricism 105; on common core of theory 125; induction 421, 423–424 on debunking 377; definition of scientific realism inference to the best explanation (IBE) 125–126, 84; entity realism 120–121, 130; experimental 129, 202, 420–421, 430–432; ambitious realism realism 179–181; on the observable/unobservable and 429; Hume’s problem 421–422; modeling divide 101–102; realism about electrons 123; and troubles for 246; observation and evidence skepticism of black holes and other theories 122 in 425–426; realist arguments and 203–206; Halley, Edmond 37 reliabilism and 424–425; truthlikeness and Hamming distance 140 143–145 Hardy-Weinberg principle 327–329 inference to the best theoretical explanation Harker, David 42, 56; on phlogiston theory 54; on (IBTE) 126–128; robustness of causal inference underdetermination 68 and 128–129 Hausman, D. 373 inference to the most likely cause (IMLC) Healey, R. 400 126–128; robustness of causal inference and Hebb, Donald O. 410 128–129 Heidegger, Martin 8 inflation, cosmological 307–311; eternal 311–312 Heilbron, J. L. 86 instrumentalism 21–22; common confusions Hellman, Geoffrey 411–412 concerning 87–90; conclusion on 93; defined Hempel, Carl Gustav 13, 190, 292, 413; on the 87–88; as form of anti-realism 89; interrelation theoretician’s dilemma 22 of theses in 85–87; introduction to 84–85; Herschel, William 180 objections to 90–92; Putnam’s argument against Hesse, Mary 20, 25–27 22–23; syntactic 88 Higgs boson 201, 434 internal realism 194 high-energy physics (HEP): versus entity realism intuition 26; realist 49–50, 203 281–282; final theory claims 287–289; first take invariantism 14–16 on general relevance of 281; group structural irrealism 375 realism and 283–285; introduction to 279–281; Irvine, E. 358 realism and nonempirical confirmation and 285; string dualities and 285–287 James, William 232 Hintikka, Jaakko 191 Jeffreys-Bullen model 340 historical challenges to realism 51–53; confirmation Jerkert, Jesper 255–256 theory and overall evidence in 53–54; contemporary selective scientific realism 50–51; Kaila, Eino 14–16 further options for the realist 56–57; preliminaries Kant, Immanuel 9, 12, 16, 165–166, 180, 184; on 48–49; selective realist strategy 54–56 conceptual pluralism 193; on scepticism 200 historical science, realism in: about tendency Kantorovich, Aharon 283–284 toward greater ornamentation in ammonoids Kelvin, Lord (Thomson W.) 87 327–329; about trend toward greater Kepler, Johannes 39, 133–134, 172–173, 188 ornamentation in ammonoids 324–326; Ketland, J. 115 being realists about ammonoids and 323–324; Kincaid, H. 117 conclusion on 329–330; introduction to Kinzel, K. 50 321–323 Kirchhoff, Robert 55

Kitcher, Philip 29, 55–56, 193, 206, 251; Galilean Logistic Neopositivism 14 strategy 44; modest correspondence theory of Lutz, S. 110 truth 391–392; on selective realism 42 Lyons, Timothy 39, 214 Kochan, Jeff 261, 266–267, 272 Lyre, Holger 283–284 Kosso, Peter 176 Krause, D. 112 Mach, Ernst 9, 85–87, 89, 184–185 Kreisel, Georg 408 macroeconomics 371–372 Kripke, Saul 23, 439–441 McCulloch, Warren S. 410 Kuhn, Thomas S. 121, 212, 226, 383; conclusion McKenzie, Kerry 112 on 82; diachronic perspectivism and 166–167; McShea, D. W. 322, 325, 327–329 epistemic relativism and scientific realism 74–77; Maddy, Penelope 369, 410, 415–417 on “exemplars” 227; on fruitfulness 177; on Magnus, P. D. 43–44, 49, 154, 160–161, 214 incommensurability 79–82; introduction to 72; Mäki, U. 373, 375–376 model of scientific change 72–74; on Popper Maldacena, J. 286 196; on shared epistemic values 177; on truth Manchak, J. B. 62 77–79, 384, 387–389 Manhattan Project 252 Kuipers, T.A.F. 138–139, 143 manipulability: to explanation 125–126; as Kukla, A. 63–65 necessary for reality 121–123; as sufficient for Kurnakow, N. S. 345–346 reality 123–125 Martin, J. 310 Ladyman, James 43, 54–56; on eliminativism Maskelyne, Nevil 335 111–112; on metaphysics 400; on phlogiston Massimi, M. 65, 245, 282 theory 113; on physics 116; on radicalism 111 mathematical realism: introduction to 407–409; Lakatos, Imre 142, 146, 178 mathematical truth without mathematical Lam, V. 112 objects 411–414; naturalism, realism, and Lange, M. 400 scientific confirmation in 414–415; as language dependence problem 141–142 naturalistic thin realism or arealism 415–417; Laudan, Larry 20, 27–29, 48–49, 57, 177, 212, naturalizing mathematical ontology in 410–411; 333, 391; as axiological non-realist 196; on rejecting 412–414 challenge to science realism and realist responses mature theory 49; paradigms in 73 438; confutation of convergent realism 197; on Maudlin, T. 299 generating empirical evidence 62–63, 65–66; Maxwell, Grover 30–31, 90 on humoral theory of medicine 50; on the Maxwell, James Clerk 43, 156, 159–160, 166, 216, No Miracles Argument 38; on Peirce 189; on 287, 292–293 pessimistic induction 92, 440; on science as Mayo, D. G. 65 problem-solving activity 196; on success and Mayr, Ernst 440–441 truth 39–40 Meckel, J. F. 56 Lavoisier, Antoine 54, 145, 345–346, 349–351 mediating models 238 Lawvere, F. W. 110 Melia, J. 116 Leibniz, G.W.F. 112 mereological bundle theory 403–404 Leplin, Jarrett 39; on generating empirical meta-induction 25 evidence 62–63, 65–66 metaphysical pluralism 179–181 Le Verrier, Urbain 37 metaphysical realism 7–9, 178–179 Levi, Isaac 190, 192 metaphysical stance 228–229 Levy, Arnon 241 metaphysics 96–97, 202–203, 404–405; dismissal of Lewens, Tim 261, 264–265, 272–273 398–400; introduction to 394–395; rummaging Lewis, G. N. 346 in the toolbox of 400–404; of scientific realism Lewis, Peter 40 441–442 Lipton, Peter 41, 203, 206, 420–422 Mill, J. S. 99 local arguments see global versus local arguments Miller, A. 363 Lodge, O. 86 Miller, David 99; on Popper 191; on truthlikeness logical empiricism 229; heritage 17; introduction 136–137, 140–142 to 7; Kaila on principle of 15; metaphysical Millikan, Robert 265–267, 272 realism and 8 Milne, P. 53 logical positivism 394 mind independence 360–362, 435; puzzle about Logic of Scientific Discovery, The 189–190 362–364 Logik der Forschung 189–190 minimal realism 57, 375

Miracles argument 291–292, 298 Newton, Isaac 37, 62, 79–80, 133–134, 222, 287; Mirowski, P. 378 on consilience of inductions 201; crystalline Misner, C. W. 307 spheres and 172–173; on gravity 335; Kuhn Mitchell, Sandra 173 on 73, 78; Uniformitarianism and 214–215, Miyake, T. 342 219–220 modeling: challenging the realist’s achievement’s Newton-Smith, William 26–27, 200–201, 207 claim 242–246; correctable idealizations Nietzsche, F. 165 242–243; as direct representation 241; Niiniluoto, I. 14, 141, 192 earth 340–341; false 257–259; as indirect No Miracles Argument (NMA) 23, 37–39, representation 239–241; introduction to 143, 197, 237, 402, 431; abductive inference 237–238; mediating models 238; and modelers and 40–42; ambitious realism and 428; aiming at truth 238–241; models and 238; explanatory reasoning and 204–206; as global non-correctable idealization 243–244; argument 151–152, 160–161; modeling and perspectivism 244–246; troubles for IBE 246 242–244, 246; novel phenomena and 38–39; model of scientific change, Kuhn’s 72–74 rethinking the evidence of success and 39–40; modest correspondence theory of truth 391–392 success-to-truth rule and 251–252; see also modest partisanship 391–392 truthlikeness Moore, G. E. 385 non-correctible idealization 243–244 Morgan, M. 375 nonempirical confirmation 285 morphisms 110 non-trivial modal extent 361 Morrison, Margaret 127, 129, 169, 245 Norton, J. 55 Morriss, G. 254 motivations in structural realism 112–115 observation: evidence and 425–427; levels of multiverse theory 311–316 102–103; what is special about 103–105 Mumford, S. 401 observational vocabulary 21 Musgrave, Alan 38, 100–101; criticism of entity Oddie, G. 137, 192 realism 125 Okasha, S. 65 Oldham, Richard 334 Nagel, Ernest 11, 13 Oldroyd, D. 333 naïve realism 296 On the Origin of Species 201 natural historical attitude 323 Oreskes, N. 333 naturalising the mind 361–362 ostension, historical 215–220 naturalism 8; introduction to 407–409; oxygen theory of combustion 145 mathematical ontology 410–411; mathematical realism and 407–417; realism, and scientific Papineau, David 261, 263–264 confirmation 414–415; “second philosophy” paradigms 73–76 369; as thin realism or arealism 415–417 parsons 323 Naturalism in Mathematics 415–416 partial structures 109–110 naturalistic thin realism 415–417 partisanship: Kuhn, Rorty, and Putnam 387–391; naturalized epistemology 264 modest 391–392; natural ontological attitude natural kinds: cluster approaches to 442–443; (NOA) 385–387 introduction to 434; naturalness notion and Pashby, T. 55, 56 434–435; realism concerning 439–442; scientific Paul, L. A. 403–404 realism and realism about 436–437 Pauling, Linus 354 naturalness 434–435 Peirce, Charles Sanders 181, 189, 230 natural ontological attitude (NOA) 230, 385–387 perceptual knowledge 427–430 Navier-Stokes equation 171–172 perennial realism debate 225–227 Needham, P. 347 performance adequacy, perspectival standards of 172 Nelson, J. 378 Perrin, J. 151, 161, 206, 345, 353; experiments by neoclassical theory 370–371 154–156; local realist approach to 156–160 neural computations 359–360, 363; dependence on perspectival realism (PR) 170–173 the enquirer versus the subject in 364–365 perspectivism: epistemic grounds for diachronic neutrality 13, 384–385 166–168; located in the landscape of realism neutral observation language 80 164–166; made compatible with realism Newman, M.H.A. 
115 170–173; methodological grounds for Newman objection 115–117 synchronic 168–170; modeling and 244–246; New Neoclassical Synthesis 372 pluralism and pragmatism 173–174

Pessimistic Induction (PI) 92, 197, 212, 223, 292, motivation in structural realism 114; on 429–430; Causal Theory and 440–441; as global instrumentalism 88–89; plausible conception argument 151 of rationality 231; on selective realism 42, 54; Pessimistic Meta-Induction 397, 429–430; see also on structure-versus-non-structure distinction Pessimistic Induction 116–117; on truth 384–385 philosophical problems of progress 187–188 Ptolemaic astronomy 39, 56, 133, 165, 169, 188 phlogiston theory 54–55, 113, 145, 180–181, 183 Putnam, Hilary 17, 22–23, 133, 143, 177, 207, 346, phrenological knowledge 262–263 348, 383; Causal Theory and 441; on existence physico-scientific reality 16 of God 408; explanatory reasoning and 204; Pierson, Robert 126–127, 129 global approach of 152; internal realism 194; Pincock, C. 245 on Kant 165–166; on naturalism 408; on neural Planck, Max 9, 184 computation and anti-realism 359–360; on Plato 187, 189, 437, 439 shared genetic structure 442 Platonism 394 pluralism 173–174; compatibility with quantum electrodynamics (QED) 294 scientific realism 176–185; conceptual 193; quantum mechanics (QM): ambiguity in 302; conservationist 183–184; from pluralistic success crash course in 294–296; interpreting 293–294, to metaphysical 179–181; in practice of realism 300–301; introduction to 291; realism and 184–185; success of science and 177–179 297–300; satellite level 291–293; some Poincaré, Henri 31, 48, 156, 212; on non-literal interpretations of 296–297 scientific discourse 86; on observations as theory quasi-particles 124 laden 91; on pessimistic induction 92; as pioneer quasi-set theory 112 of structural realism 108 Quine, W.V.O. 80, 97, 203, 207, 373, 441; on Pojman, P. 85–86 naturalism 408; notion of weak discernibility Polchinski, Joseph 280, 284–285, 287 401; on observation 98, 100, 102 Popper, Karl 196; classification of different kinds of Quine-Putnam indispensability argument for truthlikeness 134–136; concept of truthlikeness mathematical realism (QPIA) 408–409, 415–417 133–134; conjunction-of-parts and disjunction- of-possibilities accounts to truthlikeness radicalism 111 137–141; on fallibilism 189–191; intuitive Ramsey, Frank 21–22, 115–116 assumptions, formal explication and breakdown Ramsey-sentence 21–22, 30–31, 109, 116 of idea of 136–137 Rancourt, B. T. 
86 positivism, logical 394 realisation 362–363 pragmatic realism 12–14, 173–174 realist intuition 49–50 pragmatism 173–174 realist turn in the philosophy of science: pragmatist coherence 181–184 concluding thoughts on 31–32; confutation of Preliminary Reference Earth Model (PREM) 341 convergent realism in 27–28; getting nearer to Priestley, Joseph 54, 180, 441 the truth in 26–27; instrumentalism in 21–22; primordial cosmology: confirming a multiverse introduction to 20; looking for a role for history theory in 311–316; inflation in the very early in 25–30; negative and positive arguments for universe 307–311; introduction to 304–307; realism 22–24; Principle of No Privilege in plethora of models in 310–311; prospectus 25–26; semantic realism in 20–21; structural 306–307; scientific realism and 316–317; what realism in 30–31; three theses of realism in caused accelerating expansion in 308–310 24–25; warrant-remover argument in 29–30 Principle of Correspondence 193 Reality 181–185 Principle of the Identity of Indiscernibles 398, 400; real realism 173, 176 weak discernibility 401 Redhead, Michael 109 Principle of No Privilege 25–27 reductionism 347 principle of retrogression 10 reductive base 365–366 probabilistic realism 9–11, 17 redundancy of explanations 126–128 problem of unconceived alternatives (PUA) 68; Reed, R. 377 see also unconceived alternatives reference 24–25 promiscuous realism 173, 182, 443 Reichenbach, Hans 9–11, 17 Psillos, Stathis 11, 12, 29, 32, 45, 50, 56, 152, Reiner, Richard 126–127, 129 193–194, 203, 268, 390; ambitious realism and relativism, epistemic 74–77 428; on explanatory criterion of reality 206; reliabilism and externalism 422–425, 431 explanatory reasoning and 204; on foundational Rescher, Nicholas 197

Resnik, David 126 reasoning and 200–209; locating perspectivism Richtmeyer, Robert 255 in landscape of 164–166; metaphysics and 7–9, Ringeval, C. 310 394–405; minimal 57, 375; Miracles argument Roberts, Bryan 283–284 291–292, 298; modeling and 237–246; Rodrik, D. 376 naturalism and mathematical 407–417; natural Rorty, Richard 383, 388–389 kinds for 434–443; negative and positive Rosen, G. 413 arguments for 22–24; No Miracles Argument Rosenberg, A. 371 23, 37–39; nonempirical confirmation and Ross, D. 113, 116, 400 285; perennial nature of debate over 225–227; Roush, Sherri 159 perspectivism made compatible with 170–173; Rowbottom, Daniel 194, 227 philosophy of simulation and 250–259; Rueger, Alexander 168, 245 pluralism compatibility with 176–185; pluralism Ruhmkorff, S. 214 in the practice of 184–185; primordial Rumford, Count 180 cosmology and 304–317; probabilistic 9–11, Russell, Bertrand 108, 111, 116; correspondence 17; promiscuous 443; Putnam’s argument for theory of truth 385 23–24; quantum mechanics and 297–300; and Rydberg, Johannes 55 realism about kinds 436–437; reference, truth Ryle, G. 358 and continuity in 24–25; Reichenbach on 11; scepticism and 427–430; selective 42, 194–196; Saatsi, Juha 55–57, 153–154, 402; on structural semantic/pragmatic 12–14, 17, 20–21; social realism 113 epistemology and 261–273; success of science Salmon, W. C. 67, 161 and 37–46, 250–259; theories and truth and Sarton, Georg 188 383–392; three theses of 24–25 Saunders, S. 400 Searle, J. R. 359–360 scepticism and realism 427–430 second-order mental states 365 Scheffler, Israel 81, 176 Seilacher, Adolf 321 Schlick, M. 7–9, 17 selective realism 42, 194–196 Schrödinger equation 296–297, 346, 353, 402–403 selective realist strategy 54–56 Schurz, G. 56, 113, 383; on truthlikeness 136–140, 146 self-corrective thesis 189 science, success of: abductive inference and 40–42; Sellars, Wilfrid 21, 120, 193, 203 anti-realist alternatives 44–45; concluding semirealism 194 remarks on 45; introduction to 37; No Miracles Sepkoski, Jack 322 Argument (NMA) and 37–39; pluralism and Shapere, Dudley 122, 373 177–179; rethinking the evidence for 39–40; Shapin, Steven 262 scientific realism and 250–259; selective realism Shapiro, Stewart 410–411 and 42; varieties of 43–44 simplicity 99 scientific essentialism 441–442 simulation, philosophy of: introduction to scientific progress: antirealist accounts of 196–197; 250–251; examples 252–254; some objections debates among realists in 194–196; fallibilism to 254–257; synthetic thermostats and 253–254; and truthlikeness in 189–194; introduction to vorticity confinement and vortex particles and 187; philosophical problems of 187–188 253; Wimsatt and 257–259 scientific realism 1–4; about cognitive science skepticism 96–97 357–367; about historical science 321–330; Smith, V. 376 active 184; ambitious 428–429; challenge Smoluchowksi, Marian 156 and response to 438–439; chemistry and Sober, E. 
90 345–354; concerning natural kinds 439–442; sociability-based pluralism 173–174 contemporary selective scientific (CSSR) 50–56; social constructivist arguments in economics 377–378 earth science and 333–342; economics and sociology of scientific knowledge (SSK) 167; 357–367; empirical 7, 9–16; entity 43, 120–130, introduction to 261; irenic resolution 267–273; 194, 281–282; epistemic relativism and 74–77; Kochan’s defense of the strong programme epistemic stances 227–230; epistemology and in 266–267; Lewens’s ambivalent response to 419–432; experimental 179–180; Feigl on 14; 264–265; Papineau’s conciliatory response to global versus local arguments for 151–161; 263–264; strong programme 262–263; Tosh’s high-energy physics and 279–289; historical uncompromising response to 265–266 challenges to 48–57; historicist challenges to Sorensen, R. 89, 241 212–215; IBE and 203–206; kinds of 357–360; Sperling, G. 358 Kuhn and 72–82; limits of explanatory sphere-model, Lewis-type 140

Spurzheim, Caspar 262 truthlikeness 196–197; classification of different Srednicki-Hartle proposal 314–315 kinds of 134–136; conclusion and outlook 146; St. Clair Deville, Henrí-Etienne 352 conjunction-of-parts and disjunction-of-possibilities Stachel, J. 114 accounts to 137–141; epistemic evaluation of Stahl, Georg Ernst 180 142–143; fallibilism and 189–194; inference from standalone models 238 empirical success to 143–146; introduction to 133; Standard Model of elementary particle physics intuitive assumptions, formal explication and 399, 402 breakdown of 136–137; Popper’s concept of Stanford, Kyle 45, 52, 55–56, 130, 304; on 133–134; problem of language dependence in instrumentalism 91–93; on Pessimistic Induction 141–142 213; on reliabilism 424; on scientific theories truth-simpliciter 408 202; on unconceived alternatives 157, 159–160; truth theory of meaning 10 on underdetermination 68 Tulodziecki, Dana 56; Entailment Thesis and Stein, Howard 221 66–67 Strategy of Historical Ostension 215–220; Turner, D. 68–69 instruments and miracles 220–222 Strevens, Michael 243 Ultimate Argument for Realism 38 string dualities 285–287 unconceived alternatives 157, 159–160; historicist string theory 288 challenges to scientific realism and 212–215; structural realism 30–31, 438–439; challenges instruments and miracles 220–222; Strategy to 115–117; epistemic and ontic variants of Historical Ostension and 215–220; see also 110–112; group 283–285; introduction to 108; antirealism motivations in 112–115; Newman objection to underconsideration 420, 422, 431 115–117; notions of structure and 108–110 underdetermination 372; Empirical Equivalence structure, notions of 108–110; set-theoretic Thesis (EET) and 60–65; Entailment Thesis conception 109 (ET) and 60, 65–68; introduction to 60–61; Suárez, Mauricio 126, 129 recent discussions on 68–69 success of science see science, success of Uniformitarianism 214–215, 222–223; instruments success-to-truth rule 251–252 and miracles 220–222; Strategy of Historical Sudbury neutrino observatory 92 Ostension and 217–220 synchronic perspectivism 168–170 unobservability 98–105 syntactic instrumentalism 88 synthetic thermostats 253–254 van Fraassen, Bas 20, 41–42, 229, 245, 291, 304, 391; on aim of science 177; anti-realist Tambolo, L. 195 alternatives 44–45; as axiological non-realist taxonomic pluralism 173–174 196; on constructive empiricism 85, 220, Tegmark, M. 315 386, 425–426; on criteria of assessment Teller, Paul 168, 171, 245 for stances 231; distinction between terminal objects 110 acceptance and belief 103; on empiricist theory-independent truth 389–390 dogma 102; on the observable/unobservable theory of measurement 16 distinction 98, 100–101; on Perrin 156–157; thermodynamics 216 on restricting observability 62; on truth thin realism, naturalistic 415–417 384–385 Thorne, K. S. 307 Velikovsky, Immanuel 51 Tichý, P. 136–137, 191 Vennin, V. 
310 Tosh, Nick 261, 265–267, 272 verisimilitude 133–134, 191, 194–195 truth: correspondence theory of 385; introduction Vickers, Peter 39, 42, 51, 55–57 to notion of 383; Kuhn on progress and 77–79, Vienna Circle 9–10, 14 384, 387–389; mathematical 411–414; modelers viscosity, artificial 252–253 aiming at 238–241; modest correspondence Voltaire’s objection 422, 424, 431 theory of 391–392; modest partisanship 391–392; voluntarism: as about beliefs and stances 233–235; natural ontological attitude (NOA) 230, 385–387; perennial realism debate and 225–227 neutrality and 384–385; Newton-Smith on von Neumann, John 252, 255 26–27; pragmatist coherence as basis of reality and vortex particles 253 181–184; realism and theories of 383–392; Rorty vorticity confinement 253 on 388–389; in scientific realism 24–25; success and Votsis, I. 56, 117; on structural realism 111, 39–40, 251–252; theory-independent 389–390 113–114

Wallace, D. 298–299 Wilson, Mark 417 Walton, K. 241 Wimsatt, William 244, 257–259 Ward, P. 322 Winch, Peter 268 warrant-remover argument 29–30 Winsberg, Eric 40, 251–252, 255 water: early theories on bonding of 346–347; Woodhouse, J. 341 preserved extension 348–351 Worrall, John 30–31, 56, 116, 226, 292, 390–391; Watson, James 239–240 defense of structural realism 43, 113, 438–439 Wedgwood, Josiah 180 Wray, K. Brad 40, 45 Weinberg, E. J. 307 Wylie, A. 226 Weingartner, P. 136–140 Weismann, August 217–218, 222 Yablo, S. 241 Wheeler, J. A. 307 Williamson, T. 426 Zermelo-Fraenkel set theory 411
