
The Realism/Antirealism Debate in the Philosophy of Science

Dissertation

zur Erlangung des akademischen Grades des Doktors der Philosophie (Dr. phil.) an der Universität Konstanz, Geisteswissenschaftliche Sektion, Fachbereich Philosophie

vorgelegt von

Radu Dudau

Konstanz, Mai 2002

Contents

1 Introduction
   1.1 What is realism?
   1.2 Varieties of scientific realism

2 The Success Arguments for Scientific Realism
   2.1 The "No Miracle Argument"
   2.2 Smart's 'no cosmic coincidence' argument and Maxwell's argument from the epistemic virtues of theories
   2.3 The argument from realism's exclusive capacity to give causal explanations
   2.4 Van Fraassen's arguments against IBE
      2.4.1 The context-dependency objection
      2.4.2 The inconsistency objection
   2.5 Arguments against the ability of IBE to link empirical success with truthlikeness
      2.5.1 The downward path
      2.5.2 The upward path
   2.6 The Success of Science Argument
      2.6.1 Fine against IBE

3 The Experimental Argument for Entity Realism
   3.1 Atoms – from fictions to entities
   3.2 The common cause
   3.3 Manipulability, entities, and structure
      3.3.1 Entity realism and theory realism
      3.3.2 On structural realism

4 The Underdetermination Argument: The Theoretical/Observational Distinction
   4.1 The theoretician's dilemma

   4.2 Van Fraassen's observable/unobservable distinction
      4.2.1 Maxwell's continuity argument
      4.2.2 The technological argument
      4.2.3 The phenomenology of science
      4.2.4 The incoherence arguments
   4.3 Fodor's theory/observation distinction
      4.3.1 Against Holism
      4.3.2 Psychological arguments
   4.4 Kukla's observable/unobservable distinction

5 Against the Underdetermination Thesis
   5.1 Against algorithmically generated empirical equivalents
      5.1.1 The dismissal of T2
      5.1.2 The dismissal of T1
      5.1.3 The insufficiency of Kukla's solution to the problem of scientific disregard
   5.2 Versions of empirical equivalence
   5.3 Arguments against the entailment thesis
      5.3.1 EE does not entail UD

6 Social Constructivism
   6.1 Varieties of social constructivism
   6.2 The reflexivity problem
      6.2.1 The reflexivity of metaphysical constructivism
      6.2.2 The reflexivity of epistemic constructivism
      6.2.3 The reflexivity of semantic constructivism
   6.3 Spatial and temporal inconsistencies
      6.3.1 Spatial inconsistencies
      6.3.2 Temporal inconsistencies

7 A Case for Selective Scientific Realism: S-Matrix Theory
   7.1 The S-Matrix Theory (SMT): a historical case study
      7.1.1 Quantum field theory (QFT)
      7.1.2 The origins of the S-matrix. S-matrix theory (SMT)
   7.2 Philosophical conclusions

8 Appendix: Truthlikeness
   A.1 Popper's theory of verisimilitude
   A.2 The possible worlds/similarity approach

   A.3 Anti-truthlikeness: Giere's constructive-realist proposal

Summary

Zusammenfassung

References

Chapter 1

Introduction

1.1 What is realism?

The term 'realism' designates a family of philosophical doctrines about the world. What these doctrines have in common is the claim that we confront a material world, existing objectively and independently of our minds and language. Typically, this ontological claim is accompanied by the epistemic claim that we can and indeed do have knowledge of the external world.

Upon careful inspection, the ontological thesis reveals two dimensions of realism: an independence dimension and an existence dimension. An entity exists independently in that it does not depend on our epistemic capacities. In other words, "it is not constituted by our knowledge, by the synthesizing powers of our mind, nor by our imposition of concepts or theories." (Devitt 1984: 13). In this sense, Kant's phenomenal world and Goodman's world-versions (see 6.3) do not have independent existence.

Realism allows for a part of the world to be dependent on our epistemic capacities. As will be shown in chapter 6, we can accept that many facts about the world are social constructions. However, it is presupposed that there is a brute external world out of which these facts are constructed. The view that all the facts about the world are constructed represents a radical kind of constructivism (see 6.1) which splits with the realist assumption of an external material world. A corollary of this assumption is that the world does not entirely consist of mental states. The realist's antagonist in this dimension is the idealist. The idealist argues that the mind (or spirit) constitutes a fundamental reality and that the physical world exists only as an appearance to, or an expression of, the mind. The radical constructivist is an idealist, one who, following Berkeley, thinks of physical objects as collections of sensory ideas.

The existence dimension of realism is concerned with the entities that are claimed to exist. We can picture the existence dimension as a vertical one, having at its lowest level the claim that something exists objectively and independently of the mental (Devitt 1984: 15). Devitt labels this a fig-leaf realism, committed to nothing above an undifferentiated and uncategorized brute world. This might be Goodman's (1978) world "without kinds or order or motion or rest or pattern", "a world not worth fighting for" (1978: 20). (We shall see in chapter 6, when discussing the spatial and temporal inconsistencies of constructivism, that this world is not as worthless as some seem to believe.)

Next on the vertical dimension comes the claim that common-sense entities, such as stones and trees, exist independently of us; this represents a commonsense realism. A significant move upwards on the vertical dimension consists in the claim that the unobservable entities posited by scientific theories, such as electrons and genes, exist independently of us. This describes scientific realism, the doctrine we are mostly concerned with here. Finally, to maintain that abstract entities – which will be labelled 'ideas' in section 6.1 – such as numbers, values, propositions, etc., exist, is to adopt abstract realism.

As Kukla (1998: 4) remarks, in spite of the logical independence of these sorts of realism, those who adopt any of these levels tend to accept all the lower levels. Scientific realists are most of the time commonsense realists, just as abstract realists have no problem in admitting the existence both of stones and of electrons. However, there are notable exceptions: Platonism is a species of abstract realism which admits exclusively the existence of abstract entities.

The above-mentioned epistemic claim presupposes, as Wright (1993) puts it, two sorts of ability:

the ability to form the right concepts for the classification of genuine, objective features of the world; and the ability to come to know, or at least reasonably to believe, true statements about the world whose expression those concepts make possible. (Wright 1993: 2)

The epistemic opponent of the realist is the sceptic. The sceptic does not dispute the independent existence of an external world, but refuses to admit that our epistemic practices can provide us with knowledge or warranted beliefs about this world.

Wright's passage above also introduces a semantic aspect into the discussion of realism: the truth of statements about the world. The semantic issue of realism is whether truth is an objective relation between language and reality. It is common to take semantic realism as defining truth as a correspondence between language and reality. However, one ought to note the diversity of views about truth within the realist camp. Most scientific realists do indeed embrace a correspondence theory of truth. Among them, some take truth as constitutive of realism: Hooker (1974: 409) states that realism is a semantic thesis, "the view that if a scientific theory is in fact true then there are in the world exactly those entities which the theory says there are. . . ."; Ellis defines scientific realism as the view that "the theoretical statements of science are, or purport to be, true generalized descriptions of reality" (1979: 28); according to Hesse (1967), realism holds that "theories consist of true or false statements referring to 'real' or 'existing' entities." (1967: 407). Other supporters of the correspondence theory of truth believe that scientific realism has nothing essential to do with truth (e.g. Devitt 1984; 2001). Still other realists believe that truth has no substantial nature at all, hence no bearing on scientific realism: the deflationary views – Ramsey's (1927) redundancy theory, Quine's (1970) disquotationalism, Horwich's (2001) minimalism – share the conviction that there is nothing more to the truth of a sentence/proposition than our commitment to the sentence/proposition itself. It is also interesting that most antirealists find it convenient to construe scientific realism in terms of the correspondence theory of truth. Van Fraassen, for example, ascribes to scientific realism the claim that "science aims to give us, in its theories, a literally true story of what the world is like" (1980: 8).

Having described the ontological, the epistemic, and the semantic aspects of realism, it is important that we keep them distinct. Provided that one believes in ontological realism, it is optional whether one is also an epistemic and/or a semantic realist. For one thing, one may subscribe to ontological realism without subscribing to epistemic realism: belief in the existence of an external world does not entail that this world is in any sense ascertainable to us. The reverse does not hold: one cannot have knowledge about the world if there is no world. It follows that epistemic realism logically entails ontological realism. For another, one can be a semantic realist without thereby being an epistemic realist: one can maintain that statements about the world have truth-values regardless of whether we can come to know them or not. At the same time, epistemic realism cannot be in place without semantic realism: one cannot have knowledge about anything in the world unless statements about the world are truth-valuable. It follows that epistemic realism logically entails semantic realism.

The conceptual relation between ontological and semantic realism seems to be one of independence. On the one hand, one can admit the independent existence of a physical world without taking one's beliefs to refer to anything in the world. Thus, ontological realism does not entail semantic realism. On the other hand, semantic realism specifies what the world should be like in order to make sentences about the world truth-valuable (i.e. either true or false). This in no way involves commitment to the view that the world is indeed like this. Thus, semantic realism does not entail ontological realism.

Certainly, these conceptual relationships should not hinder our awareness of the intertwining of ontological, semantic, and epistemic issues in a constitutive sense: the ontological thesis has been formulated in terms of the external world's independence of our epistemic capacities. Besides, both the ontological and the epistemic thesis may need to be subjected to a meaning analysis.

1.2 Varieties of scientific realism

Scientific realism has been located on the vertical, ontological dimension of realism. It has been defined via the claim that the unobservable entities posited by scientific theories, such as electrons and genes, exist independently of us. We can acquire a more precise understanding of scientific realism by analyzing it under the ontological, semantic, and epistemic aspects that we have identified within the general doctrine of realism.

Metaphysical scientific realism – henceforth MSR – is the claim that the unobservable entities posited by science exist objectively and independently of us. It assumes the existence of common-sense entities like stones and trees. We saw that the opponent of the doctrine of realism on the ontological line was the idealist. However, to deny scientific realism does not in the least involve a radical departure from belief in the existence of an independent external world: one can dismiss scientific realism while embracing commonsense realism. From the position of commonsense realism, a rejection of MSR will mean either to take all sentences about theoretical entities as being false; or to be agnostic about the existence of such entities; or, finally, to take it that all such sentences are ill-formed, hence nonsensical.

Semantic scientific realism – henceforth SSR – is the claim that all statements about theoretical entities have truth values. As pointed out above, semantic realism only tells us what the truth-makers of factual sentences are, i.e. how the world should be in order to make these sentences truth-valuable; it does not entail that the world is indeed like that. Particularized to SSR, the claim is that statements about theoretical entities are truth-valuable. As Kukla indicates, the statement "Electrons are flowing from point A to point B" would be true if and only if electrons were indeed flowing from A to B (Kukla 1998: 8). Yet to accept all of this does not mean that the actual existence of electrons must be accepted, nor anything else about their properties.

The rejection of SSR can well take place from the position of commonsense semantic realism. This is the stance taken in semantic instrumentalism, through both its variants: eliminative and non-eliminative. Eliminative semantic instrumentalism (also known as reductionism) states that theoretical terms are to be defined in observational terms, and that theoretical statements are to be translated into observational statements. Presupposed, of course, is a clear-cut distinction between the theoretical and the observational, an assumption which we shall criticize and reject in chapter 4. Non-eliminative semantic instrumentalism views theoretical terms as semantically uninterpreted instruments useful for organizing observations.

Epistemic scientific realism – henceforth ESR – is usually taken to be the claim that we can and do acquire knowledge about the existence of theoretical entities. ESR requires multiple qualifications. (1) According to the scope of the knowledge claimed, there are two kinds of ESR: a restrained kind, claiming knowledge only of the existence of theoretical entities, and an extended one, additionally claiming knowledge of the properties of, and relations between, these entities. The former will concern us throughout this work. (2) According to the strength of the knowledge claims, there is a strong ESR stating that we know our scientific theories to be strictly true. Yet such a claim is so fragile as to be untenable. First, all past scientific theories have turned out to be stricto sensu false, so that, by "pessimistic meta-induction", we can conclude that it is very improbable that our current theories are all true. Second, any scientific theory involves idealizations, approximations, and ceteris paribus clauses, which inevitably induce a degree of imprecision in the scientific statements. Therefore, to embrace epistemic standards so high as to accept only literally true sentences would mean virtually to expel the entire body of science (see the Appendix).

Scientific realists have learned the lesson of fallibilism. They do not actually claim more than knowledge of the approximate truth of our well-established theories. Approximate truth has been criticized for lack of conceptual clarity by both friends and foes. As shown in the Appendix, while some scientific realists, such as Devitt (1991) and Psillos (1999), advocate an intuitive notion of approximate truth, critics – among whom Laudan (1981) is the most adamant – object that the concept lacks the minimal clarity needed to assess whether it can be of any philosophical avail. However, Niiniluoto (1999) offers a robust theory of approximate truth and of its cognates, truthlikeness and verisimilitude; we rely upon it when embracing the claim that we have knowledge of the approximate truth of our best theories.

A weaker epistemic claim is that we are rationally warranted to believe in our well-established theories. This position circumvents the attack by pessimistic meta-induction, but only at the price of a major inconvenience: it cannot explain the methodological success of science (see 2.4). By descending even further down the epistemic scale, we reach the point where it is claimed that "it is logically and nomologically possible to attain a state that warrants belief in a theory." (Kukla 1998: 11). Such a view can barely explain anything about science; I take it that its only merit is didactic: it shows how close one can get to scepticism while still remaining an epistemic realist.

Anything that goes beneath this level ought to be called epistemic antirealism. Van Fraassen's constructive empiricism, which we shall discuss in detail, is a famous species of epistemic antirealism. Van Fraassen believes in the truth of scientific theories with respect to their observable posits, but declines belief when it comes to unobservables. Constructive empiricism takes empirical adequacy, and not truth, as the goal of science. Accordingly, empirically successful theories are to be accepted, i.e. believed to be empirically adequate, not believed to be (approximately) true. We shall devote extensive space to criticizing constructive empiricism in several respects – see sections 2.1, 2.4, and 3.2.

Having set out the varieties of scientific realism and the conceptual relations between them, it is appropriate to present our working definition of scientific realism: Scientific realism is the doctrine according to which

(i) Most of the essential unobservables posited by our well-established current scientific theories exist independently of our minds.

(ii) We know our well-established scientific theories to be approximately true.

Claim (i) was stated by Devitt (2003) and presents the ontological aspect of scientific realism. We shall discuss in chapter 6 one important contender against the independence dimension of realism: social constructivism. Several varieties of social constructivism will be distinguished and inspected from the standpoint of their consistency. Among them, radical social constructivism denies that there is any part of the external world which is not of our making: all facts are the outcomes of intentional activity. Thus, radical constructivism turns out to be a form of idealism.

Claim (ii) underscores the epistemic aspect of our concept of scientific realism. Epistemic scientific realism is seriously challenged by the underdetermination argument, which constitutes the theme of chapters 4 and 5. One premise of the underdetermination argument is the empiricist assumption that the only warranted knowledge is that concerning observable entities. The second premise is that for any given body of observational evidence, there are indefinitely many theories which entail it. Therefore, as the argument goes, the epistemic ascent to any particular theory is blocked. In other words, theories are underdetermined by the empirical data.

In chapter 4, we shall criticize and dismiss the underdetermination argument by showing that there is no principled way to draw a distinction between the observable and the theoretical, on which its first premise relies. We argue next (chapter 5) that, even if such a distinction could be made, the second premise (the empirical equivalence thesis) cannot be established in a form that generally blocks epistemic ascent to the best theory.

An implicit point about the definition of scientific realism concerns its scope of application. Scientific realism is often taken as a global, overarching doctrine, appropriate in accounting for most cases of successful scientific practice. However, scientific realism ought to be more true to scientific life: it ought to do justice to those cases in which scientific theories have merely an instrumental role, incorporating elements constructed without causal constraints. Chapter 7 illustrates such a case (the S-matrix theory of strong interaction) and pleads for a selective scientific realism.

Acknowledgments

I am very much obliged to my doctoral supervisor, Professor Wolfgang Spohn, for his invaluable assistance. His criticism and demand for clarity and precision were doubly outweighed by his patience and confidence in the merits of my work. My gratitude also extends to professors William Newton-Smith, James Robert Brown, , and Gereon Wolters for their advice and guidance through the intricate paths of scientific realism. Ludwig Fahrbach and Erik Olsson provided me with welcome criticism and commentary, either by reading various sections of my dissertation or through conversation about its contents.

This dissertation would not have reached completion were it not for the financial support of the Open Society Institute and the Herbert Quandt Stiftung. I am thankful to Dr. Gerhild Framhein for her generosity and understanding, as well as to Professor Andrei Hoisie. I am indebted to the DAAD for making possible my academic contact with Konstanz University. I am also very much indebted to Professors Erhard Roy Wiehn and Kathleen Wilkes for their moral support.

Finally, I wish to thank my friends Debbie Allen, for improving the English standards of the present work, and Sascha Wolff, for his patient LaTeX advice. Special thanks to my friend Till Lorenzen for all his support.

Chapter 2

The Success Arguments for Scientific Realism

It is virtually incontestable that science is an immensely successful enterprise. First, science is successful in entailing successful predictions. Second, scientific methods have proven effective in generating successful theories. Let us call the former aspect the empirical success of science, and the latter the methodological success of science. According to the scientific realist, both sorts of success are non-trivial facts. They demand explanation: Why do scientific theories tend to produce correct observational predictions and to deliver adequate explanations of observable phenomena? By which means is scientific methodology so good at forming successful belief systems?

The reason why the scientific realist thinks that the success of science undergirds his doctrine is that he arguably has the best explanations for both the empirical and the methodological success of science. Indeed, his argumentation relies on an inference to the best explanation (henceforth IBE): the best explanation for the fact that scientific theories are empirically successful is that theoretical terms typically refer, and that theoretical statements are approximately true or truthlike. Similarly, the best explanation for the methodological success of science is that scientific methodology is reliable (in a sense to be explained in subsection 2.4.2). It is important to emphasize that the explananda of the two IBEs are different. On the one hand, the empirical success argument seeks to explain the success of theories in systematizing and explaining phenomena, and in making highly confirmed predictions. On the other, the methodological success argument attempts to explain the success of scientific methods in producing successful theories.1

1As we shall see in section 2.2, one antirealist argument capitalizes on the alleged insufficient stringency of IBE at the methodological level.

Second, the explanandum of the empirical success argument comprises two parts, asserting, respectively, that (i) theoretical terms are referential, and (ii) theoretical statements are approximately true. I would like to offer two remarks about them. The first and general one is that, in line with Devitt (1984; 2003), I take it that it is not essential to state the argument by means of the terms 'refer' and 'true':

...such usage should be seen as exploiting the disquotational properties of the terms with no commitment to a robust relation between language and the world. The realist argument should be that success is explained by the properties of unobservables, not by the properties of truth and reference. So the argument could be urged by a deflationist. (Devitt 2003: Fn. 11).

I shall not go into the details of any specific theory of truth, since I take it that no particular view of truth is constitutive of realism – see section 1 of the introductory chapter. I subscribe to Devitt's (1984: 4) Third Maxim, which requires us to settle the realism issue before any semantic issue. The benefits of disquotation also extend to the concepts of approximate truth and truthlikeness. Instead of talking about the approximate truth of the sentence 'a is F', we can just talk of a's approximately being F. However, with respect to approximate truth and truthlikeness, I often prefer to talk in these terms rather than disquotationally. The reason is that, on many occasions, we want quantitative comparisons of truthlikeness, and disquotation would only make them more awkward. Besides, I also believe that we have a robust and serviceable account of truthlikeness, given by Niiniluoto (1999) (see A.2).

My other remark, about the distinction between the two parts within the explanandum of the empirical success of science, is that each of them corresponds to a different version of realism. The claim that most (essential) unobservable entities posited by scientific theories exist independently of our minds, language, and representations defines the doctrine of scientific entity realism. While entity realism is committed to science's being mostly right about the entities it posits, it is partly noncommittal on the truth values of those theoretical sentences describing the properties of entities and the relations between them. Nonetheless, many realists defend a logically stronger version of scientific realism, committed not only to theoretical entities, but also to the descriptions of their properties. This is the doctrine that Devitt calls strong scientific realism: "most of the essential unobservables of well-established current scientific theories exist mind-independently and mostly have the properties attributed to them by science." (Devitt 2003). Since strong scientific realism seems to be embraced by most self-declared scientific realists, I label it scientific realism and distinguish it from mere entity realism. It is clear that scientific realism implies entity realism, but not the converse. Moreover, the best-known advocates of the latter (Hacking 1983; Cartwright 1983) explicitly argue against what they call theory realism – realism about scientific laws – thereby denying the scientific realist claim complementary to entity realism.

As we have seen, the argumentation for scientific realism proceeds by IBE. This implies that any IBE in favor of scientific realism will also support entity realism. Yet entity realism enjoys supplementary support from the so-called experimental argument, which will be the topic of the next chapter. The efficiency of IBE in defending scientific realism has been contested in two important ways. First, some anti-realists (van Fraassen (1984; 1989); Fine (1984; 1986; 1991)) levelled objections of principle against IBE-based arguments, denouncing them as context-dependent, inconsistent, and viciously circular; I dismiss these objections in 2.4 and 2.6. A different kind of criticism concerns IBE's specific role in the defence of scientific realism; Laudan (1984) deployed the most extensive attack of this kind, and I argue against it in section 2.5.

Let us now outline the argumentative strategy of this chapter. Sections 2.1–2.3 present several explanationist arguments for the empirical success of science: Putnam's (1975; 1978) 'no miracle argument' (NMA), the most popular formulation of an IBE-based explanation of the success of science (2.1); Smart's (1963) 'no cosmic coincidence argument' and Maxwell's (1970) argument from the empirical virtues of realistically interpreted theories (2.2); and, finally, my argument based on the exclusive ability of realistically interpreted theories to give causal explanations (2.3). Section 2.4 investigates the general objections against IBE, while 2.5 continues with a detailed discussion of the alleged inability of IBE to connect the empirical success of science with the approximate truth of theories. Section 2.6 discusses Boyd's (1984; 1985) explanation of the methodological success of science, as well as Fine's replies to it.

2.1 The “No Miracle Argument”

The most famous argument from the empirical success of science is Putnam's (1975) 'no miracle argument' (henceforth NMA). NMA claims that the predictive success of scientific theories is best explained by their being approximately true:

The positive argument for realism is that it is the only philosophy that does not make the success of science a miracle. That terms in mature scientific theories typically refer (this formulation is due to Richard Boyd), that the theories accepted in a mature science are typically approximately true, that the same terms can refer to the same thing even when they occur in different theories – these statements are viewed not as necessary truths but as parts of the only scientific explanation of the success of science, and hence as part of any adequate description of science and its relations to its objects. (Putnam 1975: 73)

The argument emphasizes the overwhelming improbability – indeed the miraculousness – of any explanation which would not rely on the referentiality of theoretical terms and on the approximate truth of scientific theories. Putnam does not bother here to distinguish between entity realism and scientific realism. However, if cogent, his argument will defend both the claim that theoretical terms typically refer, and the logically stronger one that the theories themselves are approximately true.

NMA is obviously an IBE-based argument: that theoretical terms refer, and that scientific theories are approximately true, is the best explanation of why phenomena are the way those theories predict them to be. This, according to NMA, is not only a good explanation of empirical success, but the best explanation we have for it. Suppose we ask, for example, why the observations that scientists report are just as they would be if there were atoms. The realist answer is: because there are atoms and – a stronger claim – because the atomic theories are approximately true. Were this not the case, what else but a miracle would explain the empirical success of theories? (See section 3.1 for a historical case study of modern atomism.)

As can be seen, the explanans of NMA is not the strict truth of scientific theories, but their approximate truth, or truthlikeness. Strict truth deductively entails the truth of all consequences of a given theory. However, theoretical descriptions are most of the time only approximately correct or truthlike. Approximate truth and truthlikeness are not uncontroversial notions. Among others, Laudan (1981) maintains that these are undefined notions, and accordingly disapproves of the realist's explanatory appeal to such "mumbo-jumbo" (1981: 32). However, for one thing, the notion of approximate truth has quite strong intuitive support. As Devitt (2003) notes, "science and life are replete with such explanations; for example, a's being approximately spherical explains why it rolls." For another, it is surely not the case that approximate truth is an undefined notion. Quite the contrary, there is an appreciable literature approaching a quantitative definition of approximate truth via its related concept, truthlikeness – see, among others, Oddie (1986), Kuipers (1992), and Niiniluoto (1987; 1999). As far as I am concerned, I favor Niiniluoto's similarity approach (see the Appendix). I take it to be a robust account of truthlikeness, dependable for most purposes of our analysis.

2.2 Smart's 'no cosmic coincidence' argument and Maxwell's argument from the epistemic virtues of theories

NMA was preceded by quite similar statements by J. J. C. Smart (1963) and Grover Maxwell (1970). In Smart's and Maxwell's approaches, the archrival of realism is the instrumentalist understanding of science.

Semantic instrumentalism assumes that the language of science is to be divided into an observational and a theoretical part. The observational language contains, apart from the logical vocabulary, only observational terms, directly connected to the empirical world through 'operational definitions'. As pointed out in the introductory chapter, non-eliminative semantic instrumentalism takes theoretical terms to have the role of systematizing observational statements, thus making theories simpler and more economical. They are linguistic instruments which have no referents, so that the statements containing them do not have truth values. Thus, according to instrumentalism, statements about, say, electrons are nothing but instruments meant to enable us to make predictions, at the observational level, about tracks in the cloud chamber.2

For reasons of convenience, instrumentalism is sometimes depicted in terms of a 'black box' metaphor. In the apt description by Bird,

One puts information into the box regarding observed background conditions, and the box generates predictions regarding what one will observe. What one wants from such a black box is that if the input information is accurate, then the predictions it yields will be accurate too. We are not especially concerned with the mechanism inside the box. That can be anything so long as it works. In particular, there is no requirement that it depict the way the world is. (Bird 1999: 125–6)

Smart argued that instrumentalism has no means to account for a multitude of ontologically disconnected phenomena other than belief in cosmic coincidences. By contrast, scientific realism offers a close-at-hand and reasonable explanation, which leaves no room for large-scale fortuitousness. As Smart puts it,

Is it not odd that the phenomena of the world should be such as to make a purely instrumental theory true? On the other hand, if we interpret a theory in the realist way, then we have no need for such a cosmic coincidence: it is not surprising that galvanometers and cloud chambers behave in the sort of way they do, for if there are really electrons, etc., this is just what we should expect. (Smart 1963: 39)

2I shall henceforth be explicit about the varieties of instrumentalism whenever the distinction is relevant and the context itself cannot indicate it.

At first sight, Putnam's and Smart's formulations are virtually identical: the former speaks of miracles, the latter of cosmic coincidences. However, as Psillos (1999: 72–3) has pointed out, their argumentative structures are different. While Putnam's argument is empirical, Smart's is a piece of conceptual analysis; i.e., while Putnam's NMA relies on an abductive inference, Smart's argument is part of his attempt to clarify a conceptual dispute concerning the ontological foundations of science. The realist–instrumentalist dispute instantiates such a conceptual confrontation with respect to the interpretation of scientific theories. As Psillos aptly states,

...Smart's 'no cosmic coincidence argument' relies on primarily intuitive judgements as to what is plausible and what requires explanation. It claims that it is intuitively more plausible to accept realism over instrumentalism because realism leaves less things unexplained and coincidental than does instrumentalism. Its argumentative force, if any, is that anyone with an open mind and good sense could and would find the conclusion of the argument intuitively plausible, persuasive and rational to accept – though not logically compelling. (Psillos 1999: 73)

So Smart argued from the intuitive plausibility of the realist position. His 'no cosmic coincidence' argument relies on intuitive judgements about what is plausible and what needs to be explained. The point, as Psillos phrases it, is that it is intuitively more plausible to accept realism over instrumentalism, because realism leaves fewer things unexplained and coincidental than instrumentalism does (cf. Psillos 1999: 73).3

An attempt to account for the plausibility of the realist judgements was made by Maxwell (1970), who turned to the epistemic virtues – such as explanatory power, simplicity, comprehensiveness, lack of ad hocness – of realistically interpreted scientific theories:

As our theoretical knowledge increases in scope and power, the competitors of realism become more and more convoluted and ad hoc and explain

3We should note that many realists are wary of talk of conceptual analysis and a priori reasoning. The so-called epistemological naturalists (see BonJour (1998)) defend the thesis that the one and only way of acquiring knowledge is empirical. Devitt's (1997, 1998) argument for this position is, first, that the very idea of the a priori is obscure and, second, that it is unnecessary, since an empirical approach to justification seems to be available. Stich (1998) argues that all that can be obtained by conceptual analysis is knowledge about our implicit assumptions about the nature of things – assumptions embedded in our language – and no knowledge about the nature of things themselves. However, without trying to resolve this complex debate here, I agree with Jackson's (1998) observation that "there is a lot of 'closet' conceptual analysis going on" (1998: vii). For example, one's uttering the sentence 'Jones is six foot and Smith is five foot ten' implies that Jones is taller than Smith (cf. Jackson (1998: 3)). It is precisely in this sense that an analysis of sentences' semantic properties can be located within an empirical account of the world: semantic knowledge is entailed by empirical knowledge.

less than realism. For one thing, they do not explain why the theories which, they maintain, are mere cognitively meaningless instruments are so successful – how it is that they can make such powerful, successful predictions. Realism explains this very simply by pointing out that the predictions are consequences of the true (or close to true) propositions that comprise the theories. (Maxwell 1970: 12)

Maxwell obviously submits that statements displaying epistemic virtues are more plausible than those which lack such virtues. Thus, as Psillos (1999: 74–5) indicates, Maxwell gives a Bayesian twist to his argument. Supposing that both realism and instrumentalism entail the empirical success of scientific theories, they will both have likelihoods equal to unity:

p(S | R) = p(S | I) = 1,

where R stands for realism, I for instrumentalism, and S for the empirical success of scientific theories. According to Bayes's theorem, the posterior probabilities of realism and of instrumentalism are, respectively,

p(R | S) = p(R)/p(S)

p(I | S) = p(I)/p(S),

where p(R) is the prior probability of realism, p(I) the prior probability of instrumentalism, and p(S) the probability of science's success. Certainly, p(S) does not depend on the philosophy of science which accounts for it, so it has the same value for both realism and instrumentalism. Therefore,

p(R | S) / p(I | S) = p(R) / p(I).

That means that any difference in the degree of confirmation of realism and instrumentalism stems from a difference in their respective priors.4 Arguing that realism is clearly better supplied with epistemic virtues than instrumentalism – an idea that many antirealists will, of course, not accept5 – Maxwell infers that the prior probability of realism is much higher than the prior probability of instrumentalism.

4The reference to prior probabilities underlines the difference from Putnam's NMA, as Wolfgang Spohn (personal correspondence) points out.

5This line of argument runs exactly opposite to the more popular Popper/van Fraassen inference from the fact that the probability of the observational consequences of any theory is at least as high as the probability of the theory itself, to the conclusion that instrumentalism is generally more probable than realism. I shall have more to say about this in chapter 7, where I shall reject this latter line of argument.
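A toy calculation makes the point vivid (the numbers are invented for illustration and are not Maxwell's): if one sets p(R) = 0.2 and p(I) = 0.05, then

p(R | S) / p(I | S) = 0.2/0.05 = 4.

Conditioning on the success of science preserves exactly the 4:1 ratio of the priors; since both views assign S a likelihood of unity, S cannot by itself favor either side – only the epistemic virtues, encoded in the priors, can.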

19 It will be seen that, in spite of their merits in undermining the credibility of ‘black box’ instrumentalism, Smart’s and Maxwell’s arguments are vulnerable when faced with more sophisticated versions of instrumentalism, such as van Fraassen’s constructive empiricism.

2.3 The argument from realism’s exclusive capacity to give causal explanations

My argument for scientific realism stands against instrumentalism and its brethren: phenomenalism, which claims that sentences about physical entities are equivalent in meaning to sentences about sensations; fictionalism and the philosophy of 'as-if', according to which theories or concepts can be reliably used without the need for the theories to be true, or for their terms to refer (they can serve as 'heuristic fictions' or 'regulative ideas', according to Hans Vaihinger); and constructive empiricism, which will be investigated in detail.

Let us proceed by supposing that a given theory T is empirically successful, that is to say, it makes accurate observational predictions. Why does everything happen as if T were true? As we have seen, T's realist supporter typically resorts to the following IBE: if T is a well-established theory, T is empirically successful because the entities it posits exist, and their properties are correctly described by T. Put differently, T's success is explained by T's truthlikeness.

In response, the instrumentalist typically advances the following counterargument: T's empirical success is not in need of any explanation. According to van Fraassen (1980) – whose constructive empiricism is an epistemic sort of instrumentalism – it is no wonder that scientific theories are successful, because they are the result of natural selection in the jungle of epistemic competition:

...science is a biological phenomenon, an activity by one kind of organism which facilitates its interaction with the environment. I claim that the success of current scientific theories is no miracle. It is not even surprising to the scientific (Darwinist) mind. For any scientific theory is born into a life of fierce competition, a jungle red in tooth and claw. Only the successful theories survive – the ones which in fact latched on to actual regularities in nature. (van Fraassen 1980: 39–40)

Van Fraassen urges us not to ask for the (approximate) truth of theories, but for their empirical adequacy (i.e., for the truth of their observable consequences): "Do not ask why the mouse runs from its enemy. Species which did not cope with their natural enemies no longer exist. That is why there are only ones who do." (van Fraassen 1980: 39). However, it is legitimate to ask why precisely this theory, and not a different one, has survived in the cruel epistemic jungle. That is, we want to identify some specific feature of the mouse which accounts for what made its behavior fit for survival. We want to know what causes the mouse to run from its enemy. Instead, all that van Fraassen tells us is that the mouse is a survivor because it runs from its enemy. This can barely satisfy our need for an explanation.

A different instrumentalist move is motivated by the pragmatist view that the concept of truth should be defined in terms of pragmatic usefulness. The thought – entertained by Fine (1991), among others – is that in order to explain the empirical success of science, we should not inflate the explanation with any features that go beyond the instrumental reliability of theories. Accordingly, the instrumentalist proposes that we replace truthlikeness with an epistemically weaker notion, like 'empirical adequacy' or 'pragmatic usefulness'. Nonetheless, in line with Niiniluoto (1999) and Psillos (1999), I argue that such explanatory strategies have the major inconvenience of not explaining the practical success of science at all. To clarify this, let us first write down the typical realist explanatory schema:

T is empirically successful, because T is truthlike.

Yet, here is what happens if we replace ‘truthlike’ with ‘empirically adequate’:6

T is empirically successful, because T is empirically adequate.

But empirical adequacy just means the truth of T's observational consequences, i.e. T's empirical success. Consequently, the above explanatory schema is nothing but an idle tautology:

T is empirically adequate, because T is empirically adequate.

Therefore, it appears that, following such a strategy, instrumentalism does not actually explain at all.

Now, to be more true to scientific practice, we ought also to take explicitly into account that, as a matter of fact, instrumentalist attitudes are quite often present in science. As Brian Ellis (1985) states, "scientific realists run into trouble when they try to generalize about scientific theories." They tend to make their cases with rather simple historical examples of causal explanations, which urge a realistic understanding – as does the case study of atomism, which we discuss in chapter 3.

6This is a slight adaptation of Niiniluoto's (1999: 197) explanatory sentences.

However, I argue here, in favor of scientific realism and against a general instrumentalist "reading" of theories, that there is a crucial respect in which the former systematically does better than the latter: it can give causal explanations. For the sake of notational simplicity, let us henceforth call T a realistically interpreted, well-established theory, and TI its instrumentalist restriction to observables. My contention is that, in general, TI cannot offer causal explanations. The argument will be that by systematically rejecting the commitment to the unobservables posited by T, TI frequently blocks our asking why-questions.

As Wesley Salmon (1984) puts it, "one obvious fact" about scientific explanation is its frequent appeal to unobservable entities.

We explain diseases in terms of microorganisms. ...We explain television transmission by appealing to electromagnetic waves which propagate through space. We invoke DNA to explain genetic phenomena. We explain human behavior in terms of neurophysiological processes or, sometimes, in terms of unconscious motives. (Salmon 1984: 206)

In line with this view, I seek to establish the thesis that unobservables are essential to the causal structure of the world. Using Salmon's nomenclature, the constituents of the world's causal structure are causal interactions, by which "modifications in structure are produced"; causal processes, by which "structure and order are propagated from one space-time region of the universe to other times and places" (1984: 179); and causal laws, which "govern the causal processes and interactions, providing regularities that characterize the evolution of causal processes and the modifications that result from causal interactions." (1984: 132).

What we typically observe are statistical correlations between events. In one of Salmon's examples, Adams and Baker are students who submitted virtually identical term papers in a course. Undoubtedly, the teacher will consider it highly improbable that the papers came out like that by pure chance. Instead he will countenance one of the following reasonable possibilities: "(1) Baker copied from Adams, (2) Adams copied from Baker, or (3) both copied from a common source." (Salmon 1984: 207). In other words,

There is either (1) a causal process running from Adams's production of the paper to Baker's, (2) a causal process running from Baker's production of the paper to Adams's, or (3) a common cause – for example, a paper in a fraternity file to which both Adams and Baker had access. In the case of this third alternative, there are two distinct causal processes running from the paper in the file to each of the two papers submitted by Adams and Baker, respectively. (Salmon 1984: 207)

Suppose it turns out that (3) is the case. We say then that there is an indirect causal relevance between the considered events: the common cause (the reproduction of the original paper) is connected through causal processes to each of the separate effects. Let T be the theory positing the relevant causal mechanisms. T thus explains causally the statistical correlations between events A and B. In the above example, A and B are the teacher's establishing that Adams and Baker have, respectively, submitted virtually identical papers. All the same, in this case TI (which rejects T's unobservable part) will do as well as T. Since both T and TI account for the correlations between the observable events in terms of observable interactions and observable causal processes, there is no reason not to take TI (instead of T) as the theory providing the right causal explanation. The same point applies when two events are directly causally relevant to each other, i.e. when A and B are connected by a causal process through which the causal influence is transmitted. This corresponds either to (1) or to (2) in the term-paper example.

However, there are many familiar circumstances under which TI's causal explanations clearly fail. Consider the following situation: "Someone threw a stone and broke the window." As TI's supporter would have it, it is perfectly all right to take the stone's being thrown as the cause, the motion of the stone through space as the causal process transmitting the causal influence, and the window's being broken as the effect, in a causal connection between observable events. So, after all, T seems to have no monopoly on causal explanations; TI can also explain causally. If this were the case, TI could explain everything that T can, and so would be a priori preferable on grounds of its ontological parsimony.

Nonetheless, this construal misunderstands the idea of explaining causally. In the window example, one may legitimately ask why a normal windowpane actually breaks when hit by a stone. T's advocate can (at least try to) locate an event C on the spatiotemporal line going from A to B, C consisting in the absorption of the stone's kinetic energy into the molecular structure of the glass. C screens off A from B, meaning that knowledge of C renders A and B statistically independent: p(A ∧ B | C) = p(A | C) · p(B | C), so that, once C is given, A carries no further information about B (section 3.2 will detail Salmon's statistical analysis). Obviously, C's description must include terminology referring to microphysical entities.

To express it differently, although causal explanations can in some cases be given merely by reference to observable events, they ought to be compatible with underlying causal mechanisms by which some conserved physical quantity is transmitted from the cause to the effect. From this perspective, causal explanations relying only on correlations between observable events, though often satisfactory for common purposes, are in fact mere fragments of more detailed descriptions, given in terms of unobservable causal interactions and of the transmission of causal influence through continuous spatiotemporal processes. Knowledge of these hidden mechanisms is inherent to scientific investigation. As Philip Dawid (2001) puts it,

such deeper understanding of [the hidden workings of our units] ... is vital for any study of inference about ‘causes of effects’, which has to take into account what has been learned, from , about the inner workings of the black box. (Dawid 2001: 60).7

By definition, TI's defender cannot present a causal process to parallel the one posited by T, since TI only talks of observables. Thus, TI's explanatory capability will not answer many of our legitimate why-questions. T's explanatory superiority over TI is thus reinstated.

Certainly, antirealists of the Humean tradition will reject causal talk altogether. By assuming the existence of causes, my argument seems actually to assume realism. My answer is straightforward: by taking scientific practice at face value, I also assume the legitimacy of causal talk. It is not among my purposes to settle here the question of causation.8

A different objection is that describing causal relations in terms of underlying unobservable mechanisms is question-begging within the debate about scientific realism. Although antirealists like van Fraassen occasionally turn to unobservables for explanatory purposes, they explicitly deny belief in such entities (cf. van Fraassen 1980: 151–2). That is, although van Fraassen turns to T's theoretical posits for purposes of pragmatic explanation, he does not accept T as true, but only as empirically adequate. He is agnostic about T's unobservables. Again, I shall not delve too much into the details of refuting this objection.

7In the same fragment, Dawid admits that probing into the hidden parts of the causal mechanisms is not necessary for assessing 'effects of causes', "which can proceed by an essentially 'black box' approach, simply modelling dependence of the response on whatever covariate information happens to be observed for the test unit." (Dawid 2001: 60)

8Note, however, that in the complex task of identifying causal structures from probabilistic relationships among events, supporters of causal talk are in good company. Judea Pearl (2000), for example, argues in great detail for the advantages of encoding knowledge in causal rather than probabilistic structures. I share with him the intuition that "probabilistic relationships, such as marginal and conditional independencies, may be helpful in hypothesizing initial causal structures from uncontrolled observations. However, once knowledge is cast in causal structure, those probabilistic relationships tend to be forgotten; whatever judgments people express about conditional independencies in a given domain are derived from the causal structure acquired. This explains why people feel confident asserting certain conditional independencies (e.g., that the price of beans in China is independent of the traffic in Los Angeles) having no idea whatsoever about the numerical probabilities involved (e.g., whether the price of beans will exceed $10 per bushel)." (Pearl 2000: 25)

I refer to Kukla's (1998) own argument – drawing on Friedman (1982), and to be discussed in detail in 4.8 – showing that it is inconsistent to use the language of T while bracketing (or plainly rejecting) T's unobservable part. The line of argument is that if one believes T's observable consequences, and if the existence of an entity X is among the observable consequences of T, then one must believe in the existence of X. Consequently, TI's defender who accepts T's unobservables for pragmatic purposes comes eventually to believe in (at least some) unobservables, thus contradicting TI itself. Let us summarize the steps of the argument:

(1) Well-established theories committed to unobservables, such as T, generally allow the formulation of frameworks for causal explanation, as well as for other forms of explanation – abstract-model, functional, and systematic.

(2) TI precludes the search for causal explanations appealing to unobservables.

(3) Causal explanations appealing to unobservables are essential to scientific investigation.

(4) Therefore, we should always prefer T to TI. T in fact accommodates all the above-enumerated sorts of explanatory frameworks, while TI bars, by definition, at least the possibility of explaining causally in terms of unobservables.

Let us now proceed by rejecting a few general criticisms against IBE, which is the pillar of the explanatory defence of realism.

2.4 Van Fraassen’s arguments against IBE

Bas van Fraassen (1984; 1989) has undertaken one of the most original attacks against IBE. From a Bayesian perspective, he maintains that the probability calculus and the requirement that beliefs be updated by conditionalization provide necessary and sufficient conditions for justified belief. He rejects any view that demands a supplementary ampliative rule placing qualitative constraints on the way in which we update our beliefs, and he explicitly views IBE as an example of such a rule. Inspired by Kvanvig (1994), we divide van Fraassen's attack on IBE into two parts, according to the following objections:

(I) IBE is context-dependent, so it probably leads to ‘the best explanation out of a bad lot’.

(II) IBE leads to inconsistency in one’s diachronic probability distribution.

25 Let us examine each of them in turn.

2.4.1 The context-dependency objection

Van Fraassen objects that, given that the best explanation is always selected out of a pool of already formulated hypotheses, we cannot warrant that the true hypothesis lies among them. Therefore IBE probably leads to 'the best of a bad lot'. Moreover, given that the pool of possible explanations is very wide (probably infinite, as the advocates of the underdetermination of theories by empirical data – a thesis which will be criticized in chapters 4 and 5 – would have it), and given that we cannot be sure about the truth-value of the best explanatory hypothesis, it follows that the probability of the latter is very low.

Yet, as Niiniluoto points out, a proper Bayesian construal of the problem offers a straightforward solution: "we always consider cognitive problems as sets of mutually exclusive and jointly exhaustive hypotheses." (1999: 188). In order to exhaust the universe of discourse, one of the hypotheses may be the so-called 'catch-all' hypothesis, that is, the negation of all other hypotheses.9 A situation may no doubt occur where none of the presented hypotheses qualifies as an acceptable explanation; it just may not make sense to apply IBE if its outcome has an unacceptably low probability. For such a case, Niiniluoto recommends the suspension of judgement as the most rational thing to do. In order to improve upon this situation, we need to enrich the basis of selection for the best explanation, either by acquiring new information or by introducing concepts more adequate to the object of our explanation.

It is also interesting to note, along with Psillos (1999: 224–225), that if valid, this objection of van Fraassen's would be problematic for his own philosophy of science, constructive empiricism. Constructive empiricism takes empirical adequacy – not approximate truth – as the goal of science: when it accepts a theory, it accepts it as empirically adequate, i.e. as true at the observable level. In order to establish the empirical adequacy of a theory, we have to examine an infinite set of empirically equivalent theories, and pick the empirically adequate one through an ampliative step from the finite evidence. Yet, assuming the validity of van Fraassen's objection, no such step can be authorized, so belief in the empirical adequacy of a theory cannot be warranted. In any event, this fact is not directly relevant to van Fraassen's epistemic criticism against IBE, which will now be pursued.
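To make the Bayesian construal explicit – the formula is standard Bayesian arithmetic, not Niiniluoto's own wording – suppose the cognitive problem consists of rival hypotheses H1, ..., Hn together with the catch-all H0 = ¬(H1 ∨ ... ∨ Hn). Conditionalizing on evidence E gives

p(Hi | E) = p(E | Hi) p(Hi) / [p(E | H1) p(H1) + ... + p(E | Hn) p(Hn) + p(E | H0) p(H0)].

Because the denominator reserves probability mass for the catch-all, the best of the formulated hypotheses can still receive a low posterior – exactly the situation in which Niiniluoto recommends suspending judgement.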

9The point has also been made by Lipton (1993).

2.4.2 The inconsistency objection

Van Fraassen (1984, 1989) sets out to demonstrate that the practice of giving a bonus to explanatory hypotheses leads one to accept bets which one is guaranteed to lose. His strategy is based on constructing a Dutch Book strategy – that is, a diachronic Dutch Book – against an agent who, in addition to the classical probability calculus, also adopts IBE as a procedure to update his beliefs.10 Let us explain the key terminology. A Dutch Book is a set of bets offered to an agent by a clever (though mischievous) bookie, bets which have the following characteristics: (i) the bookie who generates them only knows the agent's degrees of belief; (ii) each of the bets is accepted as fair by the agent; and (iii) the set of bets guarantees that the bookie will garner a net win. If an agent is liable to Dutch booking, then his degrees of belief violate the probability axioms.11 For that reason, a necessary condition for rational belief is taken to consist in an immunity to Dutch Books. Van Fraassen (1989) distinguishes between two varieties of Dutch Books: synchronic and diachronic. Synchronic Dutch Books are known as Dutch Book Arguments and consist of sets of bets offered simultaneously, at one moment in time. By contrast, diachronic Dutch Books – known as Dutch Book strategies – rely on the bookie's option to offer new bets at later moments. As already mentioned, being vulnerable to a Dutch Book Argument is equivalent to having degrees of belief that violate the axioms of the probability calculus; such an agent holds inconsistent beliefs, which is irrational. Yet, it will be shown that the same cannot be said about Dutch strategies: immunity from them is not required for rational belief. Van Fraassen criticizes IBE for making those who follow it liable to Dutch Book strategies. He (van Fraassen 1989: 166–9) proceeds by imagining a series of bets between Peter and his Bayesian friend, Paul, with respect to a die whose bias toward coming up ace is not known. Assuming that the first four tosses of the die come up ace (proposition E), the bookie (Paul) proposes the following bets concerning the hypothesis H that the fifth toss will also be ace:

Bet I pays $10,000 if E is true and H is false.
Bet II pays $1,300 if E is false.
Bet III pays $300 if E is true.

10More recently (1995), van Fraassen has ceased to rely on Dutch Books as a means to discuss rationality issues.

11As John Earman (1992: 39–40) points out, the Dutch Book theorem proves that if any of the probability axioms is violated, then a Dutch Book can be made. The Converse Dutch Book theorem proves that if the axioms are satisfied, then a Dutch Book cannot be made in a finite number of steps. Earman also indicates the difficulties with the Dutch Book justification of the probability axioms.


The bookie bases his proposal on his knowledge of the fact that Peter has learned IBE from a travelling Preacher (which is why van Fraassen calls IBE the "Preacher's Rule"; it says that the posterior probability of each hypothesis of bias depends not only on the initial probabilities and the series of outcomes, but also on their explanatory success). Thus, Paul knows that Peter will assign a higher value to the probability that the fifth toss will also come up ace than he would if he only followed Bayesian conditionalization. Peter assesses the fair costs of the bets on the basis of the outcomes' probabilities and the offered values. The model used to calculate initial probabilities introduces a factor X of bias, which can take N different forms: X(1), ..., X(N). If the die has bias X(I), then the probability of ace on any one toss is I/N. X(N) is the perfect bias, giving ace a probability (N/N) = 1 (cf. van Fraassen 1989: 163). In Peter's model, N = 10:

P(E) equals the average of (.1)^4, ..., (.9)^4, 1, that is, .25333.
P(¬E) is .74667.
P(E & ¬H) is the average of (.1)^4(.9), ..., (.9)^4(.1), 0, that is, .032505.

These probabilities, along with the values of the bets, give the following costs of the bets, which both Peter and the bookie consider to be fair:

The fair cost of bet I is $325.05.
The fair cost of bet II is $970.67.
The fair cost of bet III is $76.00.
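These figures are easy to verify. The following sketch (my own illustration in Python, used here purely as a calculator; it is not part of van Fraassen's text) recomputes the probabilities and fair costs from the uniform prior over the ten bias hypotheses:

    # Peter's model: bias hypotheses X(1), ..., X(10), each with prior 1/10;
    # under X(i), the probability of ace on any single toss is i/10.
    biases = [i / 10 for i in range(1, 11)]
    prior = 1 / len(biases)

    p_E = sum(prior * b**4 for b in biases)                      # 0.25333
    p_not_E = 1 - p_E                                            # 0.74667
    p_E_and_not_H = sum(prior * b**4 * (1 - b) for b in biases)  # 0.032505

    cost_I = 10000 * p_E_and_not_H       # $325.05
    cost_II = 1300 * p_not_E             # $970.67
    cost_III = 300 * p_E                 # $76.00
    total = cost_I + cost_II + cost_III  # $1,371.72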

The total cost of the bets is $1,371.72. Since he deems them fair, Peter buys them all from his friend. Now, van Fraassen continues, suppose that not all four tosses have come up ace, that is, E is false. So Peter loses bets I and III and wins bet II. In other words, Peter spends a total of $1,371.72 and receives $1,300. Hence the bookie makes a net gain of $71.72. If E has come out true, Peter loses bet II, but has already won bet III, so he gets $300 from the bookie. At this point, bet number I can be formulated as

Bet IV pays $10,000 if H is false.

The bookie proposes to buy this bet from Peter, who agrees to sell it for $1,000, because his probability that the next toss will be an ace is, as dictated by the Preacher's Rule, .9. The moment Paul pays $1,000 for bet IV, he can be happy: even before the fifth toss, he has a guaranteed gain. He paid $300 for

losing bet III and $1,000 for buying bet IV, i.e. $1,300, which ensures him a net gain of $71.72. So, van Fraassen concludes, as a belief-updating rule, IBE leads to incoherence:

What is really neat about this game is that even Peter could have figured out beforehand what would happen, if he was going to act on his new probabilities. He would have foreseen that by trading bets at fair value, by his own lights, he would be sure to lose $71.72 to his friend, come what may. Thus, by adopting the preacher’s rule, Peter has become incoherent – for even by his own lights, he is sabotaging himself. (van Fraassen 1989: 168–9)
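The bookkeeping behind this conclusion can be made fully explicit. The following continuation of the sketch above (again my reconstruction, not van Fraassen's own presentation) traces Peter's cash flows on both branches:

    outlay = 325.05 + 970.67 + 76.00      # Peter pays $1,371.72 for bets I-III

    # Branch 1: E turns out false. Only bet II pays Peter.
    peter_net_if_not_E = 1300 - outlay    # -71.72

    # Branch 2: E turns out true. Bet III pays Peter $300, bet II is lost,
    # and Peter sells bet IV (the live remainder of bet I) back to Paul for
    # $1,000 -- the fair price given his Preacher's-Rule probability of .9,
    # since 0.1 * $10,000 = $1,000. The $10,000 payoff then cancels out,
    # because Paul ends up holding both sides of that bet.
    peter_net_if_E = 300 + 1000 - outlay  # -71.72, whatever the fifth toss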

Let us now analyze van Fraassen’s argument against IBE. Kvanvig (1994) reconstructs it neatly:

(1) There exists a series of bets, described as bets I–IV above, with a cost as noted above, and that, if taken, guarantee a net loss no matter what happens.

(2) If one is a consistent follower of a particular ampliative rule, one regards this set of bets as fair.

(3) It is irrational to regard such a series of bets as fair.

(4) Therefore, it is irrational to be a consistent follower of the particular ampliative rule that implies that one regards all of the bets in question as fair.

(5) For any ampliative rule, there is a set of bets that are regarded as fair if one consistently follows that ampliative rule which constitutes a Dutch Book strategy.

(6) Therefore, it is irrational to be a consistent follower of any ampliative rule. (Kvanvig 1994: 331)

The weak point of this argument is premise (3). In fact, it is doubtful whether it is true. Moreover, as will be seen, contrary to van Fraassen's assumption, the bookie actually did cheat against Peter. Note that not all sets of bets which guarantee a net loss are irrational. To see this, let us consider two kinds of bookies with extraordinary powers, whom, inspired by Christensen, we shall term Super-bookies: (i) the Omniscient Super-bookie, and (ii) the Prescient Super-bookie.

(i) The Omniscient Super-bookie knows everything. In particular, he knows with certitude the truth-value of every proposition which can be made the object of a bet. When buying a bet from him, the only chance of a fallible human agent is to assign a probability of either 0 or 1 to any proposition, and to be correct about it. However, for fallible agents, such a doxastic practice would be irrational: rationality forbids ascribing extreme probability values to propositions across the board. Thus, our agent is sure to incur a loss. (ii) The Prescient Super-bookie is the one who has privileged information about the agent's probability distribution. More exactly, he knows not only the current probability distribution, but also what changes the agent will make in his degrees of belief over time. Here is what could happen if this were the case:

...suppose the prescient bookie knows that one’s probability for p today is .7, and also knows that tomorrow it will be .6. In such a case, a series of bets guaranteed to net one a loss is easy to construct: the bookie offers, and you accept, a bet today that pays $10 and costs .7($10) = $7, if p is true, and buys a bet from you that pays $10 if p is true, and costs .6($10) = $6. Then you and your bookie exchange a $10 bill whether or not p is true, resulting in a net profit for the bookie of $1. If this bookie knows your probability will be higher tomorrow than today, he employs the same strategy, this time offering and buying bets on the negation of p. (Kvanvig 1994: 332–3)

Therefore, the agent is again guaranteed a net loss if he buys bets from the Prescient Super-bookie. The only way to block such a Dutch Book strategy seems to be to obey a strong diachronic condition which precludes any doxastic modification whatsoever. Christensen labels it the "Calcification" condition. No doubt, Calcification cannot be a reasonable requirement. There are numerous cases in which rationality urges us to modify our credences. For that reason, as Christensen puts it, "we do not think that the beliefs an agent holds at different times should cohere in the way an agent's simultaneous beliefs should." (Christensen 1991: 241). A Dutch strategy would indicate the inconsistency of someone's set of beliefs only if those beliefs ought to be consistent. Often, in light of new relevant information, we do over time arrive rationally at beliefs contradictory to (not to mention different from) our earlier ones. It actually seems that the rational thing to do is simply not to bet against Super-bookies. If an agent did so in spite of information about the extraordinary powers of the bookie, then we would say that he acted stupidly. Rationality urges us to be suspicious about such bookies. The point is also expressed by Christensen:

There is, after all, no Evil Super-bookie constantly monitoring everyone's credences, with an eye to making Dutch Book against anyone who falls short of probabilistic perfection. Even if there were, many people would decline to bet at "fair odds", due to suspiciousness, or risk aversion, or religious scruples. In short, it is pretty clear that Dutch Book vulnerability is not, per se, a practical liability at all! (Christensen 1991: 237)

Kvanvig makes the important observation that Paul is in fact a kind of prescient bookie. What Paul knows is the ampliative rule which Peter follows. This also amounts to having some crucial knowledge about the future – knowledge which produced the set of bets that brought Peter a net loss. Had Peter followed a different ampliative rule from the one he actually followed, the set of bets offered by the bookie would have failed to be a Dutch strategy. For instance, as Kvanvig calculates, if, after finding out that E obtains, Peter had raised his probability of H not to .9 but to .88, then bet IV would have failed to assure the bookie a net win. Of course, as Kvanvig (1994: 347) points out, in that case a different set of bets could have been offered so as to ensure a net win for the bookie. But in order to generate this different set of bets, the bookie would have needed to know Peter's different ampliative rule. Since the Dutch strategy conceived by Peter's friend thus qualifies him as a Prescient Super-bookie, we may conclude that (3) is false: there is nothing about van Fraassen's Dutch Book to imply that Peter was irrational. What van Fraassen needs is a Dutch Book strategy which does not rely on privileged knowledge about the future decisions of the follower of the Preacher's Rule. Naturally, we should do justice to van Fraassen's remark that Peter is sabotaging himself even by his own lights. Yet the remark is not specific about what is needed in order to equate being subject to a Dutch strategy with irrationality. I believe it is transparent that the additional requirement on rationality that he had in mind is his principle of Reflection.12 Here is how he has formulated it:

To satisfy the principle, the agent’s present subjective probability for proposition A, on the supposition that his subjective probability for this proposition will equal r at some later time, must equal this same number. (van Fraassen 1984: 244)

12Here is where I believe that van Fraassen turned to the Reflection principle: in discussing the notion of rationality in a practical context, he states that "a minimal criterion of reasonableness is that you should not sabotage your possibilities of vindication beforehand." (van Fraassen 1989: 157). It emerges from the context that he has in mind an epistemic kind of vindication.

The principle can be formalized as follows:

Reflection: P0(A | P1(A) = r) = r,

where P0 is the agent's present probability function, and P1 her probability function at some future time. In plain words, the principle demands that "if I am asked how likely it is to rain tomorrow afternoon on the supposition that tomorrow morning I'll think rain 50% likely, my answer should be '50%'." (Christensen 1991: 232). Nonetheless, Christensen constructs several cases to show intuitively that rationality actually urges irreflectivity (i.e. violation of the Reflection principle) in van Fraassen's sense. He imagines, for example, that a psychedelic drug, which he calls LSQ, has the power to induce in those under its influence the strong conviction that they can fly. It can also be assumed that the drug does not diminish in any sense the capacity to reason and to understand. If the agent believes that he has just taken a dose, and someone asks him, "What do you think the probability is that you'll be able to fly in one hour, given that you'll then take the probability that you can fly to be .99?", he must, following the Reflection principle, answer ".99". Obviously, this is absurd. Our agent knows both the properties of LSQ and the fact that the drug cannot confer the power to fly. Therefore, rationality impels him to violate Reflection. Other common-sense examples of rationality requiring irreflectivity involve propositions such that my coming to have a high degree of belief in them would in itself tell against their truth. An example is the proposition that I have no degrees of belief greater than .90. Christensen then asks,

What credence should I accord this proposition, on the supposition that I come to believe it to degree .95? Reflection says “.95”; elementary probability theory says “0”. This seems to be a case where Reflection cannot be a “new requirement of rationality, in addition to the usual laws of probability” – it is inconsistent with those very laws. (Christensen 1991: 237)
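The '0' in this passage can be spelled out (my formalization, in the notation of the Reflection formula above). Let A be the proposition that none of my degrees of belief exceeds .90. On the supposition that P1(A) = .95, I have a degree of belief – namely, my belief in A itself – greater than .90, so A is false on that very supposition. Hence the probability calculus requires

P0(A | P1(A) = .95) = 0,

while Reflection requires that this same conditional probability be .95.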

On balance, I do not see so far how vulnerability to Dutch Book strategies generally implies irrationality. Finally, I follow Kvanvig in raising one last counterargument against van Fraassen's Dutch Book strategy. We saw, on the one hand, that when presented with bet IV, Peter assigned, in virtue of the Preacher's Rule, a probability of .9 to the fifth toss coming up ace. However, Peter applied this rule only after having learned that E is the case. That is to say, he calculated P(E), P(¬E), and P(E & ¬H) by the usual probability calculus. But he should have done the same with respect to the fifth toss. He should have calculated P(H | E) = P(H & E)/P(E), where P(H & E) was to be obtained as the average of (.1)^5, ..., (.9)^5, 1 (which is equal to .22). Given the already known value of P(E) = .25333, Peter would have obtained for P(H | E) the value .87. Instead, he applied the Preacher's Rule going into the fifth toss, which induced him to believe that the probability of getting a fifth ace is .9. The moral is properly formulated by Kvanvig:

Van Fraassen’s argument therefore trades on having Peter following IBE advice after learning E, but ignoring that advice prior to learning E. That, however, is unfair to IBE; if IBE is problematic, what must be shown is that a consistent application of an IBE strategy is subject to Dutch Book difficulties. (Kvanvig 1994: 338)

Peter lost because he committed himself to values inconsistent with those dictated by the Preacher's Rule. He should not have accepted bets whose costs had been generated by calculations other than the ampliative procedure that he usually followed. We can now summarize the counterarguments to van Fraassen's Dutch Book strategy against IBE. We saw first that there are clear cases of Dutch strategies where no irrationality is involved, namely when the agent bets against Super-bookies. We further saw that van Fraassen's Reflection principle, which he conceives as a consistency constraint on the agent's diachronic probability distribution, can and should in some situations be violated as a matter of rational behavior (e.g., the LSQ case and the case of "self-blocking" propositions). Finally, we have indicated that van Fraassen's argument against IBE is not exactly fair. This should be enough to warrant the conclusion that van Fraassen's Dutch strategy against IBE has missed its target.

2.5 Arguments against the ability of IBE to link empirical success with truthlikeness

Recall that Putnam’s ‘no miracle argument’ invokes the approximate truth of scientific theories as the best explanation for their empirical success. One of the most adamant critics of this argument is (1981; 1984; 1996). In his 1981 article, “A Confutation of Convergent Realism”, Laudan criticizes the following two theses, which he ascribes to realism:

(T1) If a theory is approximately true, then it will be explanatorily successful.

(T2) If a theory is explanatorily successful, then it is probably approximately true. (Laudan 1981: 30)

(T1) asserts that there is what Laudan calls a 'downward path' from approximate truth to empirical success, whereas (T2) asserts the existence of an 'upward path'. Laudan criticizes each of these two theses. Yet, we shall immediately see that his criticism is not justified.

2.5.1 The downward path

In arguing against the downward path, Laudan relies on the conviction that there is no acceptable notion of 'approximate truth'.

Virtually all the proponents of epistemic realism take it as unproblematic that if a theory were approximately true, it would deductively follow that the theory would be a relatively successful predictor and explainer of observable phenomena. Unfortunately, few of the writers of whom I am aware have defined what it means for a statement or theory to be 'approximately true'. Accordingly, it is impossible to say whether the alleged entailment is genuine. This reservation is more than perfunctory. Indeed, on the best account of what it means for a theory to be approximately true, it does not follow that an approximately true theory will be explanatorily successful. (Laudan 1981: 30–31)

By the ‘best known account of what it means for a theory to be approximately true’, Laudan means Popper’s account of verisimilitude, which he subsequently rejects.13 Laudan further criticizes Newton-Smith’s view that the concept of approximate truth can be legitimately invoked even in the absence of a ‘philosophically satisfactory analysis’. According to Laudan, the problem is that the intuitive notion of approximate truth lacks the minimal clarity needed to ensure that it would explain science’s empirical success. Finally, Laudan contends that even in the presence of an articulated se- mantic account of truthlikeness, the realist would have no epistemic access to it. He wouldn’t know whether his theory is actually truthlike or not. Nonethe- less, on the basis of Niiniluoto’s account of truthlikeness (see A.2), the answer to this point is straightforward: properly construed, truthlikeness has both a semantic and an epistemic component.

2.5.2 The upward path

Laudan declares himself ready to assume, for the sake of argument, the truth of (T1). He maintains that the truth of (T2) does not follow, i.e. that the explanatory success of a theory cannot be taken as a rational warrant for its approximate truth. To this purpose, he lists an impressive number of past theories which, although empirically successful, are nowadays known not to be approximately true:

13The same is done in the Appendix to this book, yet I also show there that there are clearly better accounts of verisimilitude than the Popperian one.


- the crystalline spheres of ancient and medieval astronomy;
- the humoral theory of medicine;
- the effluvial theory of static electricity;
- "catastrophist" geology, with its commitment to a (Noachian) deluge;
- the phlogiston theory of chemistry;
- the caloric theory of heat;
- the vibratory theory of heat;
- the vital force theories of physiology;
- the electromagnetic ether;
- the theory of circular inertia;
- theories of spontaneous generation.

The list, which could be extended ad nauseam, involves in every case a theory that was once successful and well confirmed, but which contained central terms that (we now believe) were nonreferring. (Laudan 1981: 33)

This list has provided a lot of work for realist philosophers. One of the replies has come from McAllister (1993). He argues that many theories of the past that were highly valued were neither approximately true nor empirically successful. McAllister denies that the past theories cited by Laudan had high degrees of empirical success. He presents his argument in terms of properties of theories (such as the "property of being mathematical, the property of according with the data of a certain set, and the property of being susceptible of concise formulation." (1993: 208)), and properties of properties of theories ("for instance, it may be a property of one possible property that different theories can possess it to different degrees, or that its presence in a theory is difficult to ascertain, or that it reveals itself in a theory only once the theory has been applied in the design of experiments." (1993: 208)). In particular, McAllister is interested in the properties diagnostic of high measures of a theory's empirical success, among which he situates consistency with known data, explanatory power, the ability to generate novel predictions, simplicity, etc. The discovery of the relevant properties of empirical success is itself a task gradually achieved by science, greatly relying on empirical research. Accordingly, science has ceased to value several properties of theories, such as consistency with the Bible, with energetism, and the like. Closer to contemporary science, the advent of quantum mechanics showed that determinism – a theoretical property intrinsic to the Newtonian paradigm – is not necessarily required as a property which theories must have in order to get closer to the truth. In the light of this, McAllister concludes that

...the judgments made in the remote past about the [empirical success] measures of theories are in general not as reliable as those which take account of the later discoveries about the properties of theories. ...Therefore, the theories deemed successful in the past were deemed to be so on the basis only of a set of criteria constructed in the light of imperfect knowledge about the properties of the properties of theories. (McAllister 1993: 211–2)

I agree that McAllister’s argument is able to block Laudan’s claim concerning a number of theories from his list. However, the argument in itself is problem- atic. First, it relies on a notion of empirical success (his term is ‘observational success’) which he defines as synonymous with ‘empirical adequacy’. But as already known, empirical adequacy consists in the truth of the empirical conse- quences derivable from a theory. Strict truth is certainly too strong a demand for empirical success. Second, the higher standards that contemporary science has imposed on the relevant properties of empirical success can be, if only incidentally, satisfied by some theories of the past. Thus, McAllister cannot exclude the eventuality that some of the theories on Laudan’s list are in fact successful even by today’s lights. To conclude, I believe that we should admit the possibility that some of Laudan’s theories survive McAllister’s argument. A different response to Laudan’s argument is given by (1993). He protests that Laudan’s argument “depends on painting with a very broad brush” (1993: 142), in the sense that Laudan’s examples, though admittedly empirically successful theories, are shown not to be approximately true by appeal to their idle or inessential parts. Here is Kitcher’s diagnosis of the examples from Laudan’s list:

Either the analysis is not sufficiently fine-grained to see that the sources of error are not involved in the apparent successes of past science, or there are flawed views about reference; in some instances both errors combine. (Kitcher 1993: 143)

To illustrate this, Kitcher focuses on a central example in Laudan's list, namely the electromagnetic ether in nineteenth-century optics. Laudan insists on the crucial role that the ether played in explaining the phenomena of reflection, refraction, interference, double refraction, diffraction, and polarization, as well as in making predictions as surprising as the bright spot at the center of the shadow of a circular disc, in Fresnel's approach (Laudan 1981: 27). This is precisely the point that Kitcher contests: the ether did not play a crucial role in nineteenth-century electromagnetic theories. Kitcher argues convincingly that in Fresnel's theory the ether was nothing but an idle part of a successful problem-solving schema employed for optical phenomena. In particular, Fresnel's problem-solving schema concerned questions of the form "What is the intensity of light received at point P?", whose answer involves Huygens's conception of the wavefront as a source of secondary propagation, and the method of integration over the entire length of the wavefront. This is a mathematical technique still employed by contemporary physics. By contrast, Fresnel's considerations about the constitution of transversal electromagnetic waves played practically no role in the success of his theory. In the terminology Kitcher proposes, the ether is a presuppositional posit within scientific practice, i.e. an entity that apparently has to exist if the instances of the schemata are to be true.14 This is to be contrasted with the working posits, i.e. with the putative referents of terms that occur in problem-solving schemata (Kitcher 1993: 149).

The ether is a prime example of a presuppositional posit, rarely employed in explanation or prediction, never subjected to empirical measurement (until, late in the century, A. A. Michelson devised his famous experiment to measure the velocity of the earth relative to the ether), yet seemingly required to exist if the claims about electromagnetic and light waves were to be true. The moral of Laudan's story is not that theoretical positing in general is untrustworthy, but that presuppositional posits are suspect. (Kitcher 1993: 149)

Therefore, as Kitcher concludes, a finer-grained approach to Laudan's list indicates that the theoretical terms essential to successful problem-solving schemata prove to be referential. More generally, as Kitcher states, an empirically successful theory is indeed approximately true, provided that its theoretical postulates are indispensable to the derivation of the empirical consequences. I take it that McAllister's and Kitcher's arguments – along with others, more or less successful15 – succeed in eliminating most of the theories on Laudan's list. With respect to the few of them which may survive these arguments, recall that a reasonable realist does not assume that no well-established theory has ever been later refuted. As already mentioned, modern scientific realism ought to be a selective doctrine, capable of coping with

14Kitcher’s notion of a presuppositional posit is thus clearly reminiscent of Vaihinger’s ‘useful fictions’. 15Worthwhile noticing are Harding and Rosenberg’s (1981), and Psillos’s (1999) arguments to the effect that non-referring theories can, nevertheless, be approximately true.

the fact that not all scientific theories are to be taken realistically (see chapter 7 for more considerations on a selective scientific realism). In light of all this, I conclude that Laudan's objections do not succeed in showing that it is illicit to associate approximate truth with empirical success. Accordingly, they cannot speak against the realist's right to rely on IBE in accounting for the empirical success of science.

2.6 The Success of Methodology Argument

Boyd’s abductive argument Let us begin with a bit of history of the philosophy of science. Twentieth century positivists asserted that the ultimate foundation of science consists in simple observation reports. Theoretical terms, which do not purport to refer to observables, were ascribed meanings by way of specifiable experimental procedures – known as operationalism. For instance, the meaning of ‘electric force’ was identified with the specific operations performed on the instruments in the lab in order to measure its occurrences. This serene way in which logical empiricism relied on observation reports was most influentially contended by (1962). Kuhn pointed out that observation is greatly dependent on theoretical background knowl- edge: observation is theory-laden. Kuhn’s view on the theory-ladenness of observation went so far as to suggest that observers with radically different theoretical backgrounds live in different worlds. He imagined Tycho Brahe and Kepler both observing the Sun. Kuhn claimed that given their opposed views about the nature of the solar system, they literally saw different things: while Kepler saw a stationary Sun around which the Earth revolves, Brahe saw the Sun rotating around the Earth. We’ll take a closer look at this case in chapter 4, where some excesses of Gestalt psychology will be discussed. Suf- fice it now to state that the theory-ladenness of observation entails that there is no neutral ground on which to perform crucial experiments. Accordingly, theory-ladenness makes theory- a theoretical matter.16 Richard Boyd (1981, 1984) has argued that theory-ladenness is not in the way of our endeavor toward knowing the true nature of the world. On the con- trary, he includes it as a crucial ingredient for an explanation of the method- ological success of science. The first step of his reasoning is the admission of the omnipresence of theory-ladenness in all aspects of scientific methodology.

16As we shall see in chapter 6, when discussing social constructivism, sociologists of science have taken over Kuhn's ideas in denying that nature can ever have a say in the theory-choice process. Instead, they have stressed the relevance of social and political factors which actually belong outside the laboratory.

Background theories are needed in order to formulate predictions, to devise relevant experimental set-ups, to assess evidence, and to choose between competing theories. Next, he takes note of the uncontroversial "instrumental reliability" of scientific methodology, meaning its success in leading to empirically successful theories. What is the explanation of this success? Boyd's answer to this question is as follows:

According to the distinctly realist account of scientific knowledge, the reliability of scientific method as a guide to (approximate) truth is to be explained only on the assumption that the theoretical tradition which defines our actual methodological principles reflects an approximately true account of the natural world. (Boyd 1984: 211)

According to Boyd, scientific methodology is based in a dialectical way on our theories, and those theories are approximately true. Thus, a method which employs, say, causal processes turns out to be correct because the theoretical descriptions of those causal processes are approximately true. No doubt, this is a good explanation for the fact that the method under consideration succeeds in generating the expected effects. Moreover, as the argument continues, most of the concrete situations in scientific life show this explanation to be the best one. Boyd accordingly takes the approximate truth of scientific theories to be the best explanation for the success of scientific methodology. As in the case of the explanation of empirical success, the methodological success of science is also explained by an IBE. Yet, the explananda of the two IBEs are different. Boyd's argument is a meta-IBE having the reliability of methodology as explanandum, and the approximate truth of the theories involved in this methodology as explanans. The first-order IBE is called upon, as we have seen, to explain the empirical success of scientific theories. Most objections against Boyd's explanation of the methodological success of science turn on some alleged deficiencies of IBE. It has been objected that several times in the history of science the appeal to IBE led to theoretical failures – we saw an example in subsection 2.5.2, when Kitcher's analysis of Fresnel's theory of waves was cited. But this is compatible with a reasonable, fallibilist realist doctrine. A bona fide scientific realist does not overlook the record of historical failures. However, he gains confidence from the fact that most of the time IBE has been used successfully in science. As Psillos indicates, the fact that I failed to find my keys does not show that searching for them by retracing my steps is an unreliable method for finding lost keys (Psillos 1999: 80). More serious refutations are attempted by Arthur Fine (1984, 1986, 1991), one of the most insistent critics of abductive reasoning. It is worthwhile taking them in turn, since some of them have found many supporters among the antirealists.

2.6.1 Fine against IBE

I. The circularity objection

The main criticism that Fine raises against the IBE-based arguments of the realist is that these are viciously circular. The accusation extends, of course, to both first-level IBE and meta-IBE. First, in explaining the empirical success of science, realists typically use IBE to infer the approximate truth of theories. But as Fine contends, there could be an instrumentalist inference to the best explanation – not to the approximate truth, but to the instrumental reliability of theories. Thus, as the objection goes, a realist IBE begs the question of approximate truth versus instrumental reliability; approximate truth can be derived only insofar as it is presupposed in the argument's premises. Second, with respect to the explanation of the methodological success of science, the premises of Boyd's argument are the theory-ladenness of scientific methodology and its indisputable instrumental success. By meta-IBE (i.e. IBE at the methodological level), Boyd takes us from science's methodological success to the approximate truth of the theories involved in methodology. This dialectical intertwinement of theory and methodology further explains how we come to possess approximately true theories. Since these have been acquired by first-order IBE, it follows that IBE is reliable. Thus, the conclusion of an IBE-reasoning demands prior reliance on IBE's dependability. In light of this, Fine concludes that the realist commits the fallacy of assuming "the validity of a principle whose validity is itself under debate". (Fine 1986a: 161). In reply to this criticism, it is useful to begin with some considerations on the nature of circular arguments. The term circular applies to an argument in which the content of a premise is identical with the content of the conclusion. Following Psillos (1999: 81–82), we note that the mere identity of a premise with the conclusion is not sufficient for the circularity to be vicious. An argument is viciously circular when, apart from the identity of one premise with the conclusion, the premise is also a ground for deriving the conclusion. That amounts to saying that the conclusion itself figures among the reasons for its derivation. But this is not the case with familiar inferences of the type 'a & b, therefore b & a'. This is, of course, a circular argument, yet not a vicious one, since the conclusion expresses the commutativity of logical conjunction without assuming it in the premise. As Psillos recalls, Braithwaite (1953) distinguished between premise-circular and rule-circular arguments. The former are just another name for viciously circular arguments: they typically claim to prove the truth of sentences which in fact are presupposed in the premises. The latter have conclusions drawn by means of inference rules whose reliability is itself asserted in the conclusions. According to Braithwaite (1953: 274–8), a rule-circular argument is not vicious – neither is its conclusion one of its premises, nor is the argument such that one of the grounds offered for the truth of the conclusion is the conclusion itself. We can now argue against Fine's objection concerning IBE for the empirical success of theories. Fine contends that the empirical success of science can be accounted for by means of an instrumentalist IBE, i.e. an inference to the instrumental reliability of scientific theories. Thus, the realist IBE, by which the approximate truth of theories is concluded, is unwarranted. The idea is that if you can do the same with less, then it is irrational to do it with more. The realist, however, does not rely on any a priori warrant for the theories' approximate truth. The latter emerges as an empirical matter of fact. The favored hypothesis surpasses its rivals because of the overwhelming explanatory superiority of the causal mechanisms on which it draws. Certainly, the realist admits that IBE sometimes leads to false conclusions. But the admission of this fallibility would be pointless if, as Fine would have it, the realist reconstructed the history of science by assuming without argument that each successful theory was approximately true. It can be concluded that the realist's first-order IBE (of the form of Putnam's 'no miracle argument') is not premise-circular. What about Boyd's explanationist17 argument? Is it premise-circular? I do not think so. The reason is that the best explanation of the methodological success of science consists in the approximate truth of the background theories. The approximate truth of these theories is not assumed in the premises of the argument, but emerges in its conclusion. The conclusion is derived as a matter of empirical fact, by examining the fair competition of this explanation with other contenders. We saw that antirealists suggest different explanations for the methodological reliability of science. Van Fraassen (1980), for instance, maintains that scientific methodology is the outcome of an evolutionary process in which only those methods become established which have survived the 'epistemic jungle.' However, we have seen that such an answer cannot be accepted as a satisfactory explanation. Scientific realism allows the question 'why?' to be legitimately asked in situations where its contenders of the instrumentalist ilk do not. The best answer resides in the approximate truth of descriptions of entities and

17Boyd’s defence of realism is sometimes called explanationist because it is based on the claim that the realist thesis that scientific theories are approximately true is the best expla- nation of their empirical success.

processes that include the unobservable realm. It can thus be concluded that Boyd's argument is not premise-circular, either. Let us now check the realist's IBE with respect to rule-circularity. A brief examination indicates that realist IBE is rule-circular, concerning both the explanation of the empirical success and the explanation of the methodological success of science. If we ask, 'By what means is science methodologically so successful?', then given methodology's theory-ladenness, the best answer is that the background theories involved in methodology are approximately true. This conclusion has been established by means of a meta-IBE. In their turn, those approximately true theories were arrived at by means of first-order inferences to the best explanation. Therefore, meta-IBE relies on the reliability of first-order IBE. The reasoning also applies when starting with first-order IBE: the success of science is best explained by the approximate truth of scientific theories. The success of science in constructing approximately true theories is best explained by the reliability of scientific methodology. Given the theory-ladenness of scientific methodology and the intertwinement between theory and methodology, first-order IBE relies on the dependability of meta-IBE. Thus, the conclusions of either of the realist IBE-based arguments are drawn from premises established by means of IBE, too. What is needed is not any prior knowledge of IBE's reliability, but only IBE's being reliable. This takes us to a more general epistemological debate, namely the one between internalism and externalism.18 Alvin Goldman's (1986) reliabilism, a well-known version of externalism, turns out to be a great help for the realist. Reliabilism about justification asserts that a belief is justified if and only if it is the outcome of a reliable psychological process, that is, a process that produces a high proportion of truths. Clearly, random guessing does not produce a high proportion of truths. Not so with visual observation under normal circumstances: experience shows that it tends to deliver true statements about the environment. Accordingly, statements delivered by random guess are not justified, whereas statements yielded by visual observation typically are. The realist claims that IBE is a reliable psychological process whose reliability has been abundantly verified in day-to-day life, as well as in science. In particular, IBE has led scientists to produce a high proportion of approximately true statements. On the issue of IBE's rule-circularity, the insight offered by reliabilism is the following: the relevant fact for the correctness of the conclusion of an IBE-instance is whether IBE is reliable, irrespective of whether we know that or not. Hence, we are not urged to defend IBE in order to apply it successfully.

18Typically, internalism holds that only what is easily accessible to the subject (his internal states) may have a bearing on justification. Externalism removes this accessibility constraint: the justification of beliefs may depend on more than the internal states of the epistemic subject.

This is the reason why, from a reliabilist perspective, the circularity of IBE-based realist arguments is not of a vicious sort. To be sure, reliabilism is not an uncontroversial epistemology. Its most fundamental problem seems to consist precisely in the severance of reliability from epistemic justification. As Pollock and Cruz (1999) point out, justification comes from the correct reasoning of the believer: "If one makes all the right epistemic moves, then one is justified regardless of whether her belief is false or nature conspires to make such reasoning unreliable." (1999: 113). There is also the internalist idea that we can reply to sceptical doubts about the possibility of knowledge or justified beliefs only insofar as we rely on the resources of reflection, without assuming anything which feeds sceptical doubts (e.g. the external world). These are intuitions which indeed press reliabilism toward considerable refinement. I shall not delve any further into this debate. In any event, whether the rule-circularity of IBE-based arguments is vicious depends on the theory of justification one adopts – a point also made by Psillos (1999: 85). Regardless of reliabilism's fate, there are some clear cases where rule-circularity is not vicious. In psychology, we cannot inspect the reliability of memory without relying on memory. In logic, it seems that one cannot prove modus ponens without ultimately making use of modus ponens. Inductive inferences cannot be vindicated without appeal to induction. Naturally, we won't say that all these cases instantiate viciously circular rules. They rather bring to mind Neurath's metaphor of rebuilding a boat while floating on the sea. That concerns the antirealists as well, for their inferential procedures are not entirely free of rule-circularity. The moral is aptly formulated by Psillos (1999), drawing on Carnap (1968):

In one sense, no inferential rule carries an absolute rational compulsion, unless it rests on a framework of intuitions and dispositions which take for granted the reliability of this rule (truth preservation in the case of deductive reasoning, learning from experience in the case of inductive reasoning, searching for explanations in the case of abductive reasoning). (Psillos 1999: 88–9)

Accordingly, the realist IBE-based arguments are no worse than the arguments of their critics. Therefore, the rule-circularity of IBE cannot be deemed vicious, unless we raise our justification standards so high that no epistemic procedure can meet them. Consequently, Fine's circularity objection fails.

II. The stringency objection

In a further argument, Fine urges that a philosophy of scientific thinking must employ methods more stringent than those of scientific practice. He takes inspiration from Hilbert's program of proving the consistency of mathematical theories only by methods which are not employed in the theories themselves.

Metatheoretic arguments must satisfy more stringent requirements than those placed on the arguments used by the theory in question, for otherwise the significance of reasoning about the theory is simply moot. I think this maxim applies with particular force to the discussion of realism. (Fine 1986: 114)

It is certainly relevant that Hilbert's mathematical program failed. The coup de grâce was provided by Gödel's second incompleteness theorem, which states that if a given comprehensive formal system is consistent, then it cannot prove its own consistency. It was by all means unreasonable to require a philosophy of science to live up to standards of rigor which mathematics itself fails to satisfy. Therefore, I do not believe that this criticism of Fine's is to be taken seriously.

III. The pragmatic objection

Finally, Fine contends that instrumentalism can explain everything realism can (including the empirical and methodological success of science). He formulates this thesis as a metatheorem:

Metatheorem I. If the phenomena to be explained are realist-laden, then to every good realist explanation there corresponds a better instrumentalist one. (Fine 1986a: 154)

His argument relies on substituting a pragmatic conception of truth for the realist one in the realist's explanations. With respect to IBE, Fine counters the realist's IBE with an instrumentalist one, which, as already seen, takes the instrumental reliability of a theory – not its approximate truth – as the best explanation for its empirical success. However, we saw in subsection 2.1.3 that this substitution leads to tautologies, and as such does not explain at all. Moreover, we have shown in detail that realism has merits that instrumentalism cannot equal. In particular, realism can offer causal explanations, while instrumentalism either rejects them altogether or is content with superficial accounts in observable terms. Accordingly, Fine's criticisms of the IBE-based realist arguments are harmless.

Chapter 3

The Experimental Argument for Entity Realism

The experimental argument emerges in a natural manner from the paradigmatic story of atoms, entities that evolved from the status of useful figments of the imagination to that of real and unassailable constituents of the material world. Section 3.1 relates, in fair detail, some aspects of the atomic story. We shall then pass on to a more philosophical register in section 3.2, at the end of which we shall present the entire argument succinctly.

3.1 Atoms – from fictions to entities

The Daltonian theory

At the beginning of the nineteenth century, John Dalton imparted new life to the ancient Greek atomic philosophy by turning it into a modern scientific theory. In his work New System of Chemical Philosophy (part I, 1808; part II, 1810), Dalton turned to the notion of atoms to provide a physical picture of how elements combine to form compounds.1 The phenomenon he set out to explain was the combination of elements in fixed proportions: the combination of, say, oxygen and hydrogen always takes place in a proportion of seven to one by weight, irrespective of the proportion in which the elements are mixed. Dalton suggested that associated with each chemical element is a kind of atom. When chemical elements react together to form new compounds, what occurs is that the atoms of the elements cluster together to form molecules.

1Although Dalton called his theory 'modern' to differentiate it from Democritus's philosophy, he retained the Greek term atom to honor the ancients. In fact, as Nye (1976: 247) points out, Dalton's conception of specific elementary atoms differing in weight has more affinity with the Aristotelian-Averroistic notion of minima naturalia than with the qualitatively identical Democritean atoms.

If the compounds consist of identical clusters of this sort, then one would expect the elements to react together in a fixed proportion, namely the proportion of the total masses of the different kinds of atom in the molecule. The empirical success of Dalton's hypothesis quickly became evident. However, in spite of acknowledging its usefulness, many of Dalton's contemporaries rejected the ontological implications of the theoretical mechanism posited by Dalton. Benjamin Brodie, for example, maintained that since everything a chemist could observe is the reacting together of certain quantities of substance, the chemist ought to confine his reasoning strictly to the observable realm (see Brock and Knight, 1967). Faithful to his empiricist credo, Brodie created a system of rules to translate chemical equations into algebraic ones, which could be solved without any consideration of the unobservable properties of the substances involved. As Alexander Bird (1999: 123) notes, most of Brodie's contemporaries actually found it more convenient to take Dalton's hypothesis as a heuristic model than to try to make out Brodie's sophisticated algebra. Nonetheless, they agreed that the observational evidence could not ground belief in the physical existence of atoms. Among the philosophers of the time, Auguste Comte identified the scientific spirit with a cogent empiricist attitude. He regarded causes and hypotheses about hidden entities and mechanisms – including the atomic hypothesis – as vestiges of a metaphysical state of thought close to theology:

What scientific use can there be in fantastic notions about fluids and imaginary ethers, which are to account for phenomena of heat, light, electricity and magnetism? Such a mixture of facts and dreams can only vitiate the essential ideas of physics, cause endless controversy, and in- volve the science itself in the disgust which the wise must feel at such proceedings. (Comte 1913: 243)

Obviously, at that stage in the development of atomic theory, the atomic hypothesis played only an explanatory role. The existence of atoms and of a mechanism explaining the law of definite proportions was inferred as a matter of good explanation for the phenomenon of combination in fixed proportions. However, this phenomenon could easily be saved without appeal to any hidden mechanism whatsoever, or by appeal to alternative mechanisms.2

2It is interesting to mention that Dalton also became committed to the faulty assumption that the simplest hypothesis about atomic combinations was true: he maintained that the molecules of an element would always be single atoms. Thus, if two elements form only one compound, he believed that one atom of one element combined with one atom of the other element. For example, describing the formation of water, he said that one atom of hydrogen and one of oxygen would combine to form HO instead of H2O. Dalton's mistaken belief that atoms join together by attractive forces was accepted and formed the basis of most of 19th-century chemistry. Nonetheless, as long as scientists worked with masses as ratios, a consistent and empirically successful chemistry could be developed, because they did not need to know whether the atoms were separate or joined together as molecules. Therefore, even if the hypothesis of the atoms' existence turned out to be correct, the mechanism posited by Dalton was rather inaccurate. In fact, the history of 19th-century chemistry is replete with such mistakes perpetrated by dominant characters of the discipline. Another example is the incorrect hypothesis of Jöns Jacob Berzelius according to which all atoms of a similar element repel each other because they have the same electric charge. Berzelius's immense prestige blocked for more than fifty years Avogadro's (1811) hypothesis, which claimed that atoms of elemental gases may be joined together rather than existing as separate atoms.

As Mary Jo Nye (1976) has documented, considerable experimental evidence against the Daltonian atomic hypothesis accumulated from the study of specific heats and from spectroscopic studies. These facts pointed toward a complex internal structure of atoms. Indeed, around the 1860s, the spatial conception of molecules and atoms grew in importance in chemistry. Apart from that, a strong positivist criticism of the atomic hypothesis arose from supporters of phenomenological thermodynamics (Ostwald, Berthelot, Duhem, and others), who condemned it not only for losing its heuristic value, but also for useless commitment to metaphysics. We shall not follow this winding path here. Instead, let us confine this presentation to the study of a phenomenon that proved to be of decisive importance for the establishment of atomism.

The Brownian motion

A great victory of modern atomic theory was the explanation of 'Brownian motion', a phenomenon first observed in 1827 by the Scottish botanist Robert Brown: fine grains of pollen suspended in water manifest an erratic, continuous movement. Early explanations attributed this movement to thermal convection currents in the fluid. Nevertheless, when observation showed that nearby particles exhibited totally uncorrelated activity, this path was abandoned. A promising approach arose no earlier than the 1860s, when the kinetic theory of matter was rediscovered3 by Joule and developed mathematically by Maxwell, Boltzmann, and Clausius. Theoretical physicists had become interested in Brownian motion and were searching for an explanation of its characteristics – the particles appeared equally likely to move in any direction; further motion seemed totally unrelated to past motion; the motion never stopped; small particle size and low viscosity of the surrounding fluid resulted in faster motion.

3The kinetic theory, which relates the individual motion of particles to the mechanical and thermal properties of gases, was proposed in 1738 by Daniel Bernoulli. In spite of its accuracy, the theory remained practically unknown for almost a century. Among the reasons were the popularity of chemistry, where the phenomena were accommodated by different (phenomenological) models; the dominance of the caloric theory of heat, which lasted till the mid-1850s; and the reputation of Newton's theory of gases, which claimed that gas atoms repel each other. The theory was revitalized at the beginning of the nineteenth century by Herapath (1820) and later on by Waterston (1845), but it gained acceptance in the scientific community one decade later, through the studies of Joule.

The suggestion of the kinetic theorists was that the cause of the Brownian motion lay in the 'thermal molecular motion in the liquid environment.' According to the theory, the temperature of a substance is proportional to the average kinetic energy with which the molecules of the substance are moving or vibrating. It was therefore natural to infer that this energy might be imparted to larger particles which could be observed under the microscope. In fact, this line of reasoning led Albert Einstein in 1905 to bring forward his quantitative theory of Brownian motion. Using statistical mechanics, Einstein proved that for such a microscopic particle the random difference between the pressure of molecular bombardment on two opposite sides would cause it constantly to wobble back and forth. A smaller particle, a less viscous fluid, and a higher temperature would each increase the amount of motion one could expect to observe. Drawing on such considerations, Einstein established a quantitative formula for the probability that a particle moves a certain distance in any direction. In 1908, Jean-Baptiste Perrin used Brownian motion to determine Avogadro's number (that is, the number of molecules in a mole, defined as the number of atoms in 12 grams of the carbon isotope 12C). In order to measure the vertical distribution of Brownian particles, Perrin made an analogy between the tiny grains in water and the molecules in the atmosphere: the lower density of the air at high altitudes comes from the balance between the gravitational force, pulling the molecules down, and their thermal agitation, forcing them up. With the aid of an ultramicroscope, Perrin laboriously counted particles of gamboge at different altitudes in his water sample. He estimated the size of water molecules and atoms as well as their quantity, thereby confirming Einstein's equation. This was the first time that the size of atoms and molecules could be reliably calculated from actual visual observations. It thereafter became almost unanimously accepted among scientists that Perrin's work raised the status of atoms from useful fictions to observable entities.
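The text does not display Einstein's formula; for orientation, his 1905 result is standardly written as follows (a textbook reconstruction, not a quotation from Einstein or Perrin):

    ⟨x²⟩ = 2Dt,  with  D = RT / (N_A · 6πηa),

where ⟨x²⟩ is the mean squared displacement of a suspended particle of radius a during time t, η is the viscosity of the fluid, R the gas constant, and T the absolute temperature. Everything in these relations except Avogadro's number N_A is measurable, so observing the displacements of Brownian grains yields a value for N_A – which is what Perrin's translational-displacement measurements exploited.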

Perrin's Les Atomes

In his 1913 book Les Atomes, Perrin listed, beside his own determinations of Avogadro's number by three methods based on Brownian motion (namely, on the already mentioned vertical distribution of Brownian particles, as well as on their translational and rotational movements), ten other distinct, previously used methods (Perrin 1913: 161). These included measurements of alpha decay, of

X-ray diffraction, of black-body radiation, and of electrochemistry. As will be seen, the decisive argument for the reality of atoms came from this variety of independent determinations of Avogadro's number: the existence of an impressive number of accurate, mutually independent determinations of the same magnitude proved decisive for the adoption of the belief in the existence of atoms. Psillos (2000: 21), among others, registers Poincaré's enthusiastic declaration of conversion to atomism:

The brilliant determinations of the number of atoms by Mr Perrin have completed the triumph of atomicism. What makes it all the more convincing are the multiple correspondences between results obtained by totally different processes. Not too long ago, we would have considered ourselves fortunate if the numbers thus derived had contained the same number of digits. We would not even have required that the first figure be the same; this first figure is now determined; and what is remarkable is that the most diverse properties of the atom have been considered. In the processes derived from the Brownian movement or in those in which the law of radiation is invoked, not the atoms have been counted directly, but degrees of freedom. In the one in which we use the blue of the sky, the mechanical properties of the atoms no longer come into play; they are considered as causes of optical discontinuity. Finally, when radium is used, it is the emission of projectiles that is counted. We have arrived at such a point that, if there had been any discordances, we would not have been puzzled as to how to explain them; but fortunately there have not been any. The atom of the chemist is now a reality. . . (Poincaré 1913 [1963]: 91)

Perrin himself expressed even more precisely the weight to be placed on the variety of methods:

Our wonder is aroused at the very remarkable agreement found between values derived from the consideration of such widely different phenomena. Seeing that not only is the same magnitude obtained by each method when the conditions under which it is applied are varied as much as possible, but that the numbers thus established also agree among themselves, without discrepancy, for all the methods employed, the existence of the molecule is given a probability bordering on certainty. (Perrin 1913: 215-216)

It appears impossible to understand this "remarkable agreement" without embracing the belief that atoms exist. What else but a cosmic coincidence could make all the predictions of the atomic theories turn out accurate if no atoms existed?

3.2 The common cause principle

Wesley Salmon (1984; 1997a) convincingly argues that Perrin's argument for the existence of atoms can be seen as an instance of the common cause principle. Following the line of Russell and Reichenbach, Salmon defines it as follows: "When apparently unconnected events occur in conjunction more frequently than would be expected if they were independent, then assume that there is a common cause." (Salmon 1997a: 110). The idea behind the principle is fairly intuitive: if, for instance, all the light bulbs in a hall go off at the same moment, we don't conclude that they all burned out at once – that would indeed be an unbelievable coincidence – but rather that there is a problem with their common electricity source. Likewise, it would be an incredible coincidence if all the independent determinations listed by Perrin delivered identical results.

Let us take a closer look at the formal features of the common cause principle, as formulated by Salmon (1984; 1997a), drawing quite often on Reichenbach (1956). Consider two types of events, A and B, occurring, respectively, with probabilities P (A) and P (B). We say that A and B are statistically independent if and only if the probability of their joint occurrence, P (A & B), is simply the product of the probabilities of their individual occurrences:

P (A & B) = P (A) · P (B).

If, on the contrary, P (A & B) differs from the product of the individual probabilities, then A and B are statistically relevant to one another. We usually expect events which are statistically relevant to one another to also be explanatorily relevant to one another. Nonetheless, there are many cases in which statistical correlations are not straightforwardly explanatory. For instance, the falling of the barometer does not explain an incoming storm, nor does the latter, in its turn, explain the former. In fact, we expect that both events are caused by a third one, namely, specific meteorological conditions. In other words, both the fall of the barometer and the storm are explained by a common cause. In general, if A and B are two types of events statistically relevant to one another, an event C is their common cause if the following conditions formulated by Reichenbach are fulfilled (cf. Salmon 1984: 224):

P (A & B | C) = P (A | C) · P (B | C)    (1)
P (A & B | ¬C) = P (A | ¬C) · P (B | ¬C)    (2)
P (A | C) > P (A | ¬C)    (3)
P (B | C) > P (B | ¬C)    (4)

These formulas can be applied to arbitrarily many types of statistically related events – in particular, to all thirteen determinations of Avogadro's number cited by Perrin. However, for simplicity, we'll follow Salmon in confining their application to only two of the methods: Brownian motion and alpha radiation. Accordingly, let A be the measurements based on Brownian motion, and B the measurements based on alpha decay. A and B yield values for Avogadro's number lying within the range of acceptability of Perrin's listed methods (i.e. between 4 × 10²³ and 8 × 10²³). Next, let C stand for the experiments actually conducted under the specification of initial conditions. For the experiments on alpha radiation, C "would involve specification of the correct atomic weight for helium and the correct decay rate for the radioactive source of alpha particles. A mistake on either of the scores would provide us with an instance of ¬C." (Salmon 1984: 224). As to the Brownian motion experiments, accurate values for the size and density of the gum mastic particles are presupposed, as well as for the density of the liquid.

Given that these two sets of experiments are completely different, (1) is satisfied. If they were conducted under incorrectly assigned initial conditions, it would be reasonable to admit that any identity of results is just a matter of coincidence. Since there is no influence of one lucky correct result upon the other lucky correct one, (2) seems to be satisfied as well. Further, the chances of correlated experimental results are surely greater when the initial conditions are satisfied than when the results are obtained under initial conditions from which they should not have been expected to arise. Hence, (3) and (4) are also satisfied. So, the relations (1)–(4) seem to be satisfied in a fairly straightforward manner by our case. This is indeed a strong reason to accept that the high statistical correlation between A and B arises out of the special set of background conditions designated by C.

There are other interesting formal properties of the Reichenbach-Salmon analysis of common cause. It follows from the above definition of statistical independence that an event A is statistically relevant to an event B if and only if P (B) ≠ P (B | A). In case of positive statistical relevance, we obtain

P (B | A) > P (B) and P (A | B) > P (A).

A simple calculation shows us further that

P (A & B) > P (A) · P (B).
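Spelled out – the step is elementary, using only the definition of conditional probability:

\[
P(A \,\&\, B) \;=\; P(B \mid A) \cdot P(A) \;>\; P(B) \cdot P(A).
\]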

As we have seen, in order to explain this improbable coincidence, we look for a common cause C, so that

P (A & B | C) = P (A | C) · P (B | C).

This last relation informs us that, given C, A and B are again rendered statistically independent. This is the property of screening off, which proves to be essential in distinguishing genuine causal processes from pseudo-causal processes. In Salmon's words, "the statistical dependency is swallowed up in the relation of causal relevance of C to A and C to B." (Salmon 1997a: 112). To say that the common cause C screens off event B from event A means that in the presence of C, B becomes statistically irrelevant to A:

P (A | B & C) = P (A | C). (5)
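A minimal numerical sketch may help fix ideas; the probabilities below are invented for illustration and satisfy conditions (1)–(4) by construction, so the script exhibits both the unconditional correlation of A and B and the screening-off relation (5):

```python
# Toy illustration of the common cause conditions (1)-(4) and screening off (5).
# All probability values are invented for illustration.

p_c = 0.8                            # P(C): correct initial conditions
p_a_c, p_a_notc = 0.9, 0.1           # P(A|C) > P(A|~C) -- condition (3)
p_b_c, p_b_notc = 0.9, 0.1           # P(B|C) > P(B|~C) -- condition (4)

# Conditions (1) and (2): A and B are independent conditional on C and on ~C
p_ab_c = p_a_c * p_b_c
p_ab_notc = p_a_notc * p_b_notc

# Unconditional probabilities, by the law of total probability
p_a = p_a_c * p_c + p_a_notc * (1 - p_c)
p_b = p_b_c * p_c + p_b_notc * (1 - p_c)
p_ab = p_ab_c * p_c + p_ab_notc * (1 - p_c)

print(p_ab, p_a * p_b)        # 0.65 vs 0.5476: A and B are correlated
print(p_ab_c / p_b_c, p_a_c)  # 0.9 = 0.9: given C, B is irrelevant to A -- (5)
```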

Now, van Fraassen (1980: 111–126) is among the most tenacious critics of Salmon's account of causal explanation. His first objection is that the determination of Avogadro's number by different observations does not point to an event C causing both A and B:

. . . physics can explain this equality [of the determinations of Avogadro's number] by deducing it from the basic theories governing both sorts of phenomena. The common cause Salmon identifies here is the basic mechanism – atomic and molecular structure – postulated to account for these phenomena. (van Fraassen 1980: 123)

So, van Fraassen maintains, the commonality of A and B comes from the fact that both physical theories employ the same 'basic mechanisms', and not from any underlying and determinable common cause. In terms of the conditions (1)–(4) above, this objection concerns (2), the possibility that we may be mistaken about our theoretical assumptions. We assumed with respect to both A and B that the working substance consists of atoms and molecules; yet, as the objection goes, this assumption may be false.

As a matter of fact, Salmon (1984: 225) took this contention into consideration. His answer is that our theoretical hypothesis puts us in a position to assert what the outcome would be were the hypothesis wrong. Thus, we could counterfactually ascribe values to P (¬C) in equality (2). In the case of Brownian motion, it is imaginable that water is a continuous medium in which the chaotic movement of particles is the result of internal microscopic vibrations related to temperature. Perrin might have obtained the same measurement results if he had worked under this alternative theoretical assumption. Similarly for the alpha radiation experiments: conceivably, radium does not consist of atoms. Yet, microscopic particles of matter – which scientists call alpha particles – could be emitted by radium at a statistically stable rate. We could also

collect and measure the emitted alpha particles, so as to obtain one mole of helium (4 grams). Again, a determination of Avogadro's number can apparently succeed without appeal to the atomic hypothesis. In other words, the phenomena studied by Perrin can be saved by many different hypotheses about the constitution of matter. It is precisely for this reason that, in Salmon's apt words, "no single experimental method of determining Avogadro's number, no matter how ingeniously and beautifully executed, could serve to establish decisively the existence of molecules." (Salmon 1984: 226). However, given the great number of physically and conceptually independent determinations of this magnitude, it is virtually unimaginable (apart, perhaps, from algorithmic rivals of the kinetic-molecular theory, as we shall see in chapter 5) that a non-corpuscular theory could be constructed which accounts qualitatively and quantitatively for all the types of experiments conducted. By regression to the common cause of these phenomena, the basic theoretical mechanisms mentioned by van Fraassen could not, in the case of the determination of Avogadro's number, be anything incompatible with the modern atomic theory of matter.

Let us return to the screening off relation. A crucial distinction in the Reichenbach-Salmon approach to causation is the one between causal processes and pseudo-causal processes. For instance, when a car moves along a road on a sunny day, the car is continuously accompanied by its shadow. Nevertheless, while the events in which the car occupies successive spatial points constitute a genuinely causal process, the successive points occupied by the shadow merely constitute a pseudo-causal process. The spatio-temporal trajectory described by the shadow is in fact fully dependent on the spatio-temporal trajectory of the car, whereas the reverse does not hold. The implications of this distinction extend far beyond the level of this intuitive description. They are particularly cogent in special relativity, which implies that information can be transmitted only through causal processes. Pseudo-causal processes are filtered out by the screening off condition (5). For example, given the statistically close dependence between event A (the determination of Avogadro's number through the Brownian motion method) and event B (the determination of Avogadro's number by the alpha decay method), in the presence of the common cause C (the experiments actually performed under specified initial conditions) B becomes statistically irrelevant to A. In other words, the statistical relevance of B to A is absorbed into the statistical relevance of C to A.

We can now move on to the main point of the experimental argument for entity realism: the causal relevance of the common cause C to event A can be proven by transmitting what Reichenbach called a mark through the causal process from C to A. An accurate and concise characterization of mark transmission is given by Salmon, in what he calls the mark transmission

principle (MT):

Let P be a process that, in the absence of interactions with other processes, would remain uniform with respect to a characteristic Q, which it would manifest consistently over an interval that includes both of the space-time points A and B (A ≠ B). Then a mark (consisting of a modification of Q into Q′), which has been introduced into process P by means of a single local interaction at point A, is transmitted to point B if P manifests the modification Q′ at B and at all stages of the process between A and B without additional interventions. (Salmon 1984: 128)
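As a toy formalization of MT – my sketch, not Salmon's own apparatus – a process can be modelled as a sequence of stages carrying a characteristic, and mark transmission checked as the persistence of the modified characteristic from A through B:

```python
# A process is a list of (stage label, characteristic) pairs. A mark is the
# modification of Q into Q' at stage A; MT requires the mark to be manifested
# at every stage from A up to and including B, without further interventions.

def transmits_mark(process, a, b, marked):
    """True iff the marked characteristic persists at all stages in [a, b]."""
    return all(q == marked for _, q in process[a:b + 1])

causal = [("p0", "Q"), ("A", "Q'"), ("p2", "Q'"), ("B", "Q'")]  # keeps the mark
pseudo = [("p0", "Q"), ("A", "Q'"), ("p2", "Q"), ("B", "Q")]    # loses it

print(transmits_mark(causal, 1, 3, "Q'"))  # True: behaves like a causal process
print(transmits_mark(pseudo, 1, 3, "Q'"))  # False: a mere pseudo-process
```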

It is at this point opportune to digress a bit in order to take a closer look at Salmon's theory of causation. The notion of mark transmission actually grounds Salmon's distinction between causal and pseudo-causal processes: only the former are capable of transmitting a mark. The reason why Salmon emphasizes the idea of transmission is that, in a genuinely causal process, the mark ought to be present at each space-time point following the single local interaction at which the process was marked. Otherwise, pseudo-processes could also be taken as transmitting a mark by way of suitable posterior interactions. Psillos (2002: 114–6) notices, first, the presence of a counterfactual element in MT's definition, an element which is again required in order to filter out pseudo-processes: unlike causal processes, the pseudo-causal ones depend for their uniformity on external constraints, in the absence of which that uniformity would not be in place. Second, and more problematically, the definition makes reference to interactions. However, the latter concept seems to be itself causal, thus threatening to throw Salmon's whole account into circularity. More exactly, the trouble is that Salmon's concept of causal interaction is defined in terms of the concept of intersection, which stands for the geometric intersection of the spatio-temporal lines corresponding to the relevant processes. Moreover, the intersecting processes are required to be marked and to be able to transmit the mark beyond the intersection point. However, the concept of a mark is a causal concept. As Phil Dowe (2000: 72) indicates, Salmon's definitions of causal interaction and of marking seem to depend on each other.

Aside from the circularity issue, a different question concerns mark transmission as an adequate theory of causation. Psillos (2002) raises the question whether MT provides a necessary and sufficient condition for a process to be causal. He mentions Kitcher's (1985) argument according to which neither is the case. On the one hand, for example, a car dented as a result of a crash will project a shadow which has acquired and transmits a permanent mark. Hence, a pseudo-process seems to be capable of transmitting a mark, which shows MT not to be a sufficient condition for genuine causality. On the other hand, the condition is not necessary, either. In order to separate a causal process from

“spatiotemporal junk”, Salmon requires the former to display a certain feature over a certain time interval. Yet this disqualifies from being causal the short-lived processes of elementary particle physics, in which subatomic particles are generated and annihilated. Salmon might retort that the dent in the shadow of a crashed car is not a modification of a characteristic – hence markable – feature of the shadow. Nonetheless, as Psillos points out, the notion of a characteristic Q figuring in MT's definition suffers from vagueness:

[once] we start thinking about [the characteristic property] in very abstract philosophical terms, it is not obvious that we can say anything other than this characteristic being a property of a process. Then again, new problems arise. For at this very abstract level, any property of any process might well be suitable for offering the markable characteristic of the process. So we seem to be in need of a theory as to which properties are such that their presence or modification marks a causal process. (Psillos 2002: 119)

These difficulties made Salmon admit that the ability to transmit a mark is a mere "symptom" of the presence of a causal process (Salmon 1997a: 253). He came to embrace a version of the conserved quantity (henceforth CQ) theory of causation. In Dowe's (2000: 91) definition, a conserved quantity is "any quantity that is governed by a conservation law". Salmon (1997b) further insists that the conserved quantity (such as mass, energy, momentum, etc.) must be transmitted, the reason being, again, to ground the distinction between pseudo-processes and genuinely causal ones.

As Salmon (1997a: 260) confesses, this turn to the conserved quantity account is meant to eliminate counterfactual talk from the concept of causality. Psillos (2002: 126) has serious doubts about its success in this respect. Without delving any further into the niceties of the CQ theory, I embrace Salmon's version as the most promising current account, its merits being mainly that it is considerably less prone to absurd counterexamples, and that it fits better with our general physicalist view.

Now it is time to focus on the aim of this section and see how this discussion of causation provides us with the means to make sense of Cartwright's (1983) slogan, "If you can spray them, they're real."4 The idea in this dictum was best developed by Ian Hacking (1983). In Hacking's view, the capacity to experiment on a microphysical entity is no conclusive reason to believe that it exists; complete assurance develops with our ability to manipulate that

4 The slogan is usually attributed to Hacking, yet Hacking (1983) himself ascribes it to Cartwright.

entity, usually in order to experiment on something else. Here is how Hacking formulates his experimental argument for realism:

We are completely convinced of the reality of electrons when we regularly set out to build – and often enough succeed in building – new kinds of device that use various well-understood causal properties of electrons to interfere in other more hypothetical parts of nature. (Hacking 1983: 266)

My thesis is that the manipulation of microphysical entities – which Hacking takes to be the decisive proof of their existence – is an instantiation of the transmission of a conserved quantity from an unobservable physical event to an observable one. I want to support this claim by starting with another bit of history of science. Hacking (1983) presents a study of the historical evolution of the concept of 'electron' from the status of a hypothetical construct to that of a designator for materially existing, ascertainable, and manipulable physical entities. He traces the evolution of the concept step by step, from Lorentz's theory, where electrons entered the scene merely as a good explanation, through Millikan's determination of the charge of the electron, to Uhlenbeck and Goudsmit's assignment of the electron's angular momentum, and up to the contemporary ability to polarize electrons and scatter them in different proportions in the weak neutral current experiments. This story tells us why virtually all contemporary experimenters came to believe in the reality of electrons.

Compare this picture with that of the neutral bosons (Z0), which are still regarded as mere hypothetical entities, in spite of having been experimentally detected. The detection of Z0 was indirect, based on the scattering of antineutrinos on electrons in a huge bubble chamber. The interaction between these lepton species is mediated by an exchange of bosons, whose presence is inferred from calculations of the electronic track. Since neutral bosons cannot be directly detected, they are called virtual particles. No doubt, the primary reason why neutral bosons are deemed virtual – hence not real – is that scientists lack any possibility to manipulate them. By contrast, there are innumerable ways in which to construct instruments that rely on the causal properties of electrons in order to produce effects of unequalled precision.

To be sure, the reasoning is not that instruments are built and that thereafter, from their reliability, the reality of electrons is inferred. As Hacking emphasizes, this would be the wrong time-order. Instead, the thinking is as follows: "we design apparatus relying on a modest number of home truths about electrons, in order to produce some other phenomenon that we wish to investigate." (Hacking 1983: 266). In other words, the right order is from the causal properties of electrons to the phenomena in which they are involved. Manipulability consists in disposing of these causal properties in order to obtain

expected effects. I surmise that this is an acceptable understanding of transmitting a conserved quantity. More will be said about manipulability in the next section (3.3). Let us now draw the philosophical lesson from the last two sections and see how far it supports my above conjecture on manipulability:

(1) Atoms were introduced into modern science by Dalton, as hypothetical entities with a strictly explanatory role. At that stage of atomic theory, the purely instrumental use of atoms was thoroughly rational.

(2) Initially accepted mostly as a heuristic device, the atomic hypothesis was for decades confronted with criticism, mainly of an experimental nature, although the experimental counter-proofs were often mixed with questionable theoretical assumptions. After several revisions and refinements, the atomic theory took the form of the modern kinetic-molecular theory of matter.

(3) The kinetic-molecular theory offered the framework for quantitative results concerning, for example, Brownian motion, better than those of any rival theory. Assuming the atomist framework, Perrin showed that the same physical quantity (Avogadro's number) can be measured by an impressive number of physically different methods. Thus, a common cause analysis establishes the existence of atoms, as well as some of their causal properties. It is at this point that virtually all members of the scientific community – with the notable exception of Ernst Mach – declared their conversion to atomism.

(4) The test of the causal relevance of atoms as the presumed common cause of several observed phenomena is the transmission of a conserved quantity along causal chains. The experimental manipulation of microphysical particles instantiates such transmissions of conserved quantities, which are realized in the instruments of modern physical science.

Of course, not all acts of physical manipulation allow regressions to a common cause. As will be seen shortly, contemporary experimental science is also carried out in ways that make the common cause principle inapplicable. Nonetheless, the fact that we can often confidently identify common causes is sufficient to legitimate causal talk and to confer evidential weight on classes of unobservable entities.

3.3 Manipulability, entities, and structure

3.3.1 Entity realism and theory realism

Let us add some considerations about the proper understanding of the experimental argument and about the distinction between entity realism and theory realism. As already mentioned, entity realism was introduced in 1983 by Ian Hacking and Nancy Cartwright, based on the idea that, although there are arguments to believe that many theoretical entities exist,

. . . one can believe in some entities without believing in any particular theory in which they are embedded. One can even hold that no general deep theory about the entities could possibly be true, for there is no such truth. (Hacking 1983: 29)

I agree that one can indeed maintain a distinction between arguing for the existence of theoretical entities and arguing for belief in the theories about those entities. However, I think that Hacking's latter claim is exaggerated. It has often been remarked that an important motivation for entity realism comes from laboratory practice. This is undoubtedly correct. The belief of experimenters in theoretical entities primarily comes from the ability to manipulate them. As already noted, one problem is that philosophers usually understate the qualitative step from detection and measurement to manipulability. In many historical cases (e.g. the atomic theory), the existence of theoretical entities was established by regression to a common cause. Accordingly, the manipulation of these entities is best understood as a causal transmission of a conserved quantity from the common cause to the observable effects. What a common cause analysis actually gives us is a relatively small number of causal properties of the posited entities. These properties come with the theoretical presuppositions of the experimenters, but are not to be identified with theories. The reason is cogently stated by Hacking:

Even people in a team, who work in different parts of the same large experiment, may hold different and mutually incompatible accounts of electrons. That is because different parts of the experiment will make different uses of electrons. Models good for calculations on one aspect of electrons will be poor for others. Occasionally a team actually has to select a member with a quite different theoretical perspective simply in order to get someone who can solve those experimental problems. ... There are a lot of theories, models, approximations, pictures, formalisms, methods and so forth involving electrons, but there is no reason to suppose that the intersection of these is a theory at all. (Hacking 1983: 264–5)

The idea that experimentalists just apply the ready-to-wear products of theoreticians is simply false. The causal properties of theoretical entities are manipulated by group-specific mathematical instruments, which often turn out to be incompatible. Moreover, nothing guarantees that the approaches of all groups involved in a modern experiment overlap so as to form a theory. The point is also emphasized by Peter Galison (1987), whose sociological study of modern physical science documents that contemporary experimental practice has an appreciable degree of independence from theory, for experimentalists have independent stratagems for judgement which do not rely upon the details of a theory.

The point of these reflections is that an argument for entity realism is not yet an argument for theory realism. The truths about the causal properties of theoretical entities identified through common cause analyses are in fact confirmed theoretical presuppositions of the experimentalists. But the contemporary theory of electrons – quantum electrodynamics – extends far beyond what is being used and needed in laboratory practice. True, experimenters need some theoretical description of the entities they manipulate, yet this does not imply that they use comprehensive theories about them. Most of the time, their theoretical hypotheses, as well as those embedded in the experimental apparatus itself, do not cohere so as to constitute unitary theories.

I do not share with the advocates of entity realism the view that belief in the established theories about theoretical entities is unwarranted. On the contrary, I have already argued that the truthlikeness of these theories is the best explanation of the empirical success of science. I merely emphasize that, in general, the confirmation that a theory acquires from an experimental success is not sufficient to establish confidently its truthlikeness.5 When dealing with entities shared by several theories and models, it may well be the case that the theory under consideration needs confirmation from more than one piece of evidence. Let us take the example of quantum mechanics and quantum field theory, both tremendously successful theories. All their predictions have been confirmed with astonishing degrees of precision. It is nowadays virtually unimaginable that either of them could be falsified. However, their explanatory and predictive success cannot be accounted for merely by way of belief in the existence of subatomic particles. These particles also occur in models and theoretical frameworks which, though empirically successful, are hardly compatible with quantum mechanics. Besides, as already remarked in 2.1.3, any substantial theory includes statements of different ontological weights: apart from

5 One experimental success can be sufficient in case the outcome is antecedently very improbable. This will boost the confidence in the relevant theory beyond the threshold of cogent truthlikeness.

statements about genuinely referential entities, there are also structures for representation, explanatory frameworks, and expository devices. In quantum mechanics, Hilbert spaces have a key representational role, but they are not themselves referential entities. As Campbell (1994) points out,

Vectors in a Hilbert space are used to specify the critical attributes of a quantum system, and to make possible a mathematical computation of its evolution. So the Hilbert space plays a critical, perhaps indispensable, role in the theory. Yet the theory also holds that there are no such spaces, at least not in the robust, real sense in which there is the space-time in which the quarks and electrons play out their drama. (Campbell 1994: 29)

We really do not expect all of these non-referential parts of quantum mechanics to be involved in experimentation on, say, electrons. Recall Hacking's picture of a multitude of models, approximations, pictures, formalisms, methods and techniques, some of them incompatible with others, although each is known to be serviceable in its specific domain of application. Experimentation gives us knowledge about the causal properties of atoms, electrons, and the like, but it cannot exclude the possibility that one and the same causal talk about these entities might be the outcome of some theory with completely different representational and expository devices.

This patchwork aspect of experimental practice seems to be a salient feature of contemporary physics, in which the complexity and sheer size of experimental apparatuses demand the cooperation of specialized groups which can barely work under a unitary theoretical framework. Admittedly, things were different in an earlier era of science, when both theories and experiments were simpler and more directly related to one another. But even then there was a noticeable difference between the argumentation for entities and that for theories. Look again at the case of atoms. We've seen that, in a first step, the atomic hypothesis was introduced by IBE: atoms had reasonably been treated as heuristic fictions or useful devices. In a second step, after reaching a stage where Avogadro's number could be determined with high accuracy through several different methods, the existence of atoms was inferred by common cause analysis. That was an abductive step as well, yet not a step to a given theory as a best explanation, but rather to a modest number of theoretical statements accounting for specific observational phenomena.

In contemporary experimental physics, the physical dimensions, the complexity and the costs of experiments have led – as Galison (1987) shows – to qualitative changes in the argumentation used to establish the reality of theoretical entities. For instance, it became virtually unfeasible to vary the parameters of an experiment during its execution. Consequently, the crucial decisional stage shifted to the analysis of raw data, after the completion of

the experiment. Crucially important became the means of ridding the data of background noise and of systematic events that mimic the wanted effect. Sometimes, the number of relevant events is discouragingly low. For example, in the 1973 experiments on weak neutral currents, a team of more than 50 physicists at CERN inspected more than 1.4 million pictures of tracks produced in a bubble chamber by a beam of muon-neutrinos. They were looking for the trace of an electron put in motion by a neutrino. The experimenters found no more than three relevant occurrences of the expected phenomenon. Nonetheless, that was apparently enough to consider the experiment as having attested to the discovery of weak neutral currents.

Another problematic fact is that the repetition of important experiments is becoming more and more unfeasible. Under these circumstances, the data for a proper common cause analysis become increasingly rare. In fact, it is barely possible nowadays to give a general methodological rule about when and how experiments end, in Galison's sense. What is clear is that, as Cushing (1990: 248) puts it, "there are various levels of constraint (and of theory commitment) that interlock and that must at times be (mutually) adjusted to produce a stability issuing in the verdict that an effect has been 'seen'."

In any event, what I have just argued is that the relative independence of experimentation from theory should be accepted in the sense that experimental confirmation – through manipulability – bestows the right to causal talk. In its turn, talk about causes sets essential constraints upon further theory construction. Thus, I take entity realism to be foundational with respect to strong scientific realism (i.e. realism about entities and theories).

For the sake of clarity, let us summarize the reasoning: consider again an empirically successful theory, T. Along with appropriate auxiliary assumptions, T entails a class of observational consequences, O. Usually, elements of O are thought to be straightforwardly endorsed by experimental tests, and T is thereby taken to be confirmed. However, unless one embraces a simplistic view of confirmation, the existence of the entities posited by T does not yet entail that the relations which, according to T, hold between these entities are in place. Not every aspect of T's theoretical description of an entity X is vindicated by the empirical success of one of T's calculational schemata employing X (see 2.3.2), or by the confirmation of X's existence. Often, conceptual refinement and supplementary confirmation are needed in order for T to pass the truthlikeness threshold. Apart from that, the step from belief in X to belief in T relies on the vindication of the realist arguments against the underdetermination of T by the empirical data; this will be a major topic of chapters 4 and 5.
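Schematically – my notation, merely compressing the paragraph above:

\[
T \wedge A \;\models\; O, \qquad O \;\not\models\; T,
\]

where A stands for the auxiliary assumptions. The experimental endorsement of members of O confirms at most the conjunction T ∧ A; inferring the truthlikeness of T from O alone would be to affirm the consequent.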

3.3.2 On structural realism

Another approach related to this topic is the so-called structural realism of John Worrall (1989). Worrall's motivation is to do justice to the pessimistic meta-induction argument – the argument that, since the unobservables posited by past scientific theories turned out to be nonexistent, it is likely that the unobservables of today's science are also nonexistent. He agrees that the pessimistic meta-induction is correct in pointing out that there is a radical discontinuity at the level of unobservables. However, Worrall argues for a level of continuity in the dynamics of theories. According to him, the mathematical content of a superseded theory is typically retained and embedded in the formalism of the successor theory. In other words, Worrall advocates a continuity at the level of theories' mathematical structures, irrespective of the content of the concepts filling out these structures. Presupposed is a distinction which Worrall draws between the nature and the structure of an entity. The latter is given by the mathematical equations defining the entity, whereas the former just cannot be quantitatively described. In an important sense, as structural realism seems to claim, the only firm knowledge we have is about structures: in trying to find out more about structures, all we can get is more structure.

I am rather quick in rejecting structural realism. For one thing, I do not share its uneasiness either about the pessimistic meta-induction argument, or about Kuhn's (1962) conception of the incommensurability of scientific paradigms, which also inspired Worrall. Second, and more importantly, our argumentation has shown in sufficient measure that we can believe in unobservable entities on the grounds of their causal properties. Therefore, I believe that one of structural realism's main motivations (skepticism about unobservable entities) is misconceived.

Worrall wanted to offer us 'the best of both worlds': the Kuhnian world of radical discontinuities at the theoretical level, and continuity at a level in-between empirical laws and theoretical accounts of mechanisms and causes (cf. Worrall 1989: 111). Yet, the former world does not cause us any anxiety, since we reject Kuhn's conception of incommensurability. As to the latter, we already have its best: scientific realism.

Chapter 4

The Underdetermination Argument: The Theoretical/Observational Distinction

In the introductory chapter, scientific realism was defined as the doctrine claiming that most of the essential unobservable entities posited by the well-established scientific theories do exist independently of our minds, language, and theories. Moreover, the theoretical descriptions of those entities' properties and of the relations among them are approximately true. Typically, scientific realists are also committed to the view that we can ascertain that science's unobservables exist and that the theories about them are approximately true. This view belongs, as we noticed when discussing the horizontal varieties of realism (see Introduction), to the epistemic dimension of scientific realism.

To reiterate from the earlier discussion, there are several degrees of epistemic commitment that a realist may hold. What we deem to be the typical scientific realist position is in fact situated in the middle of a spectrum ranging from the view that our best theories are rigorously true, to the view that we are only rationally warranted to assert that our theories are approximately true. As seen in chapter 2, the belief in the strict truth of scientific theories is unwarranted: science essentially relies on idealizations, simplifications, and approximations, facts which make its sentences, strictly speaking, false. Besides, past theories, even when empirically successful, turned out to include idle parts, erroneous calculations, as well as clearly false ontological assumptions. The recognition that knowledge is fallible and the acceptance of a concept of closeness to truth are lessons learnt by most realists.

Yet, one may be rationally warranted in believing that the best scientific theories are approximately true even if, in fact, these theories are not approximately true. Leplin (1997) calls this position minimal epistemic realism. Minimal epistemic realism circumvents a significant objection levelled against 'standard' scientific realism, namely the 'pessimistic meta-induction' –

the view that, since past theories turned out to be false, it is highly probable that, in the future, we shall also come to regard our theories as false and not approximately true. However, the cost of easily coping with the pessimistic meta-induction is rather high for minimal epistemic realism: it cannot account for the methodological success of science. In other terms, it cannot properly answer the question of why the methods of science are so successful at generating empirically successful theories. We saw in section 2.4 that the best explanation of this success relies on those theories' being approximately true.

Taking one step further towards scepticism, we reach a situation in which any possible ground for rational belief in a scientific theory is denied. While it may be that a theory is true from God's perspective, we as mortals could never acquire the grounds to believe it. Obviously, this step takes us beyond the borderline between realism and antirealism. To claim that we can never have grounds to believe in any theory is the very definition of epistemic antirealism. The rejection of the latter comprises the subject of the current and the next chapter. Epistemic antirealism is famously exemplified by van Fraassen's constructive empiricism, which we labelled a form of agnostic instrumentalism – i.e., instrumentalism agnostic with respect to the referentiality of theoretical entities. Accordingly, a substantial part of the current chapter will be devoted to a discussion of various aspects of constructive empiricism. But there are also agnostic forms of phenomenalism and of the philosophy of as-if which, while admitting the possibility that the unobservables posited by science may exist, decline any reason to believe in them.

What these doctrines share is the empiricist credo that all knowledge is ultimately based on sense experience. Empiricists typically assume that beliefs about sense experience are justified by directly 'reading off' the contents of sense experience. Thus, if an object is visually presented to me as a green car, I merely read off this appearance in forming the belief that there is a green car in front of me. The scope and accessibility of these sense-experience contents is a contentious matter. Phenomenalism, for example, claims that one's perceptual beliefs receive support solely from one's own sense experience. Constructive empiricism, by contrast, appears to claim that sense-experience contents are public, shared by all those perceiving the sensory stimuli from a given environment. It is beyond my purpose to go any further into this topic. What is relevant for the discussion of epistemic antirealism is the common empiricist conviction that the leap from beliefs about sense experience to beliefs in unobservables is epistemically illegitimate.

What is the motivation for such a scepticism? Why do empiricists take belief in unobservables to be so shaky? How can empirical evidence be deemed incapable of guiding theory choice? The answer can be devised in terms of

the doctrine of the underdetermination of theory by all possible evidence. We'll immediately see that there are different formulations of underdetermination, increasingly threatening to the realist theses. All these formulations have in common the claim that indefinitely (possibly infinitely) many incompatible theories entail any given set of evidence. That corresponds to saying that no theory can simply be deduced from evidence. There is no unique upward path from evidence to the theory entailing it; there are, as the argument goes, indefinitely many such upward paths. Before entering into technical details, some terminological preliminaries are required. We label, respectively, E the given evidence, T1, ..., Tn a set of rival theories, and R the rules employed to choose between these theories. Let EC(T) stand for the empirical content of theory T, i.e. the set of observational sentences entailed by T. Thus, we say that theories T and T′ are empirically equivalent if and only if EC(T) = EC(T′). The latter phrase can be spelled out as follows: given a neutral observation language LO, T and T′ are empirically equivalent in case they have the same deductive connections to the evidential basis, E, formulated in LO. Thus equipped, the core of the underdetermination thesis may be expressed as follows: if both T and T′ deductively entail E, then the evidence E and the rule of choice R cannot provide sufficient rational grounds for preferring either of the two theories to its rival.

As the incoming analysis will show, several clarifications are in order. First, is one to take E as the existent body of evidence, or as all possible evidence? If only the existent body, then shouldn't it be expected that some future empirical discoveries may drastically change one's preference for one theory over its empirical equivalents? And if all (logically and nomologically) possible evidence, then possible for whom? Only for humans, with their sensory perceptual limitations, or for any imaginable rational being?

Second, we have defined the empirical content of a theory as its set of observable consequences. It is known that such derivations require without exception the presence of auxiliary theoretical statements (i.e., information about the initial conditions, about the instruments in the laboratory, about the mathematical and logical methods, etc.). But, as Laudan and Leplin (1991) argue, these auxiliary assumptions vary over time in two respects: they are defeasible and augmentable. Accordingly, the set of observational consequences of a theory varies over time. This fact urges a diachronic treatment of the relation between theory and empirical evidence.

Third, it is relevant to know whether a given empirically successful theory has one empirically equivalent rival, a finite number of rivals, or an infinite number of rivals. Assume, for example, that the upward path from evidence to theory proves indeed to be blocked. It is one thing to have only two empirically equivalent rivals T and T′, given that our subjective probabilities for each of

them are p(T) = p(T′) = .5. It is another thing to have a finite (no matter how large) number of empirically equivalent rivals, T1, ..., Tn, since our subjective probabilities for each of them are p(T1) = ... = p(Tn) = 1/n. Finally, it is another thing altogether to have an infinite number of rivals, since then the respective subjective probabilities are zero.

Naturally, each of these eventualities leads to a different formulation of the empirical equivalence thesis. Thus, depending on whether we take the notion of empirical evidence in a restrained sense (as given evidence) or in a broad sense (as possible evidence), we obtain two extreme versions of the underdetermination thesis (henceforth UD): first, a weak version (WUD), which is no threat to scientific realism:

WUD: For any body of evidence, there are indefinitely many mutually contrary theories, each of them logically entailing that evidence.

WUD has been quite similarly formulated by Laudan (1996), Newton-Smith (2000), and Devitt (2003). Second, there is a strong version of the UD thesis (SUD), which the realist has to take pains to block:

SUD: Any theory T is radically underdetermined in that all possible evidence could not justify it over its empirically equivalent rivals. (Devitt 2003)

It is an important task of this chapter to argue that there are no reasons to believe SUD. Let us now systematize the debate over UD. Formally, the UD argument runs in the form of the following modus ponens:

(i) The thesis of empirical equivalence (EE): For any given theory, there are indefinitely many empirically equivalent rivals;

(ii) The entailment thesis (EE → UD): Empirical equivalence entails underdetermination.
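In symbols, with EE and UD read as sentence abbreviations, the argument is simply the valid inference

\[
\mathrm{EE}, \quad \mathrm{EE} \rightarrow \mathrm{UD} \;\;\therefore\;\; \mathrm{UD},
\]

so any resistance must be directed at the premises.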

Accordingly, the realist can adopt one of the following lines of counterattack:

• She can show (i) to be incoherent or unsustainable.

• She can accept that (i) is coherent and instead try to show that it is false.

• She can admit (i) but, nonetheless, deny (ii).

These realist strategies structure our agenda in this chapter. Throughout the current chapter, we argue in detail against the possibility of a principled distinction between theory and observation, on which epistemic antirealism essentially relies. If correct, the outcome of this chapter undergirds the first realist counterattack strategy. Chapter 5 will be dedicated to the analysis of the other two strategies.

Some realist philosophers maintain that UD is not intelligible, since it relies on the thesis of empirical equivalence (EE) which, to be formulated, presupposes a neat dichotomy between the observable and the theoretical. The objection is that such a dichotomy is untenable. Let us spell this out. EE claims that any theory T has empirically equivalent rivals. As already seen, two theories, T and T′, are empirically equivalent if and only if EC(T) = EC(T′), i.e. if both T and T′ entail the same body of evidence, E. But to specify a body of evidence means to formulate a set of observational sentences. To see that both T and T′ imply the same E, we need to specify E in a neutral observation language, LO.

Now, the neutrality of LO can be taken to be relative to a specific epistemic context. Thus, what counts as an observation depends on the specific cognitive interests of the scientists. Depending on which issues an experiment is designed to settle, different empirical assumptions are taken for granted. The point has been made, among others, by Fodor (1984):

We can’t test all our beliefs at once. It is perfectly reasonable of working scientists to want to mark the distinction between what’s foreground in an experiment and what is merely taken for granted, and it’s again perfectly reasonable of them to do so by relativizing the notion of an observation to whatever experimental assumptions are operative. (Fodor 1984: 25–6)

However natural in scientific life, this background-relative manner of distinguishing observation from theory is not what the empiricist antirealist needs. Recall that her scepticism is grounded on the alleged irrationality of the leap from beliefs about sense experience to beliefs in unobservables. These epistemic scruples are conspicuously offended by such a relativization, because to admit that observation hinges on scientists' theoretical background and epistemic interests is already to admit that the leap to theoretical beliefs has been committed. Thus, what the epistemic antirealist needs is a narrow construal of the notion of observation, so as to include only sense experience. As we shall see, the antirealist has serious trouble in doing justice (as she must) both to the fact that observation is thoroughly theory-laden, and to the idea that there is an observational language neutral with respect to all theories. But before taking this line of argumentation, let us take a detour to

the so-called received view of theories and the support it offers to instrumentalism. This provides a good setting for the subsequent examination of the theory/observation distinction.

4.1 The theoretician’s dilemma

In subsection 2.1.3 we have confronted realism with instrumentalism on the explanatory dimension, and argued that the former is preferable because, unlike the latter, it can offer causal explanations. The focus now shifts to instrumentalism's deeper epistemic roots and to the reasons why it has been such an appealing philosophy of science.

The instrumentalist impetus came from the conviction that science can well get along without appeal to theoretical unobservable entities in achieving its aims of making predictions and establishing connections among observables. These purposes have been thought to be attainable only by means of laws and mechanisms couched exclusively in an observable vocabulary. Indeed, many empirical generalizations can be formulated in observable terms. Here is an example offered by Hempel (1958: 43):

(1) Wood floats on water; iron sinks in it.

For many practical situations, that is a correct phenomenological description. Nonetheless, it suffers from evident deficiencies: "it refers only to wooden and iron objects and concerns their floating behavior only in regard to water. And, what is even more important, it has exceptions: certain kinds of wood will sink in water, and a hollow iron sphere of suitable dimensions will float on it." (Hempel 1958: 43). These shortcomings can, for this specific instance, be easily remedied by introducing the concept of the specific gravity of a body x, defined as s(x) = w(x)/v(x), where w and v are, respectively, x's weight and volume. A generalization of (1) is thus obtained as follows:

(2) A solid body floats on a liquid if its specific gravity is less than that of the liquid. (1958: 44)

This statement correctly predicts the floating behavior of any solid body upon any liquid. Yet, the key concept of the generalization, s, though defined in terms of observable characteristics of x, is not itself observable. Why use s? Why not formulate empirical generalizations directly and exclusively in observable terms? No doubt, (2) is equivalent to (2′):

(2′) A solid body floats on a liquid if the quotient of its weight and its volume is less than the corresponding quotient of the liquid. (1958: 46)
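The eliminability at issue can be displayed explicitly (my formalization; Hempel does not use this notation). Writing F(x, y) for 'x floats on y', the explicit definition of s licenses the substitution that turns (2) into (2′):

\[
s(x) := \frac{w(x)}{v(x)}, \qquad
(2)\;\; s(x) < s(y) \rightarrow F(x,y)
\;\;\leadsto\;\;
(2')\;\; \frac{w(x)}{v(x)} < \frac{w(y)}{v(y)} \rightarrow F(x,y),
\]

in which only observationally definable terms occur.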

So theoretical terms seem to be superfluous. They can be used as economical abbreviations relating several observational characteristics. It would have been uncomfortable to use, in all applications of Newtonian gravitation theory, the formula 'the product of the masses of two bodies divided by the square of the distance between them' instead of the concept of 'gravitational force'; and it would have been plainly impracticable to do so with the quantum-mechanical concept of 'wave function' (granted that it can be defined in observational terms). But are theoretical terms nothing but economical devices? And assuming a computational power high enough to allow their substitution with observables to be carried out in all situations, could theoretical terms be entirely avoided? The force carried by this question is expressed in Hempel's famous theoretician's dilemma:

If the terms and principles of a theory serve their purpose [i.e., they establish definite connections among observable phenomena] they are unnecessary, ... and if they don't serve their purpose they are surely unnecessary. But given any theory, its terms and principles either serve their purpose or they don't. Hence, the terms and principles of any theory are unnecessary. (Hempel 1958: 49–50)

The conclusion is in apparent harmony with the instrumentalist philosophy. In order to approach it, a closer look at the concept of theory that has supported this view is useful.

Logical positivism set out to develop an observational language for science, LO, which contains, apart from the logical vocabulary, only observational terms – individual constants and predicates. The observational terms directly designate observable entities and properties, while all other terms are explicitly definable in LO. Thus, the full language of science, L, which also includes theoretical terms such as 'electron', 'energy', 'gene', etc., is reducible to, or translatable into, LO.1

1 The precise nature of LO was a disputed matter among logical positivists. The divergence emerged as a debate over protocol sentences – the elementary form in which the results of scientific experimentation are recorded – in the 1930s within the . Carnap had expressed in his Aufbau (1928) the phenomenalist view that such propositions must express private complexes of sensations. For the ‘left-wing’ oriented Neurath, this view was incompatible with the public and intersubjective character of science. Consequently, he advocated the view that protocols are accepted by the scientific community as reporting the results of publicly accessible observations. Protocol sentences, as Neurath maintained, must be expressible in the physicalistic language of unified science, which makes them, like all other sentences, revisable in principle. This view was strongly opposed by the ‘right-wing’ oriented Moritz Schlick, who took it as an abandonment of the secure epistemic basis of empiricism in favor of a consensualist epistemology. Carnap attempted to mediate the debate by holding that the issue at stake was a matter of choosing a formal language; such a choice, Carnap maintained, is conventional.

Within this view, a theory is a set of sentences, T, in the language L. If T is the set of logical consequences of a set of axioms A, then T is formulated axiomatically. Now, the instrumentalist assumption is that the language of science is reducible to LO. Theoretical terms, understood as instruments for systematizing and economizing the observational talk, are taken to be definable by observational terms. This is the position from which the 'theoretician's dilemma' argument was levelled.

Nonetheless, not all theoretical terms can be reduced to observational terms. As Carnap (1936–7) showed, dispositional terms like 'magnetic' or 'fragile' are not explicitly definable by observational terms. Carnap considered the following candidate definition of 'magnet':

x is a magnet ≡ x attracts every iron object in its vicinity

The definition is in observational terms, but the criterion it provides for an object's being a magnet cannot be checked by a finite number of observational confirmations, since we would need to make sure that any piece of iron brought near the presumed magnet would be attracted. This would demand the verification of an infinite number of cases, surely an unfeasible task.

In Carnap's terminology, only molecular sentences are capable of conclusive observational verification or falsification. To understand what a molecular sentence is, the notion of an atomic sentence requires introduction: an atomic sentence is a sentence ascribing an observable property to a type of objects. Molecular sentences, then, are formed from a finite number of atomic sentences solely by means of truth-functional connectives. A sentence containing quantifiers is not observationally verifiable or falsifiable. Accordingly, there are two kinds of definitions of scientific terms: "those with molecular and those with non-molecular definiens. In both cases, all extra-logical terms belong to the observational vocabulary; but in the former, the definiens contains truth-functional connectives as the only logical terms; in the latter, it also contains quantifiers." (Hempel 1958: 62).

Employing this terminology, the following rule can be established: only definitions with molecular definiens provide observational criteria of application for the terms they define. Definitions with non-molecular definiens cannot give criteria based on a finite set of observational confirmations.
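The rule can be illustrated with Carnap's magnet example. Written out in first-order notation, with the obvious illustrative abbreviations – I(y) for 'y is an iron object', V(y, x) for 'y is in the vicinity of x', and A(x, y) for 'x attracts y' – the proposed definiens is:

x is a magnet ≡ (y)((I(y) ∧ V(y, x)) ⊃ A(x, y))

The universal quantifier over y makes the definiens non-molecular: it is not a truth-functional compound of finitely many atomic sentences, which is precisely why no finite set of observations can conclusively verify it.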

This approach led to the so-called received view of theories, according to which the 'pure theory' is uninterpreted and connected to the observable part via interpretive sentences2, while the observational terms are directly connected to the empirical world through sensory experience. The observational terms are completely interpreted, that is to say, their extension is fixed. The theoretical terms are only partially interpreted, meaning that their extensions are not uniquely fixed by the interpretive sentences. Hence, since the meaning of a theoretical term is only partially specified, the term cannot be eliminated from all contexts in which it may occur.

2 The link between the theoretical and the observational terms has received a multitude of names. Reichenbach (1928), for example, used the notion of coordinative definitions when referring to the relation between pure and physical geometry; Margenau (1950) spoke of rules of correspondence, and Carnap (1956) used the term correspondence rules. The list also includes the terms dictionary, operational definitions, semantical rules, epistemic correlations, and rules of interpretation (cf. Nagel 1961: 93).

Returning to Hempel's 'theoretician's dilemma', one possible way to show how theoretical terms could be dispensable is by replacing theories involving theoretical entities with functionally equivalent theoretical systems couched exclusively in observational vocabulary. One such attempt was based on Craig's theorem, which proves that for any theory T involving both theoretical and observational terms, there is an axiomatized system TC which uses only T's observational terms and is functionally equivalent to T in the sense of deductively entailing the same set of observational sentences. Let us sketch Craig's procedure: all theorems of T expressible in observational terms are arranged in a sequence containing, for any sentence, all its logical equivalents expressible in T's observational vocabulary. As such, this sequence is highly redundant. Craig prescribed a procedure for eliminating many duplications, yet each theorem in the observational vocabulary of T remains present in the resulting sequence in one of its equivalent formulations. Finally, all the sentences in the remaining sequence are made postulates of the functionally equivalent theoretical system TC. Hence, T's observational theorems are axiomatized in TC by making every one of them a postulate of TC. By axiomatizing a set of sentences, we normally select a small subset of them as postulates, from which the others can be logically derived. Obviously, Craig's procedure "fails to simplify or to provide genuine insight", as Craig (1956: 49) himself pointed out. In fact, the set of postulates generated for TC by Craig's procedure is always infinite.

The difficulty in eliminating the theoretical terms of a theory T with Craig's procedure concerns not only deductive systematization, but also inductive inference. Hempel (1958: 78–9) shows that simple inductive inferences from a set of observable characteristics to a new one, while straightforward by means of an appropriate theory T, cannot be performed with TC. Thus, TC cannot, in general, replace T.

Another method of obtaining a functionally equivalent theoretical system in observational terms was provided by Frank Ramsey in 1929. His procedure consists in replacing the theoretical predicates 'M1', . . . , 'Mk' of theory T by predicate variables 'w1', . . . , 'wk', and quantifying over them existentially. In the resulting Ramsey sentence associated with T, all the occurring terms belong to the observational vocabulary:

TR = (∃w1) . . . (∃wk) T(w1, . . . , wk, O1, . . . , Om)

As an illustration, let us construct the Ramsey sentence associated with Archimedes's law, formulated above as (2) (a solid body floats on a liquid if its specific gravity is less than that of the liquid). We label s(x) the specific gravity of a body x, and sL the specific gravity of the respective liquid. Archimedes's law is then (x)((s(x) < sL) ⊃ F(x)), where F(x) means 'x floats on L'. We make the substitution S(x) ≡ s(x) < sL. Thus, Archimedes's law can be written as (x)(S(x) ⊃ F(x)). Since S, unlike F, is not an observational predicate, the corresponding Ramsey sentence becomes

(∃w)(x)(w(x) ⊃ F(x)).
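Two standard facts about Ramsey sentences explain the sense in which nothing testable is lost by this move. First, since S itself is an admissible value of the variable w, the original law entails its Ramsey sentence:

(x)(S(x) ⊃ F(x)) ⊨ (∃w)(x)(w(x) ⊃ F(x)).

Second, the Ramsey sentence can be shown to have exactly the same consequences in the purely observational vocabulary as the original theory. This is what makes the question of the nature of the w's, raised next, so pressing.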

Does the Ramsey sentence support instrumentalism? Not unless the nature of the predicate variables w is clarified. There is actually no guarantee that the entities they range over are observable or characterizable in observational terms. Niiniluoto (1999: 113) points out that Carnap was willing to accept them as mathematical entities. Nonetheless, Feigl (1950), who was concerned to prevent the postulation of fictitious unobservable entities, urged that the entities involved in a Ramsey sentence be restricted to classes defined by physical properties. Thus, the Ramsey sentence cannot actually be the instrumentalist's magic tool for eliminating theoretical terms.

To conclude, theoretical terms are logically and methodologically indispensable. Therefore, the instrumentalist ought to abandon the idea that she can simply do away with the unobservables. Accordingly, all she can hope for is a neat distinction between the observable and the unobservable. Despite this, the 1960s saw vigorous attacks against the 'received view' of theories on the grounds

...that the correspondence rules were a heterogeneous confusion of meaning relationships, experimental design, measurement and causal relationships; that the notion of partial interpretation associated with more liberal requirements on correspondence rules was incoherent; that theories are not axiomatic systems; that symbolic logic is an inappropriate formalism; and that theories are not linguistic entities. (Suppe 1989: 161)

Nonetheless, at the top of the list of grounds is the untenability of the theoretical/observational distinction. This forms the subject of the following three sections of this chapter.

4.2 Van Fraassen's observable/unobservable distinction

Bas van Fraassen is one of the most adamant epistemic antirealists. His book The Scientific Image (1980), a key piece in the contemporary realism/antirealism debate, came out at a time when scientific realism had reached its heyday.

Van Fraassen's philosophy of science, constructive empiricism, differs from 'classical' instrumentalism. He admits that theories have a truth value; according to him, however, the truth value of a theory is epistemically inaccessible. Yet this does not get in the way of science, for its goal is not truth, but empirical adequacy. In van Fraassen's own words,

Science aims to give us theories which are empirically adequate; and acceptance of a theory involves as belief only that it is empirically adequate. (van Fraassen 1980: 12; italics in original)

Van Fraassen conceived this statement as an alternative to scientific realism, which he defines as the doctrine claiming that science aims to give a literally true story of the world and that acceptance of a scientific theory involves the belief that it is true (1980: 8). His view is that the aim of science can be fulfilled without giving such a literally true story, and that acceptance of a theory may properly involve less than the belief that it is true. Van Fraassen defines empirical adequacy in a way reminiscent of the Duhemian slogan of 'saving the phenomena': a theory is empirically adequate if its statements about the observable things and events are true. A more rigorous definition is given in semantic terms: a theory T is empirically adequate if the observable phenomena are isomorphic to empirical substructures of some model of T (van Fraassen 1980: 64).
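Spelled out as a schema (our compressed gloss, not a quotation from van Fraassen):

T is empirically adequate ⟺ there is a model M of T such that every appearance – every structure describable in experimental and measurement reports – is isomorphic to an empirical substructure of M.

The single existential quantifier over models matters: one and the same model must accommodate all the phenomena at once, past, present, and future.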

Since the concept of 'observation' has a key role in van Fraassen's epistemology, a lot hinges on the constructive empiricist's ability to draw a coherent borderline between the observable and the unobservable. What exactly is the distinction supposed to distinguish? According to van Fraassen, "the term observable classifies putative entities (entities which may or may not exist). A flying horse is observable – that is why we are so sure that there aren't any – and the number seventeen is not." (1980: 15). Thus, observation is not related to any ontologically privileged realm of entities. Yet, according to the empiricist credo, we are permitted to ascribe reasonable degrees of belief only to sentences about observables. How do we know where to confine belief-ascription? The observability criterion van Fraassen uses is explicitly anthropocentric: observable is what can be detected by unaided human senses. With respect to what is unobservable, we ought to be agnostic.

A multitude of arguments have been mobilized against the possibility of drawing a coherent and serviceable distinction on this basis. Some of these arguments turn out to be inoffensive; others seriously damage the credibility of constructive empiricism without destroying it; finally, there is an argument that, we believe, succeeds in proving that van Fraassen's distinction is incoherent. Let us take them in turn.

4.2.1 Maxwell's continuity argument

In his famous paper 'The Ontological Status of Theoretical Entities', Grover Maxwell (1962) attacked the received view's distinction between a theoretical and an observational language. He formulated his arguments specifically against the semantic antirealism of logical positivism. But they can also be directed toward the epistemically antirealist distinction between observable and unobservable entities.

An important clarification is in order here. The positivist semantic distinction between the observational and the theoretical languages is a sharp one. Observational terms acquire their meanings by ostension, and observational sentences have definite truth-values. Theoretical terms are non-referential; they are either to be eliminated by reduction to observational ones, or to receive a meaning in virtue of their systematic role within a theory. Theoretical sentences have no truth-values. In any event, there is no ambiguity about whether a term or a sentence has a meaning or not. A term either refers or it does not; a statement has a truth value or it has not. By contrast, the epistemic distinction between the observable and the unobservable is, as Maxwell argues, a vague one.

In light of these considerations, the classical 'theory/observation' distinction appears to be equivocal: it is not apparent whether it is to be taken in a semantic or in an epistemic sense. However, the equivocation is not dangerous; it has been committed by instrumentalists, who typically take the two distinctions to be the same. The reason is that the basis of the semantic observational/theoretical distinction is the epistemic notion of observability. That is, the referentiality of observational terms is grounded in their direct ascertainability through sense experience. This motivates the 'classical' instrumentalist to equate the observable with the non-theoretical and the theoretical with the unobservable. However, more sophisticated versions of instrumentalism will not put into question the reference of theoretical terms or the truth-values of theoretical statements. Their quarrel with realism is exclusively epistemic, about what one is warranted to believe. Faithful to the empiricist credo, van Fraassen relies on the observable as the secure epistemic basis of knowledge, while rejecting, as shall immediately be shown, a semantic

distinction between the theoretical and the non-theoretical. This being said, we can return to Maxwell's argument. It begins by showing that the observable/unobservable distinction is a matter of degree, so that no precise borderline can be drawn. The following extensive quote presents a celebrated gradual series of objects, beginning with theoretical unobservables and ending with easily perceptible entities:

Contemporary valence theory tells us that there is a virtually continuous transition from very small molecules (such as those of hydrogen) through "medium-sized" ones (such as those of fatty acids, polypeptides, proteins, and viruses) to extremely large ones (such as crystals of the salts, diamonds, and lumps of polymeric plastic). The molecules in the last-mentioned group are macro, "directly observable" physical objects but are, nevertheless, genuine, single molecules; on the other hand, those in the first-mentioned group have the same perplexing properties as subatomic particles (de Broglie waves, Heisenberg indeterminacy, etc.). Are we to say that a large protein molecule (e.g., a virus) which can be "seen" only with an electron microscope is a little less real or exists to somewhat less extent than does a molecule of a polymer which can be seen with an optical microscope? Although there certainly is a continuous transition from observability to unobservability, any talk of such continuity from full-blown existence to non-existence is, clearly, nonsense. (Maxwell 1962: 9)

There could hardly be a more informed and concise way of giving voice to the common-sense intuition that observability is a matter of degree, while existence and nonexistence are separated by a sharp ontological dichotomy. It indicates how the ontological status of an entity cannot be determined by its observational status. This is consonant with Devitt's (1984: 4) Third Maxim, urging us to settle the ontological issue of realism before any epistemic or semantic issue.

Van Fraassen precedes his answer to the continuity argument with a few preliminaries. He notes, in the spirit of our considerations about the semantic and epistemic aspects of the theory/observation distinction, that the very title of Maxwell's paper contains the problematic notion of 'theoretical entity'. Entities, van Fraassen contends, are observable or unobservable, whereas only terms or concepts can be theoretical. Consequently, he separates the discussion into two parts: on the one hand, he argues that the question should be, "can we divide our language into a theoretical and a non-theoretical part? On the other hand, can we classify objects and events into observable and unobservable ones?" (van Fraassen 1980: 14).

As to the first issue, van Fraassen agrees with Maxwell against the received view that there is no way to split scientific language into a theoretical part and a

non-theoretical one. He explicitly endorses the omnipresent theory-dependency of language:

All language is thoroughly theory-infected. If we could cleanse our language of theory-laden terms, beginning with the recently introduced ones like 'VHF receiver', continuing through 'mass' and 'impulse' to 'element' and so on into the prehistory of language formation, we would end up with nothing useful. The way we talk, and scientists talk, is guided by the pictures provided by previously accepted theories. ...Hygienic reconstructions of language such as the positivists envisaged are simply not on. (van Fraassen 1980: 14)

However, we shall see that van Fraassen's admission of the theory-ladenness of scientific language, on the one hand, and his strict view of observability, on the other, generate a tension which will prove fatal to constructive empiricism.

In fact, constructive empiricism is not affected by the way Maxwell has drawn the conclusion of his argument (that the ontological status of an entity is not determined by its epistemic status). The reason is that van Fraassen nowhere claims that only observables exist. Once again, his distinction is epistemic, not ontological or semantic. He phrases it concisely:

...even if observability has nothing to do with science (is, indeed, too anthropocentric for that), it may still have much to do with the proper epistemic attitude to science. (van Fraassen 1980: 19)

Constructive empiricism merely claims that only reports about observable entities are believable. This conflicts in no respect with Maxwell's gradual series, since believability is also a matter of degree. In other words, constructive empiricists think that, all other things being equal, the closer a claim is to the observable – i.e., detectable by unaided senses – end of the gradual series, the more believable it is.

I take this answer to be perfectly acceptable. But others believe it is problematic. As Kukla (1998: 131) points out, some supporters of the Bayesian theory of confirmation consider that the above answer would "dramatically reduce the difference between constructive empiricism and realism." Kukla cites Foss (1984), according to whom

If the constructive empiricist embraces the "Bayesian" solution..., then when he accepts a theory he will have various degrees of belief that each of the various theses of the theory is true. This position does not amount to being "agnostic about the existence of the unobservable aspects of the world described by science." ... For the Bayesian sort of constructive empiricism does not suspend belief, but has quite definite degrees of belief about each scientific thesis. (Foss 1984: 85–6)

In greater detail, the problem for Bayesian constructive empiricism is as follows: if it is accepted that believability comes in degrees, then belief in reports about unobservables will decrease to a very low level, but never vanish. No theoretical entity can be completely deprived of a certain degree of observability and, accordingly, of a certain degree of credence. But if we admit that statements about unobservables have nonzero probabilities – no matter how low – then the conclusion is inescapable that there are circumstances under which these statements will become believable. Hence, Foss concludes, "a degree-of-observability-Bayesian constructive empiricist would satisfy van Fraassen's own definition of a scientific realist." (Foss 1984: 86).
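Behind Foss's point lies nothing more than Bayes's theorem (a sketch, with H and E as schematic placeholders). For a hypothesis H about unobservables with prior P(H) > 0, and evidence E with P(E) > 0,

P(H | E) = P(E | H) · P(H) / P(E).

If H entails E, then P(E | H) = 1, and conditionalizing on E multiplies the credence in H by the factor 1/P(E) > 1. Repeated over accumulating evidence, this can drive the credence in H arbitrarily close to 1, however small the prior was. A nonzero prior thus leaves it open that statements about unobservables become rationally believable.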

Of course, Foss is aware that van Fraassen does not ascribe degrees of probability to statements about unobservables. In fact, van Fraassen tells us that observability is a vague concept. Like any other vague predicate of natural language, observability does not raise problems of use, but only logical difficulties.

A vague predicate is usable provided it has clear cases and clear counter-cases. Seeing with the unaided eye is a clear case of observation. (van Fraassen 1980: 16)

The idea is that observability can be retained as a dichotomous concept, albeit one with vague boundaries. So, as Kukla indicates, Bayesian constructive empiricism can be saved by reformulating Maxwell's continuum of degrees of observability as a progressive approach to a vague boundary:

Instead of molecules increasingly less observable as they get smaller, it becomes increasingly uncertain whether they are observable. (Kukla 1998: 131)

Foss would presumably refuse to admit that this move makes the Bayesian solution more acceptable. The closer to the vague boundary an entity is supposed to be, the less likely it is that the entity is observable. The predicament appears to be the same: constructive empiricism ascribes smaller and smaller degrees of belief to a series of claims that get closer and closer to the vague boundary between the observable and the unobservable. Formerly, the decreasing degrees of credence were assigned on the basis of decreasing degrees of observability; they are now assigned on the basis of decreasing probabilities that the entity is observable. Nonetheless, I agree with Kukla that there is an important difference between the two cases:

If what makes the probability fall off is the entity's degree of observability, then it's reasonable to suppose that, as Foss does implicitly, the decreasing function from entities to probabilities never quite hits zero probability: you can always take a little bit more observability and lose a little bit more credence. But suppose instead that what makes the probability fall off is the entity's being more and more deeply immersed in the vague boundary between observability and unobservability. Then it's still reasonable to say that the probability gets smaller and smaller as it becomes increasingly uncertain whether the entity is observable – but only until you get to entities on the other side of the boundary. Once you get safely to the other side of a vague conceptual boundary, everything becomes clear again – the denizens of the other side are unambiguously not a part of the concept. (Kukla 1998: 132)
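The contrast can be fixed with a toy assignment (the numbers are ours, purely illustrative). Order entities e1, e2, . . . by decreasing observability, and let pn be the credence attached to claims about en. On Foss's reading one might have pn = 2⁻ⁿ: ever smaller, but positive for every n. On the vague-boundary reading, pn decreases only while en lies inside the boundary zone, and pn = 0 for every en safely beyond it. Only the second profile preserves a class of claims about which the constructive empiricist remains strictly agnostic.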

All that matters, as van Fraassen urges, is that observability possesses clear cases and clear counter-cases. In the latter case, the probabilities drop onto a zero plateau just beyond the vague border; in the former, they decrease continuously without ever reaching zero. This shows the reformulation to be successful: the Bayesian solution can answer the continuity argument while not inflating constructive empiricism into realism. Van Fraassen's dichotomy guarantees that there is a class of theoretical claims which the constructive empiricist ought not to believe, while there is no such class for the scientific realist.

Certainly, the vague character of observability causes trouble for the application of the notion of empirical adequacy. If, following the constructive empiricist, to accept a theory is to believe what it says about the observable, what does acceptance of a theory mean with respect to those entities whose observability is so vague as to be in doubt? But this is a problem whose solution is not sought here.

4.2.2 The technological argument

A common realist retort to the anthropocentric distinction between observables and unobservables is that the boundary shifts with progress in the available instruments of observation. The point was implied by Maxwell's series of gradually observable objects: if observability/unobservability is a continuum, then the view that observability is detection by unaided senses involves an untenable limitation. The same objection is raised explicitly against van Fraassen by Paul Churchland (1985). Churchland's argument invokes the possibility of genetic mutations drastically extending the human sensory abilities. (As will be seen in 4.3, the argument has also been raised against Fodor's distinction between observation and inference.)

We have already pointed out that van Fraassen's epistemology demands a concept of observability free from contextual dependencies. For that reason, he

cannot simply equate observability with current detectability, since the boundaries of the latter extend as technology advances. He de-contextualizes observability by equating it with detectability by the unaided senses, but the toll for this move is that he has to accept a slew of implausible consequences. Maxwell mentioned the possibility of human mutants able to detect ultraviolet radiation with the naked eye (Maxwell 1962: 11). In a fictional mood, Churchland (1985) imagines human mutants and extraterrestrials endowed with electron-microscope eyes. These beings ascribe high degrees of credence to sentences about which the 'normal' constructive empiricist is agnostic. The question that should now be raised is whether these rational creatures can be excluded from the scientific community on the basis of van Fraassen's observability principle. As a matter of fact, van Fraassen foresaw this objection:

It will be objected ... that ... what the antirealist decides to believe about the world depends in part on what he believes to be his, or rather the epistemic community’s, accessible range of evidence. At present, we count the human race as the epistemic community to which we belong; but this race may mutate, or that community may be increased by adding other animals (terrestrial or extra-terrestrial) through relevant ideological or moral decisions (“to count them as persons”). Hence the antirealist would, on my proposal, have to accept conditions of the form

If the epistemic community changes in fashion Y, then my beliefs about the world will change in fashion Z.

(van Fraassen 1980: 18)

Concerning the concrete form of the function relating changes in the epistemic community to changes in beliefs about the world, van Fraassen defers to "relevant ideological or moral decisions". However, what ideological and moral decision-makers can offer are conventional, hence ultimately arbitrary, decisions. This is precisely what Maxwell and Churchland criticize.

After all, as Kukla (1998: 134) indicates, the requirement that we agree on what is observable does not entail that all of us have the same sensory capacities. Blind or deaf scientists are not – or should not be – less than full members of their scientific communities. What is required in the event that scientist A can observe some phenomenon perceptually inaccessible to scientist B is that B be willing to credit A's observational reports. At this point, the scenario splits into two parts. On the one hand, if there is a significant overlap of their sensory capacities, A and B can agree on the conditions of mutual scientific credit. On the other hand, if no sensory overlap is in place, things are more complicated. It seems to me that in this latter

case there is no guarantee that A and B can ever become members of the same scientific community. What would be required, as Kukla suggests, is the existence of a manifold of beings whose perceptual abilities overlap so as to constitute a continuum at whose ends A and B are respectively situated. Each sentient being in this continuum would have a sensory spectrum that overlaps with that of its 'perceptual neighbor'. Imaginably, perceptual epistemic credit could be the outcome of a complex transmission of epistemic credit along this chain. Yet, unlike Kukla, I do not see any warrant that such a 'sensorial chain' could ever be constructed. I contend that Kukla relies too heavily on a far-fetched piece of science fiction, the image of an intergalactic cosmopolis populated by beings harmoniously complementing each other's abilities. Certainly, this is not a logical impossibility. However, we do not know whether it is physically possible. We cannot be sure that for any two rational beings, A and B, no matter how large the gap between their sensorial endowments, epistemic agreement can be reached. It may actually be the case that the physiologies of A and B are so different that there can be no effective contact between their minds.

But doesn't this latter point speak in favor of van Fraassen's distinction? It appears as if different sensorial endowments engender different sciences, which is certainly absurd. It is one thing to claim that any posited theoretical entity may be detected by some sentient being – existent or created by genetic engineering; it is a different thing to admit, as van Fraassen himself does, that the limits of observability can be shifted. We do not know whether it is physically possible to extend the limits of sensorial detectability so as to reach any unobservable entity in the universe. But, comparatively, it is a modest step to admit that those limits will shift from their current state, as we often have a fairly clear idea about the direction of that shift. This is all that is needed to place van Fraassen in a difficult position. For if we know now that the limits of observability can shift so as to make possible tomorrow the detection by unaided senses of what we now think of as an unobservable entity, then we should now have a definite, nonzero degree of belief in that entity. In any event, van Fraassen openly admits the eventuality of enlarging the scientific community:

Significant encounters with dolphins, extraterrestrials, or the products of our own genetic engineering may lead us to widen the epistemic community. (van Fraassen 1985: 256)

However, in order to mitigate the problem of the arbitrariness of the decision as to how large the scientific community actually is, van Fraassen cannot allow a lot of tolerance about the borders of the latter. He must stick to his anthropocentrism, no matter how implausible its consequences. And indeed

he does, as he produces a rather species-chauvinist answer to the question 'what do we do with all these intelligent beings that are candidates for admission to our cognitive community?' They – dolphins, extraterrestrials, etc. – he says,

are, according to our science, reliable indicators of whatever the usual combination of human with electron microscope reliably indicates. What we believe, given this consequence drawn from science and evidence, is determined by the opinion we have about science’s empirical adequacy – and the extension of “observable” is, ex hypothesi, unchanged. (van Fraassen 1985: 256–7)

The extension of the scientific community thus envisaged consists not in admitting new members with equal rights, but only in equipping the extant ones with new instruments.

Since the interest here is not in a discussion of the ethical implications of van Fraassen's epistemology, it has to be conceded that, whatever the shortcomings of chauvinism, it is not an incoherent epistemic position. Accordingly, an observable/unobservable distinction drawn in terms of a species' sensorial limitations can survive the technological argument.

4.2.3 The phenomenology of science

There is one important part of van Fraassen's philosophy of science which turned out to be particularly contentious. It is described by van Fraassen (1980) as the 'phenomenology of scientific activity' and depicts the way he thinks that scientists get along with the mere acceptance of their theories:

The working scientist is totally immersed in the scientific world-picture. And not only he – to varying degrees, so are we all. ... But immersion in the theoretical world picture does not preclude "bracketing" its ontological implications.... To someone immersed in that world picture, the distinction between electron and flying horse is as clear as between racehorse and flying horse: the first corresponds to something in the actual world, and the other does not. While immersed in the theory, and addressing oneself solely to problems in the domain of theory, this objectivity of electron is not and cannot be qualified. But this is so whether or not one is committed to the truth of the theory. It is so not only for someone who believes, full stop, that the theory is true, but also for ... someone who ... holds commitment to the truth of the theory in abeyance. For to say that someone is immersed in theory ... is not to describe his epistemic commitment. ... It is possible even after total immersion in the world of science ... to limit one's epistemic commitment while remaining a functioning member of the scientific community. (van Fraassen 1980: 80–3)

There are several problems with this story, of varying degrees of severity. In the first place, it will be considered whether one can make psychological sense of the distinction between accepting and believing a theory. Thereafter, we shall explore the difficulties of van Fraassen's maintaining that, on the one hand, scientific language is thoroughly theory-laden and, on the other hand, an observable/unobservable distinction can be objectively drawn.

Let us begin with the epistemic attitudes involved in constructive empiricism. There are two anti-instrumentalist objections to discuss. We label the first one, developed by Alan Musgrave (1985), the argument from mental stability. Musgrave is puzzled by constructive empiricist talk about detecting various scientific entities without thereby believing in their existence:

[van Fraassen] talks of detecting an electron in a cloud chamber. Can one say truly that one has detected an object without also believing it to be true that the object really exists? Later he describes how Millikan measured the charge of the electron (75–7). Did not Millikan think it true, and does not anyone who accepts Millikan’s results think it true, that electrons exist and carry a certain charge? Can one say truly that one has measured some features of an object without also believing that the object really exists? (Musgrave 1985: 206)

These sound like reasonable questions. Van Fraassen frames his answer in terms of a pragmatic analysis of language: the way modern physics depicts the world is assimilated to a language; the better one "speaks" it, the better one finds one's way around in the world. This language provides one with a perceptual and intellectual framework through which the world is perceived and conceived. Of course, there is no reason why van Fraassen should not admit that there is a multitude of such languages – that we are, each of us, natives of some common-sense picture of the world, with categories different from, though not necessarily incompatible with, those of science. Various religious "languages" would constitute such world-pictures. However, only contemporary science accounts thoroughly for the vast corpus of experimental data; if other world-pictures are compatible with that corpus, then they are embedded in it. Yet, as van Fraassen would have it, nothing compels the "speakers" of such a language to embrace the ontological implications of the world-picture in which they are thereby immersed. The success of this view speaks not for the truth, but only for the empirical adequacy of a certain world-picture.

For Musgrave this line of reasoning is "nothing but sleight-of-hand endorsement of philosophical schizophrenia." (Musgrave 1985: 207). He maintains that the trick consists in converting belief in the reality of scientific entities into belief in the theory of those entities. Yet scientists do have a notion

of, say, an electron which has become independent of specific theories about electrons. Thus, on the one hand, no scientist takes all theories about electrons to be true. On the other hand, scientists can believe in electrons even if no theory of electrons is thought to be literally true. (We have argued fairly extensively along this line in chapter 3, where we discussed the arguments for entity realism.) What does this relative independence of some unobservable entities from the theories about them indicate? It implies that we do not need to "immerse" ourselves, under any circumstance, in the scientific world-view in order to have correct beliefs about unobservables. We do not need to "bracket" the ontological consequences of these beliefs.

Van Fraassen's notion of immersion in the scientific world-picture thus appears rather dubious. Were we to accept it, Musgrave points out, we should expect scientists to get immersed in the scientific world-picture whenever they go to work, a time during which "this objectivity of electron is not and cannot be qualified"; and we should then expect them, at the end of the workday, to emerge out of the world-picture, regaining the freedom to fully believe in (or to be indifferent about the epistemic status of) electrons. Along with Musgrave (1985: 207), we admit that "split-minded scientists like this are possible, although they might not be desirable." Be that as it may, in the absence of an empirical study showing that such split-mindedness prevents scientists from performing good science,3 all that Musgrave's argument from mental stability can achieve is to raise doubts about the plausibility of separating acceptability from believability. Admittedly, it succeeds in this respect, but this does not result in anything dramatic for the phenomenology of scientific activity.

4.2.4 The incoherence arguments

Horwich's argument

There is a second argument addressing van Fraassen's phenomenology of science, which aims to show that it is incoherent. Paul Horwich (1991) argues that there is no ascertainable difference between believing and accepting a theory. Hence, there is no difference in the doxastic attitudes presumed to correspond to the observable and, respectively, the unobservable realms.4

Recall that acceptance of a theory consists in believing only its observable consequences. Horwich sets out to show that we could not accept a theory without also believing it. His starting point is the suggestion that any attempt

3 Things become more complicated when those supposed to carry out such a study are themselves split-minded.
4 Our account reflects the influence of Kukla's (1998: 106–10) persuasive analysis of Horwich's argument.

to formulate a psychological theory of beliefs would lead us to defining beliefs in terms in which instrumentalists describe acceptance:

If we tried to formulate a psychological theory of the nature of belief, it would be plausible to treat beliefs as states with a particular kind of causal role. This would consist in such features as generating predictions, promoting certain utterances, being caused by certain observations, entering in characteristic ways into referential relations, playing a certain part in deliberation, and so on. But this is to define belief exactly in the way instrumentalists characterize acceptance. (Horwich 1991: 3)

Horwich considers this a prima facie case for identifying belief with acceptance. He then argues against four possible considerations that could be called on to support the distinction of belief from acceptance:

(i) the idea that true believers believe that they believe, while those who merely accept do not;

(ii) the observation that even realists sometimes use a theory for practical purposes without believing it;

(iii) the idea that, unlike instrumentalism, realism is committed to a correspondence theory of truth;

(iv) the appearance that an increase in the usefulness of a theory leads to a decrease in its believability.

We shall consider the first three points only sketchily, and enter into the details of the fourth, because the failure of Horwich's argument is related to it – or so it will be argued. (i) cannot be a serious basis for distinguishing between belief and acceptance, since one's psychological states need not straightforwardly reflect one's epistemic attitudes. That is, one can be confused or mistaken about one's beliefs to the point of denying them even while holding them. (ii) is also an insufficient basis for the distinction, since local uses of instrumentalism by no means preclude an overarching realist approach (see our discussion of selective realism in chapters 2 and 7). (iii) is no threat at all to Horwich's thesis, for scientific realism is not committed to a correspondence theory of truth (see the introductory chapter).

As far as (iv) is concerned, it summarizes an argument by van Fraassen (1985). He suggests that theoretical unification results in a theory of increased acceptability – simpler and pragmatically more useful – but also of decreased believability – because additional claims make it more likely to go wrong. Hence, "belief and acceptance should respond differently to theoretical unification, and so cannot be the same thing." (Horwich 1991: 7).

Horwich's criticism of van Fraassen's argument is that it fails to specify any property that applies to belief but not to acceptance, or conversely. Both belief and acceptance, Horwich maintains, are to be evaluated from a double standpoint: epistemic and pragmatic. Epistemic evaluation takes place in relation to the available evidence, according to epistemic norms. Pragmatic evaluation takes place in relation to practical consequences, according to pragmatic values. It is in the latter sense that Pascal recommends that we believe in God. In these terms, van Fraassen's argument is that theoretical unification urges us, from an epistemic viewpoint, to decrease the credibility of the total theory and, from a pragmatic viewpoint, to increase the acceptability of the total theory. There thus appears a tension between the epistemic imperative and the practical one.

But this is precisely the point contested by Horwich. He argues that belief and acceptance co-vary under the epistemic and pragmatic dictates: pragmatic reason commends that we increase our credence in the unified theory, and epistemic reason commends that we increase the acceptability of the unified theory. Therefore, "belief and acceptance will respond in the same way to theoretical unification." (Horwich 1991: 7).

Unlike Horwich, van Fraassen seems to deny that pragmatic rationality is involved in the assessment of beliefs. That would imply that, after all, acceptance and belief respond differently to pragmatic considerations. Yet, as Kukla (1998: 107–8) convincingly argues, the point of dispute is only terminological. If we are capable of both epistemic and pragmatic evaluations, we should be capable of undertaking them either separately or jointly, at our convenience. If, on the one hand, belief can be evaluated exclusively in epistemic terms, van Fraassen can hold his point. If, on the other hand, pragmatic considerations are also needed in evaluating beliefs, then all that constructive empiricists need is a reformulation of their notion of 'belief' limited to epistemic considerations. As Kukla suggests, a notion of epistemic belief – which is just belief purged of its pragmatic component – is everything that van Fraassen requires to save the coherence of his distinction:

There can be no doubt that we're capable of epistemic belief, since the cognitive resources needed for epistemic-believing are a proper subset of those needed for forming and evaluating garden-variety beliefs. Nor can there be any doubt that epistemic belief is different from acceptance, for the latter is affected by pragmatic considerations whereas the former is not. But the distinction between epistemic belief and acceptance is enough to formulate the constructive empiricist viewpoint in a coherent manner: constructive empiricists maintain that there may be good reasons for accepting scientific theories, but never for epistemic-believing more than their empirical consequences. (Kukla 1998: 107–8)

In his argument, Horwich claims that the difference between epistemic and

pragmatic beliefs does not correspond to any ascertainable difference between states of mind. He expects constructive empiricism to be able to indicate a behavioral difference resulting from believing and, respectively, accepting a theory. However, in line with Kukla (1998: 109), I do not see that identity of behavioral dispositions must correspond to identity of mental states. Believing a theory and accepting it may well be different mental states, even if their corresponding behavioral dispositions are the same. One may follow, for instance, the prescriptions of a religious doctrine either from innermost piety or, as in Pascal's wager, from cunning calculation. Even if God almighty will know the difference, we, who rely on one's behavioral dispositions, may never be able to identify one's real credence. Similarly, one can believe a theory for epistemic reasons, or merely accept it pragmatically. In both eventualities, there may be no hint as to what is actually going on in that scientist's mind.

As argued in 2.3, the constructive empiricist has difficulties in justifying her cognitive state. In particular, I do not believe that she is able to explain where the pragmatic benefits of a theory – on which its acceptance relies – come from. But this is quite a remote issue, with no relevance to the coherence of the constructive empiricist distinction between belief and acceptance.

Consequently, Horwich's argument that the coherence of constructive empiricism depends on whether belief and acceptance are different mental states fails to reach its purpose. The inability to establish differences in the behavioral dispositions of believers and accepters does not entail that they are, respectively, in identical cognitive states. All that Horwich's argument shows is that van Fraassen may have a terminological problem when using 'belief' in a strictly epistemic sense, i.e. stripped of its pragmatic side. But this can easily be explained away, so that constructive empiricism remains unscathed.

Musgrave's argument

Alan Musgrave (1985) brings forth another argument purporting to reveal the incoherence of van Fraassen's distinction between belief and acceptance. Suppose we have scientific means to tell that the class A of phenomena is observable by humans, whereas the class B is not. Assume that we use theory T, the "final physics and biology." Now, constructive empiricists accept T solely as empirically adequate – true as to its observable statements. But what should we do with a sentence like "B is not observable by humans"? On pain of contradiction, the statement cannot be about observables. If it is not about observables, then it can only be accepted, not believed. In fact, Musgrave maintains, there is nothing that constructive empiricists can consistently say about unobservables: "anyone who claims to have observed something about unobservables contradicts himself." (1985: 207). Therefore, as Musgrave

concludes, constructive empiricism seems to be founded on a spurious distinction. Van Fraassen has a smart answer to this objection:

Musgrave says that "B is not observable" is not a statement about what is observable by humans. Hence, if a theory entails it, and I believe the theory to be empirically adequate, it does not follow that I believe that B is not observable. The problem may only lie in the way I sometimes give rough and intuitive rephrasings of the concept of empirical adequacy. Suppose T entails that statement. Then T has no model in which B occurs among the empirical substructures. Hence, if B is real and observable, not all the phenomena fit into a model of T in the right way, and then T is not empirically adequate. So, if I believe T to be empirically adequate, then I also believe that B is unobservable if it is real. I think that is enough. (van Fraassen 1985: 256)
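The reply can be condensed into a schema (our paraphrase, not van Fraassen's own notation). Write E for 'B is real' and O for 'B is observable'. If T entails that B is not observable, then believing T to be empirically adequate commits one to

¬(E ∧ O), i.e. ¬E ∨ ¬O, which is logically equivalent to E ⊃ ¬O.

What the constructive empiricist believes is thus the conditional 'if B is real, then B is unobservable' – exactly the form van Fraassen says is enough.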

In other words, if T is empirically adequate, Musgrave's argument demonstrates that constructive empiricists must believe that the class B of phenomena either does not exist or is unobservable. This is logically equivalent to saying that if B exists, then it is unobservable. Surely constructive empiricists can consistently claim that B is unobservable, provided that B exists. If so, that means that they can entertain beliefs about entities whose existence is forever beyond our ken – if those entities exist. In other words, the constructive empiricist cannot believe in unobservable entities, but she has no trouble believing in unobservables-if-they-exist. There is, strictly speaking, no incoherence in the sense pointed out by Musgrave.

However, there is an important price that constructive empiricism must pay for this manoeuver. The constructive empiricist is not allowed to believe in unobservables unless they exist – all right so far. But she must also be justified in this belief; that is, she ought to know of those unobservables' existence. This requires either a God's eye view or the employment of some ampliative procedure. However, both these epistemic steps are precluded by the empiricist doctrine of relying solely on sense experience. Accordingly, T – the theory entailing that "B is not observable" – cannot be known to be empirically adequate by employing procedures relying solely on observation. Therefore, though indirectly, Musgrave's argument inflicts a severe injury on constructive empiricism.

Friedman's argument

The last incoherence argument to be considered belongs to Michael Friedman (1982). This argument, which I take to be fatal to constructive empiricism, turns out to be fairly simple. It was presented by Friedman in a short review of van Fraassen's The Scientific Image.

Friedman objects that van Fraassen is telling us, on the one hand, that we can only believe in the observational consequences of our successful theories and, on the other hand, that those consequences can only be expressed in a theory-laden language. Here is how Friedman puts it:

Suppose that I, speaking the language of contemporary physics, assert the empirical adequacy of that theory: viz.

The observable objects are embeddable in the world picture of modern physics.

(i.e., the observable objects behave as if they were a subpart of the world of physics). But “the observable objects” are themselves characterized from within the world picture of modern physics: as those complicated systems of elementary particles of the right size and “configuration” for reflecting light in the visible spectrum, for example. Hence, if I assert that observable objects exist, I have also asserted that certain complicated systems of elementary particles exist. But I have thereby asserted that (individual) elementary particles exist as well! I have not, in accordance with van Fraassen’s “constructive empiricism”, remained agnostic about the unobservable part of the world. (Friedman 1982: 278)

In different words, the problem is that if, following constructive empiricism, one believes the observable consequences of a theory T, and if T has the existence of entity X as an observable consequence, it follows that one must believe in the existence of X. Suppose, for instance, that it is a consequence of quantum theory (in conjunction with many other convenient auxiliary assumptions) that the table in front of me is a structure composed of more than 10²⁵ atoms. Consequently, since I believe quantum theory to be empirically adequate, I believe in the existence of this particular structure of 10²⁵ atoms – meaning, I believe that a structure of 10²⁵ atoms exists. So the belief that atoms exist logically follows. Hence, by following constructive empiricism, I have logically come to believe in the existence of unobservable entities, thus contradicting constructive empiricism. Therefore, constructive empiricism is incoherent.
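The derivation can be laid out step by step (our reconstruction, using the figure assumed above):

(i) I believe T – here, quantum theory plus convenient auxiliaries – to be empirically adequate; hence I believe what T says about observables.

(ii) T says of the observable table in front of me that it exists and is a structure of more than 10²⁵ atoms.

(iii) Hence I believe that a structure of more than 10²⁵ atoms exists.

(iv) That such a structure exists entails that (individual) atoms exist.

(v) Hence I believe that atoms exist – which contradicts the constructive empiricist's agnosticism about unobservables.

Each step uses only the constructive empiricist's own commitments, plus closure of belief under recognized entailment.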

Two elements conflict to produce this predicament: van Fraassen resolutely admits the theory-ladenness of any scientific description, while also taking the language of contemporary physics to be the ultimate language for describing the empirical world. As we have seen, van Fraassen takes observation to mean detection by the unaided senses. Yet he maintains that we cannot delimit a priori the limits of observability: science itself should tell us what is observable and what is not. This implies that every observation report is theory-contaminated, so that it cannot serve as a theory-neutral platform. Van Fraassen is aware that there is a circularity here, but he maintains that it is not of a vicious kind. He calls it the hermeneutic circle:

To delineate what is observable ... we must look to science ... for that is an empirical question. This might produce a vicious circle if what is observable were itself not simply a fact disclosed by theory, but rather theory-relative or theory-dependent. It will already be quite clear that I deny this; I regard what is observable as a theory-independent question. It is a function of facts about us qua organisms in the world, and these facts may include facts about the psychological states that involve contemplation of theories – but there is not the sort of theory-dependence that could cause a logical catastrophe here. (van Fraassen 1980: 57–8)

Van Fraassen suggests that human beings' minds and organisms are to be understood as part of nature. They are to be discovered in nature rather than helping to establish it. This is why the fact that natural events are described within the conceptual framework of modern science should not wipe out the distinction between what is observable and what is not. As he puts it,

Everything in the world has a proper classification within the conceptual framework of modern science. And it is this conceptual framework which we bring to bear when we describe any event, including an observation. This does not obliterate the distinction between what is observable and what is not – for that is an empirical distinction – and it does not mean that a theory could not be right about the observable without being right about everything. (van Fraassen 1980: 58)

Indeed, the point is not that the distinction between observable and unobservable is obliterated. We are not claiming that van Fraassen's "hermeneutic circle" is vicious. There are clear cases and counter-cases of observability, and a serviceable concept of the observable can be achieved. It may also be admitted that what is observable is (up to some psychological details to be discussed in the next section) a theory-independent issue. However, the problem is that scientific theories accepted as empirically adequate entail observable consequences. These consequences are, on the one hand, to be believed and, on the other hand, fully theory-dependent. If we have reason to disbelieve such a theory, then we implicitly have reason to disbelieve what it says is observable. Van Fraassen correctly points out that a "theory could be right about the observable without being right about everything". Nonetheless, if we refuse to believe in (or are agnostic about) that theory, then this is a strong reason for us to refuse to believe in (or be agnostic about) what it says is observable. In order to escape Friedman's problem, van Fraassen needs a theory-neutral concept of observation. However, since he himself tells us that there is no theory-neutral language, we can only conclude that constructive empiricism is indeed incoherent.

A tentative way out for constructive empiricism was put forth by Newton-Smith. He suggested that if we place ourselves in van Fraassen's perspective, we have to admit that current scientific theories are just stories:

...stories to which it may be pragmatically useful to adhere – but stories for which there are underdetermined rivals (albeit unknown to us at this time). Observable objects are embedded in the world-picture of modern science. On that picture these observable objects have such-and-such constituents of elementary particles. But . . . there is another picture which gives these observable objects quite different constituents. So given this plurality of pictures and no rational reason to select one picture from the others, best to regard the "world picture of modern physics" as just one fairy tale among others and not to believe it. (Newton-Smith 2000, personal correspondence)

I agree with Newton-Smith that this position would circumvent Friedman's problem. But it is rather a species of non-eliminative instrumentalism, and not van Fraassen's position. Van Fraassen is strongly committed – in the sense of acceptance, not belief – to the epistemically privileged language of modern science. The latter is by no means "one fairy tale among others." As van Fraassen has emphasized, contemporary physics is the only world picture able to account consistently for the vast corpus of experimental data that we have. It is precisely this uniqueness of its epistemic commitment that leads constructive empiricism into conflict with the theory-ladenness of language. For example, if we had an empirically successful alternative to quantum theory which implied that the table in front of me is composed of 10⁶ micro-entities-of-a-non-atomic-kind (and not, as we have assumed quantum theory implies, of 10²⁵ atoms), then Friedman's problem would be blocked by the underdetermination, by the same observations, of belief in atoms and, respectively, belief in those micro-entities-of-a-non-atomic-kind. But van Fraassen takes quantum theory at face value. He unmistakably speaks of atoms as the table's constituents, and not of other logically possible fictions. That engenders a fatal tension. As Friedman puts it,

The point is that one cannot take the language of modern physics to be the one and only language for describing the world and, at the same time, call into question the denotation of its basic terms. (Friedman 1982: 278)

In terms of our example, if the only acceptable scientific description of the table in front of me is as a structure of 10^25 atoms, and if we believe that description, then we just cannot circumvent the conclusion that atoms exist. The final conclusion is that constructive empiricism cannot cope with Friedman's argument and, as such, has been shown to rely on an incoherent distinction between observable and unobservable.

4.3 Fodor's theory/observation distinction

The distinction between the observational and the theoretical has not always been drawn from an antirealist perspective. It has, in fact, served various – sometimes even contradictory – purposes. Thus, while van Fraassen wants his observable/unobservable distinction to strengthen antirealism, Fodor wants his distinction5 to serve his realist views.

Fodor's distinction was motivated especially by his aim of rejecting Kuhnian relativism, according to which beliefs are warranted only relative to a paradigm, i.e., a fundamental picture of the world shaping our concepts, methods, and rationality standards. According to Kuhn (1962/1970), scientific revolutions take place which consist of profound changes of paradigm, leading to world-perspectives so transformed that the meanings of words become radically different. For example, the concepts of mass used in Newtonian and Einsteinian mechanics, respectively, are so different as to be incommensurable. Consequently, communication between the adherents of the two paradigms is virtually impossible.

Certainly, this is a philosophy that the scientific realist strongly opposes. The realist view is that there are scientific hypotheses that ought to be believed absolutely, not just relative to one paradigm or another; that the best scientific theories are approximately true, not just pragmatically useful instruments; and that science in general is a progressive, truth-oriented enterprise, not just an idiosyncratic, socially and psychologically determined activity. Fodor's strategy for undermining Kuhnian relativism is to identify a theory-neutral basis on which to settle any epistemic incongruities. As we'll immediately see, he sets out to establish an absolute concept of observation based on empirical investigations of the mechanisms of perception.

Before proceeding to discuss his arguments, it should first be noted that, as a matter of fact, scientific realism does not need a theory/observation distinction in order to dismiss epistemic and semantic relativism. Scientific realism does not need any theory-neutral ground in order to show that trans-paradigmatic epistemic consensus can be achieved. In subsection 2.4.1 it was argued, in line with Boyd (1984), that the methodology of science, though theory-laden, is a reliable guide to the construction of approximately true theories. Second, by choosing to fight the relativist with a theory/observation distinction, Fodor runs the risk of supplying the epistemic antirealist with weapons. But let us now examine Fodor's arguments.

5Both van Fraassen’s and Fodor’s distinctions are epistemic, and the observational part can be taken to mean the same in both of them.

4.3.1 Against Meaning Holism

Fodor is looking for a theory/observation distinction based on features of perceptual psychology: he maintains that there is a class of beliefs that are fixed by perceptual processes, and that the fixation of this class is theory-neutral. Yet he notes that the possibility of fixing such a class of beliefs is seriously threatened by the doctrine of semantic holism, which claims that the meaning of a sentence is determined by its place in a web of beliefs or sentences comprising a whole theory.

Meaning holism has been motivated by reflections on confirmation theory. As Quine (1953) stated, claims about the world are confirmed not individually, but in conjunction with the theories of which they are a part. In his words, "Our statements about the external world face the tribunal of sense experience not individually but only as a corporate body" (Quine 1953: 41). It follows that one cannot come to understand scientific claims without understanding a significant part of the theory in which they are embedded.

The holistic doctrine also takes support from considerations on the nature of learning: in learning the concepts of 'force', 'mass', 'kinetic energy' and 'momentum' in Newtonian physics, one does not have pre-existing definitions of these concepts. Rather, they are learnt together; their meanings support each other as parts of procedures for solving concrete problems.

As to the theory/observation distinction, it is apparent that holism, if correct, makes it virtually impossible to draw: if every statement is semantically linked to every other in its theoretical network, then the minimal context within which the meaning of a theoretical postulate is fixed is the whole theory. Of course, this extends to observational statements as well, which thus get their meanings from their theoretical network. Accordingly, Fodor's view that the contents of observational sentences are determined by sensations seems to be flawed. The point is aptly expressed by Paul Churchland (1979) (quoted by Fodor 1983: 23):

...the meaning of the relevant observation terms has nothing to do with the intrinsic qualitative identity of whatever sensations just happen to prompt their non-inferential application in singular empirical judgements. Rather, their position in semantic space appears to be determined by the network of sentences containing them accepted by the speakers who use them. . . . the view that the meaning of our common observation terms is given in, or determined by, sensation must be rejected outright, and as we saw, we are left with networks of belief as the bearers or determinants of understanding... (Churchland 1979: 12–3)

According to Churchland, an observation sentence has no privileged status in the theoretical network. Moreover, depending on the theoretical context,

any sentence can be observational, as any sentence deemed observational is, in actuality, theoretical.

I think that meaning holism conflicts with our ordinary conception of reasoning. First, if any belief depends on all others in the network, it is highly unlikely that two believers could ever come to share the same belief. Second, according to holism, the admission of a sentence into the theoretical network influences what one infers. But then the following absurd situation emerges:

If I accept a sentence and then later reject it, I thereby change the inferential role of that sentence, so the meaning of what I accept would not be the same as the meaning of what I later reject. But then it would be difficult to understand on this view how one could rationally – or even irrationally! – change one's mind. (Block 1998: 488)

If what I accept influences the inferences I make, how can I ever reject a sentence that I formerly accepted? That is, how can I reason so as to change my mind? Analogously, it follows from meaning holism that no two people can ever agree or disagree, and that no translation from one language into another can be achieved.

It is beyond our purposes to delve in any detail into the complex discussion of holism. Let us only note that semantic holism is usually contrasted with semantic atomism and molecularism. Semantic atomism is the view embraced by Fodor, according to which sentences have meanings independently of any relation to other sentences. More plausibly, semantic molecularism, as promoted by Devitt (1995), asserts that some small parts of the theoretical web are involved in establishing the meaning of sentences. Both of these views have been triggered by arguments showing that meaning also has an external-referential aspect, as indicated by the causal theories of meaning. Thus, Fodor argues, "it may not be true that all the semantical properties of sentences are determined by their location in the theoretical networks in which they are embedded; it may be that some of their semantic properties are determined by the character of their attachment to the world." (1983: 31).

In any event, we can safely grant Fodor that there can be a class of beliefs which are to be considered observational on grounds of their relations to sensory experience, regardless of what theories the believer espouses. Whether this possibility is actual is largely a matter of empirical investigation. This will be the task of the next subsection.

4.3.2 Psychological arguments

In psychological terms, what is at stake is the possibility of an empirical distinction between perception and cognition. The empirical impetus for this view

came from the so-called 'New Look' school in the perceptual psychology of the 1950s. Jerome Bruner and his students performed a series of experiments aiming to show that one's background theories influence perception. As Gilman (1992: 293) indicates, Bruner et al. (1951) assumed that perception is a three-stage process: (1) an organism has a 'hypothesis' about its environment; (2) the organism receives an 'input of stimulus information'; (3) the hypothesis is confirmed or disconfirmed in view of the received information. A hypothesis thus confirmed constitutes a perception. From these assumptions, New Look psychologists derived the following theorem:

The smaller the quantity of appropriate information, the greater the probability of an established initial hypothesis being confirmed, even if environmental events fail to agree with such hypotheses. (Bruner et al. 1951: 218)

Support for this statement was at first taken from Bruner's experiments on color perception. Two decades later, Albert Hastorf studied "The Influence of Suggestion on the Relationship Between the Stimulus Size and Perceived Distance" (1970). If correct, the results of these experiments imply that supporters of different theories literally perceive different things, so that there is no way to have a theory-neutral observation. The conclusion was taken to undergird Kuhnian relativism, with its known claims about the incommensurability of paradigms and the irrationality of theory choice.

However, the findings of New Look psychology have been subject to strong criticism. Gilman (1992), for example, argues that the experiments of the New Lookers are questionable both in their design and in the interpretation of their data. He shows that all their findings can just as easily be explained by the simpler and more plausible hypothesis that theory-dependence was manifest only at the level of (linguistically) reporting the results, with no implication of a theory-dependence of perception. Besides, as Gilman suggests, it is inappropriate to draw dramatic philosophical conclusions from such a shaky empirical basis.

Fodor makes a persuasive case for the claim that perception does in fact remain unaffected by background beliefs. In discussing the optical illusion produced by the Müller-Lyer figures, he points out that despite the fact that we have a theoretical explanation of how the illusion is produced, it doesn't vanish:

The Müller-Lyer illusion is a familiar illusion; the news has pretty well gotten around by now. So, it's part of the "background theory" of anybody who lives in this culture and is at all into pop psychology that

displays [of the Müller-Lyer illusion] are in fact misleading and that it always turns out, on measurement, that the center lines of the arrows are the same length. Query: Why isn't perception penetrated by THAT piece of background theory? ...This sort of consideration doesn't make it seem at all as though perception is, as it's often said, saturated with cognition through and through. On the contrary, it suggests just the reverse: that how the world looks can be peculiarly unaffected by how one knows it to be. (Fodor 1984: 34)

This is a point which Fodor generalizes for all known perceptual illusions:

To the best of my knowledge, all standard perceptual illusions exhibit this curiously refractory character: knowing that they are illusions doesn’t make them go away. (Fodor 1984: 34)

Therefore, as Fodor concludes, these facts show that perception is not influenced by background theory. In terms of his theory, perception is "informationally encapsulated". To understand what that means, let us take a brief look at his modular theory of perception. Sensory stimulations go to a perceptual module P, whose operation mode is innately specified. Next, the output of P goes to the central system C, which is stocked with concepts and theories. Finally, the output of C is composed of beliefs – or statements in Mentalese, the language of mind. The details of the structure of these modules are not of interest here. Two elements are relevant for the debate: first, the perceptual module P is encapsulated: only a restricted range of information is capable of influencing the output of perceptual processes. Second, the operation modes of these modules are endogenously specified. The encapsulation of perceptual modules entails that

bodies of theory that are inaccessible to the modules do not affect the way the perceiver sees the world. Specifically, perceivers who differ profoundly in their background theories – scientists with quite different axes to grind, for example – might nevertheless see the world in exactly the same way, so long as the bodies of theory that they disagree about are inaccessible to their perceptual mechanisms. (Fodor 1983: 38)
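To fix ideas, here is a deliberately crude toy sketch (mine, not Fodor's, and no part of his theory) of the architecture just described, in which the perceptual stage simply has no access to the central system's revisable theories:

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class PerceptualModule:
        """Toy stand-in for Fodor's module P: its operation mode is fixed
        ('innately specified'), and its output depends only on the stimulus."""
        def process(self, stimulus: str) -> str:
            # No parameter for background theory: informational encapsulation.
            return f"percept({stimulus})"

    @dataclass
    class CentralSystem:
        """Toy stand-in for the central system C, stocked with revisable theories."""
        theories: list = field(default_factory=lambda: ["the two lines are equal"])

        def fix_belief(self, percept: str) -> str:
            # Belief fixation may consult background theory, but the percept
            # it receives was computed without any access to that theory.
            return f"belief: {percept}, weighed against {self.theories}"

    P, C = PerceptualModule(), CentralSystem()
    print(C.fix_belief(P.process("Mueller-Lyer display")))

On this toy rendering, perceivers with quite different entries in C.theories still receive identical outputs from P.process – which is exactly the point of the passage just quoted.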

Fodor does not maintain that perception is not influenced at all by cognition. Some terminological care is required in order to understand his position properly. He relies on a distinction between 'sensation' and 'perception'. Fodor explicitly admits that there is a lot of problem-solving involved in perception. However, he disagrees with Kuhn and Churchland about how close to the primary level of sensory input problem-solving begins, and about how susceptible to modification the problem-solving mechanisms are. Kuhn (1962/1970), for example, suggests that theory is involved at the fundamental level of sensory

perception and that new theories alter perception radically. There is, according to him, no perceptual module P. Yet as Gilman (1992) remarks, Kuhn's psychological vocabulary differs from the standard one:

First, 'stimuli' are identified with the impressions taken by sensory apparatus and not with external physical causes of those impressions. Second, 'sensations' are not merely stimuli transduced, but stimuli that have been processed in some determinate fashion. Stimuli cannot be subjected to 'awareness', but 'sensations' can. Note: there does not seem to be a sensation/perception distinction here; that is, it seems that 'sensation' and 'perception' are used interchangeably. (Gilman 1992: 293)

Kuhn apparently settles a good deal of the issue not through argumentation, but on a terminological basis. Yet we ought to note that Gilman's terminology itself draws upon the assumptions of the theory in which it is embedded. Gilman calls 'stimulus' the external cause of perception, and 'sensations' the output of sensory transducers. All right thus far. But he then defines 'perception' as the output of perceptual modules, thus presupposing the correctness of the modularity theory (see Gilman 1992: 293).

A strong criticism of Fodor's theory has been levelled by Churchland (1988). Churchland proceeds in two steps: first, he challenges the correctness of several fundamental claims of the modularity theory. Second, assuming for the sake of argument that it is right, he sets out to show that modularity cannot entail the theory-neutrality sought by Fodor. Let us take these two objections in turn.

Network vs. modular theories of perception

Churchland (1988) claims that there is enough experimental evidence to show that optical perception can be diachronically modified. He draws on this evidence to support his network theory of perception.

Churchland mainly turns to inverting-lens experiments. If one puts on spectacles which invert the retinal image, one will at first have difficulties in getting around. After about a week, one starts reorienting oneself in the visual field. After removing the spectacles, another few days are needed for one to return to the original state. This fact is enough, Churchland maintains, to establish the plasticity of visual perception. However, Fodor denies that inverting-lens experiments in any way boost the idea of a theory-penetrability of vision. All that is shown, Fodor suggests, is that visual-motor skills can be 're-calibrated' by a process of learning. Therefore, as Fodor claims, there is no proof of the theory-penetrability of perceptual modules.

96 Next, Churchland cites neurological evidence in support of his network theory of perception. He claims to have identified ‘descending pathways’ whose existence is taken to indicate that

the wiring of the brain relative to its sensory periphery certainly does not suggest the encapsulation and isolation of perceptual processing. As with the psychological data discussed earlier, it strongly suggests the opposite arrangement. (Churchland 1988: 178)

Two methods of charting these neural descending pathways are cited: extracellular dye injection, and monitoring of the path of radioactive amino acids. Churchland also cites several studies conducted on birds and on human subjects.

Nevertheless, Gilman attacks every piece of evidence that Churchland has brought forward. Gilman denies the relevance of the cited studies on grounds of the peculiarities of the subjects. He also dismisses the conclusions that Churchland has drawn from studies on the efferent pathways. The materials cited by Churchland, he continues, do not undergird the "downward penetration of theories or beliefs." (Gilman 1992: 302).

Churchland also reiterates the discussion of optical illusions, adding new examples (e.g. the Kanizsa illusion). But Fodor is little impressed by this new case. He maintains that the descriptions of the procedures needed in order to wipe out some of the illusions do not support the claim that theory penetrates perception:

One doesn’t get the duck/rabbit (or the Necker cube) to flip by “changing one’s assumptions”; one does it by (for example) changing one’s fixation point. (Fodor 1988: 190)

More recent evidence in favor of the impenetrability of the visual system by theoretical and cultural influences is put forth by Gilman (1994). He discusses Goodman's (1976) argument that natural resemblance accounts of vision fail because there is no principled basis on which to establish what aspect of a scene, and to what degree of detail, is to be copied when we want to represent it. Gilman's retort is that both a physiological and a computational analysis of the visual system converge in showing that a representation worth its name copies

those aspects of a scene which fix the geometry of light/dark contrasts to a degree when presented with the picture as it would when presented with the original scene. (Gilman 1994: 95)

All in all, this is an intricate empirical debate, and we are not in a position to draw any definite conclusion as to the winning party. However, it is fairly

safe to state that Churchland has not conclusively shown the modular theory to be false.

In any event, Churchland is willing to grant, for the sake of the argument, that perception is encapsulated. From this it does not follow, he maintains, that observation is theory-neutral. This is the most interesting part of the debate.

Theory neutrality or universal dogmatism?

In a sense, Fodor admits that perception is partly determined by background information. The reason is that, "as a matter of principle, any given pattern of proximal stimulation is compatible with a great variety of distal causes" (Fodor 1984: 30). In other words, perceptual outputs should, in the absence of supplementary information, be underdetermined by sensory inputs. But, Fodor maintains, they are not so underdetermined. Thus, the questions

How is it possible that perception should ever manage to be univocal (to say nothing of veridical)? Why, that is, doesn't the world look to be many ways ambiguous, with one "reading" of the ambiguity corresponding to each distal layout that is compatible with the current sensory excitation? (Fodor 1984: 31)

can be responded to only by accepting that background information plays a role in perceptual processes. As we have mentioned, according to modular theory, this information is of an innate kind. Perceptual modules are not penetrable by arbitrary background knowledge, and by no means by information descending from the central system. The information influencing perception is endogenously specified.

Let us suppose ... that our perceptual modules ... embody a systematic set of ... assumptions about the world, whose influence on perceptual processing is unaffected by further or contrary information.... This may be a recipe for a certain limited consensus among human perceivers, but it is hardly a recipe for theoretical neutrality... What we have is a universal dogmatism, not an innocent Eden of objectivity. (Churchland 1988: 169–170)

I believe that this objection sounds more threatening than it actually is. The reason is that the "limited consensus" which Churchland derisively grants is sufficient to support the argument against incommensurability. If the modularity theory is correct about the endogenously specified knowledge being

common to humans, there is then, as Kukla (1998: 117) indicates, a one-to-one mapping between the sensory input and the perceptual output; there is a mode of perception common to all human beings, regardless of their cultural and theoretical background. It must be admitted that this is a strong conclusion. It entails that an observational language can be built on that perceptual consensus.

Kukla's irrelevance objection

Kukla (1998) raises a very interesting objection against the modularity theory. He argues that, whether true or false, Fodor's theory is ineffective with respect to the incommensurability issue. Kukla suggests that our epistemological situation does not depend on whether we perceive the world through encapsulated modules or through theory-penetrated modules.

A few terminological preliminaries are in order before discussing Kukla's objection. Adversaries of modularity theory maintain that there are direct links between the sensory transducers and the central system. Thus, "the acquired theories can contribute to the operations of assembled (i.e., not hardwired and not innate) perceptual systems (henceforth APs)..." (Kukla 1998: 119). The notion of an assembled perceptual system (AP) can be used to construct variants of the modularity theory different from – or even opposed to – Fodor's theory. Here are some models:

(a) The central system C gets direct inputs from the sensory transducers, inputs which bypass the endogenous perceptual module P. Thus, C has information available from both P and the acquired APs.

(b) C has top-down control over P – as Churchland seems to maintain.

(c) There is no P at all – as Kuhn seems to maintain. The outputs of APs comprise all the perceptual information available to C.

Following Kukla (1998: 119), let us call (a) the mixed model. Thus, the list of modular models to discuss is as follows: Fodor's model of encapsulated modules, the mixed model (a), Churchland's model (b), and Kuhn's model (c).

Not surprisingly, Fodor and Gilman reject the notion of an assembled perceptual system. Gilman argues that there is no reason to posit the existence of APs in the first place, since one can account for the outcome of the perceptual system in terms of the innate perceptual module P alone:

Recall the Kuhnian example of someone who “literally” sees the tracks of electrons, alpha particles, and so forth as opposed to droplets. Or the

well-known example of the scientist who always sees a pendulum as a pendulum. Surely none would deny that such people can explain what they are noticing to people not conversant with the relevant concepts (or to people who are sceptical about the evidence) by pointing to the droplets as droplets. Surely a scientist is capable of turning to a scientist's child and saying: "look there, you see how that weight is swinging at the end of the line..." when the child says that he does not know what a pendulum is. (Gilman 1992: 307)

As Gilman's argument goes, the physicist and the child have in fact the same perceptual experience, coded in the format of P. The difference in competence between them would then originate in the ability to employ a theoretical language founded on the observational one.

This reasoning of Gilman's is not conclusive. The reason is that the existence of APs would in no respect alter the picture described in the latter quote. As long as both the physicist and the child own identical perceptual modules P, they share a common body of perceptions, regardless of whether the two also have assembled perceptual systems or not. As Kukla puts it, "we enjoy the benefits of an endogenously specified P in addition to assembled APs." (1998: 120). On the basis of that common perception, a neutral observational language can be built. This observation language could then serve as a translation basis for different theoretical approaches, thus precluding any threat of incommensurability. If these considerations are correct, it follows that Fodor and Gilman have failed to rule out the mixed model (a).

Now to the examination of (b) and (c), that is, Churchland's and Kuhn's perceptual models. The former claims that P is completely controlled by the central processor C, while the latter plainly rejects P's existence. Thus, both of these models seem to include the claim that all observations are influenced by the theoretical background, to the effect that there is no theory-neutral observational basis. I agree with Kukla that, with respect to the incommensurability issue, (b) and (c) do not fare better than the mixed model. To see that, let us consider two scientists, S1 and S2, whose perceptual information comes only from assembled perceptual systems. Let us suppose that S1 possesses the assembled perceptual system AP1, while S2 does not. It follows that S2 can neither confirm nor disconfirm the theoretical claims of S1. However, as Kukla claims,

if S1 has managed to assemble AP1, there's no reason why S2 can't assemble it as well. In fact, there's no reason why anyone can't assemble AP1. If everyone did assemble AP1, then the epistemic situation would be identical to the situation that results from our sharing the endogenously specified perceptual module P. (Kukla 1998: 121)

Assembling a relevant perceptual system may require a discouraging amount of effort – one might need, for instance, to become an expert in theoretical physics – which points to an asymmetry between having P and assembling an AP. Yet the incommensurability due to the absence of an AP is not of an insurmountable kind. It actually has an equivalent in the situation where owners of an endogenously specified perceptual module P would, for whatever reason, fail to use P. These are both cases of remediable incommensurability. They do not have any of the dramatic consequences announced by Kuhn, such as S1 and S2 "living in different worlds."

Kukla infers that the mixed and the Kuhnian models cannot be distinguished by their epistemological consequences. But I believe there might be a more substantial difference between possessing the innate P and assembling an AP. In the former case, P would at any moment secure a common pool of human perceptions within which epistemic agents could settle their theoretical conflicts. Hence, incommensurability would be excluded. In the latter case, an AP can be assembled or disassembled – at will or involuntarily. As such, the incommensurability of any two 'world-pictures' would threaten to depend on the momentary psychology of S1 and S2, respectively. This, in turn, would raise the issue of establishing credibility within a scientific community. It is a corollary of our discussion of the technological argument that, among human scientists, such credibility negotiations are feasible. However, they demand considerable effort and time, and this allows a de facto incommensurability to threaten at any moment. By the time one concrete incommensurability situation is eliminated, other cases can emerge. Not so from Fodor's standpoint: theory-neutrality comes for free, so that incommensurability is unable to arise.

Be that as it may, this empirical debate still contains too many unsettled aspects to allow a definite conclusion with respect to the status of Fodor's theory/observation distinction. Nonetheless, I hasten to forestall the conclusion that this debate in cognitive psychology is not philosophically relevant. Were it to be conclusively settled, it would show whether there is a serviceable concept of theory-independent observation, founded on perceptual psychology, or whether sense experience is inescapably theory-dependent. In either case, it would be a triumph of the naturalistic method in the philosophy of science.

4.4 Kukla’s observable/unobservable distinction

Given the failure of van Fraassen’s dichotomy and the unsettled dependence of Fodor’s distinction on empirical matters, Kukla has proposed a different way of splitting the observable from the unobservable.

Kukla (1998: 143–50) returns to a linguistic distinction between observational and non-observational sentences. He supposes that theories are formulated in such a way that their singular consequences are about the occurrence or non-occurrence of events. He defines OX as the proposition that an event of type X occurs. For instance, if X is the decay of an A-particle, then OX claims that an A-particle decays. Next, Kukla defines E(T, “X”) as the event that theory T refers to as “X”. Thus, if T is true, then E(T, “X”) = X, which means, "if current particle physics is true, then it is also true that the event that current particle physics refers to as “the decay of an A-particle” is the decay of an A-particle." (Kukla 1998: 143). Kukla further defines an atomic observation sentence as a sentence of the form OE(T, “X”), claiming that an event takes place to which theory T refers as “X”. He is not clear about what exactly constitutes an act of observation, nor does he need to be, since his distinction is linguistic, not epistemic. As such, his notion of an observation sentence is entirely theory-laden. Thus we have, on the one hand, the couple X–OX (the event X, and the proposition OX stating its occurrence), and on the other hand, the couple E(T, “X”)–OE(T, “X”) (the event that T refers to as “X”, and the proposition in T that states it).

Note that Kukla already presupposes ontological realism, since he admits the occurrence of physical events independently of our theories. Besides, he seems to adopt a correspondence theory of truth, since he takes T's statement about X to be true if and only if the event it identifies in the world is X. However, on the epistemic dimension, his notion of observability gets along quite well with agnosticism about the entities involved in the observable events. To say that an event takes place to which T refers as “X” presupposes, indeed, no epistemic commitment to the claim that X exists. As a matter of fact, Kukla is noncommittal on this score. But, as will be shown, his distinction fares no better than van Fraassen's and Fodor's distinctions, whether it is taken realistically or antirealistically on the epistemic dimension. Specifically, we'll see that it alternates between incoherence and triviality, according to its background epistemic commitments – realism and antirealism, respectively.

Let us proceed by confronting it with the technological argument, which was discussed in 4.2.2. Recall that the argument attacks any definitive observable/unobservable distinction on the grounds that the boundary shifts with the progress of the available observation instruments. Being a distinction between statements, Kukla's distinction is not committed to any such definitive borderline. Thus, it has no difficulty with the eventuality that the observable realm expands indefinitely into the unobservable one. Unlike van Fraassen, Kukla does not need to make any chauvinist move in order to block the expansion of the admissible limits of the observable. However, Kukla's distinction has

a price to pay for coping so easily with the possibility of electron-microscope eyes: on the one hand, it allows any event to be observable. As Kukla himself states,

“A virus of type A disintegrates” isn’t an observation sentence, even for beings who have electron-microscope eyes with which to see the putative virus disintegration “directly”. “What current virological theory refers to as ‘the disintegration of a virus of type A’ is happening” is an observation sentence even if we don’t have electron microscope eyes. In fact it’s an observation sentence even if we don’t have electron microscopes. (Kukla 1998: 145)

On the other hand, an event like "the cat is on the mat" belongs, according to Kukla's distinction, to the unobservable realm. This is so counter-intuitive as to be plainly wrong. Thus, Kukla's distinction excludes too much and admits too much at the same time. Kukla seems to be aware of this deficiency, yet he simply recommends that the empiricist liberalize her observability criterion and give in to Churchland's suggestion that, with the progress of technology, any event becomes potentially observable. As to the unobservability of events commonly taken as observable, Kukla reminds us of the theory-dependence of the common-sense concepts involved in such sentences.

Neither of these replies is really adequate. For one thing, as pointed out in 4.2.2, our present physics cannot warrant the claim that technology will progress so as to allow a direct check of any event in the universe. For another, the kind of theory-dependence involved in common-sense claims like "the cat is on the mat" is different from the theory-dependence involved in claims like OE(T, “X”). The latter kind depends solely and entirely on T, while the former involves a theoretical background previously and independently tested. Hence, it is unacceptable to muddle the observational statuses of sentences like "the cat is on the mat" and "current virology claims that a virus disintegrates". In any event, it is still not clear what the philosophical problem is to which Kukla's distinction is supposed to be a solution. The fact that it was conceived to overcome some of the difficulties of the previously discussed distinctions – which, as argued, it does not – is not enough to make it attractive. Besides, its biggest trouble is yet to come.

Kukla is confident that his distinction is unscathed by Friedman's argument, which proved to be fatal to van Fraassen's distinction. As a reminder of Friedman's argument, here are its premisses: (1) theory T entails an observational consequence, O; (2) O can only be expressed in a theory-laden language; (3) T has no serious empirical equivalent, i.e., T is the only theory which can account for O. Then the conclusion follows that belief in O (urged

by empiricism) entails belief in some of T's theoretical posits, which makes agnosticism about T untenable. Accordingly, as already concluded in 4.2.4, such a constructive empiricism is incoherent.

Does Kukla's distinction really fare better on this score? I do not think so. It is incumbent on his distinction to admit that all language is theory-laden. As such, the proposition OX that an event X occurs is theory-laden; that is, OX is a consequence of some theory T. OE(T, “X”) is by definition an observable consequence of T. Kukla's implicit formalization of this situation is

T → (OX → OE(T, “X”)).

He admits the implication, given T, of OE(T, “X”) from OX, while rejecting the reverse implication:

If it’s a consequence of theory T that a particular setup will reveal an electron track, then it’s a consequence of T that the setup will produce an event that T refers to as an “electron track”. But the fact that a particular setup produces an event that T refers to as an “electron track” doesn’t entail that the event is an electron track. Thus Friedman’s problem is averted. (Kukla 1998: 144)

But wait a minute: if T is true, then, by definition, OE(T, “X”) ≡ OX; i.e., if T is true, T claims that an event of type X occurs if and only if X occurs. Therefore, if T is true, contra Kukla, the fact that an experimental setup produces an event that T refers to as an "electron track" does entail that the event is an electron track. If T is false, then no relation can be established at all between OX and OE(T, “X”). This shows, as Wolfgang Spohn (2001, personal correspondence) rightly points out, that the situation is actually described by

T → (OX ↔ OE(T, “X”)).
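Spelled out as an inference schema (my gloss on Spohn's point, in the notation introduced above):

T, T → (OX ↔ OE(T, “X”)), OE(T, “X”)  ⊢  OX.

Anyone who believes T and believes the observation sentence OE(T, “X”) is thereby committed to the occurrence of X itself; whereas one who withholds belief from T gets from the schema no inferential connection in either direction between OX and OE(T, “X”).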

It follows that one cannot both believe T's observable consequences, as defined by Kukla, and remain agnostic about T's non-observational claims. In order to avoid incoherence, Kukla has to abandon his epistemic antirealism. However, from a realist standpoint, Kukla's distinction is utterly trivial. It says solely that, if T is true, then the observable is whatever T refers to as observable. The realist believes his theories to be (approximately) true, hence he takes the observable to be whatever his theories (approximately) refer to as observable. This is definitely no news to the realist. To summarize, Friedman's

argument shows Kukla's observable/unobservable distinction to be either incoherent when antirealist, or fully trivial when realist.

It is now time to draw an overall conclusion with respect to the distinctions that have been inspected. It was shown that van Fraassen's distinction proved, within its constructive empiricist framework, to be incoherent; that Fodor's distinction hinges on unsettled empirical matters; and that Kukla's distinction oscillates between incoherence and triviality. Moreover, even if Fodor's dichotomy turned out to be right, the theory-neutral level of observation it would provide would be of no avail to the epistemic antirealist, given Friedman's argument.

Certainly, this fact does not entail that talk of the observable is generally misplaced. There is a loose sense in which we can speak of 'observation' relative to certain epistemic contexts, in the sense of a distinction between what is foreground and what is taken for granted in, say, an experimental situation. This loose talk of observability is also reminiscent of Hempel's late positivist notion of 'antecedently understood' terms, terms which, though "not known to be explicitly definable by means of [strict] observation terms, [might yet] be taken to be well understood in the sense that they are used with a high degree of agreement by different competent observers." (Hempel 1958: 73). This certainly does justice to our common-sense usage of observability, yet it cannot serve as the precise instrument which the epistemic antirealists need in order to boost their underdetermination reasoning.

Chapter 5

Against the Underdetermination Thesis

In the preceding chapter we argued against the possibility of a principled distinction between the theoretical and the observational. This undermines the very coherence of the empirical equivalence thesis. In the current chapter we shall pursue the other two lines of counterattack against the underdetermination argument: granting the coherence of the empirical equivalence thesis (EE), we set out to prove that those formulations of it which would engender a version of underdetermination threatening to scientific realism are false (see 5.2). Next, in section 5.3, we'll argue that even if EE were true, it would not entail underdetermination (UD). Let us begin by discussing and rejecting an important argument which, if valid, would straightforwardly entail a strong thesis of underdetermination.

5.1 Against algorithmically generated empirical equivalents

A compelling reason to assume the thesis of empirical equivalence would be if, for any given theory T, there were algorithmic procedures that generate empirically equivalent rivals to T under any possible evidence. Let us label such constructs, following the nomenclature of their promoter in the recent literature, André Kukla, algorithmic rivals. Here are Kukla's (1998; 2000) two examples:

(a) Given T, construct theory T1 according to which the empirical consequences of T are true, but none of its theoretical entities exist – so T itself is false.

(b) Given T, construct theory T2 according to which T is true whenever an observation is taking place, and false when no observation is going on.
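Schematically – a mere restatement of (a) and (b), writing EC(T) for T's empirical content, the notation introduced in section 5.2 below:

T1: EC(T) is true, but the unobservable entities posited by T do not exist (hence T is false);
T2: T holds whenever an observation is taking place, and not-T holds otherwise.

By construction, EC(T1) = EC(T2) = EC(T), so no possible observation discriminates among the three.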

The reason why T1 and T2 should be a threat to scientific realism is straightforward: T is empirically indistinguishable from T1 and T2 – and from any other similar algorithmic construct – in the light of all possible evidence.1

In spite of their popularity in the philosophy of science, algorithmic rivals are practically nonexistent in scientific practice. It seems that scientists do not even bother to take them into consideration. Why is it that theory-candidates which are empirically successful and have truth values are constantly ignored? Kukla (1994; 1996; 2000) has repeatedly complained about the lack of a resolution to this issue. He labels this dereliction the problem of scientific disregard:

...scientists routinely and uniformly ignore certain propositional structures that seem to have a good measure of the empirical virtues. Let us call this the phenomenon of scientific disregard. The problem of scientific disregard is the philosopher's problem of explaining how and why this phenomenon takes place. (Kukla 2000: 22)

I agree with Kukla that the problem of scientific disregard needs a clear answer. However, unlike him, I argue that the phenomenon of scientific disregard is in place for good reasons. But before proceeding, a few preliminaries are required.

First, a bit of terminology. Kukla (2000: 22) calls theoreticity the hypothesized property that algorithmic rivals lack in order to be genuine theories. So even if both T1 and T2 had truth values and were empirically successful, they could be distinguished from T because of their lack of theoreticity. Correspondingly, propositional structures lacking theoreticity are called quasi-theories. The philosopher's task is then to establish what (if anything) makes T1 and T2 quasi-theories.

I argue that scientific disregard is the right attitude for scientists. It will first be explained why T1 and T2 are to be treated separately. T2 will be shown not to qualify as a genuine rival, since it belongs to a family of skeptical constructs which are supposed to be left behind in the scientific realism debate. Further, against T1, the general objection from section 2.3 will be reiterated, that it cannot provide proper causal explanations; besides, it will be shown that even when explanatorily and predictively successful, T1 is parasitic upon T's conceptual resources. This comes down to showing T1's lack of theoreticity. Consequently, T1 and T2 should indeed be labelled quasi-theories, i.e., nonstarters in the epistemic race.2

In spite of Kukla's manifest opposition, T1 and T2 should be treated separately. T1 belongs to a class of theory-candidates that we have labelled TI in

section 2.3, often embraced by skeptics or agnostics with respect to unobservable entities, urging restriction to belief in observables. Belief in observables is taken for granted, while epistemic ascent to unobservables is deemed unwarranted. Ontologically, a defender of T1 has no quarrel with the assumption that there is a realm of entities existing objectively, independently of our minds, language, and theories. At stake in confronting T with T1 are preeminently epistemic issues: How are we able to – and to what extent can we – ascertain this objectively existing realm? This doesn't seem to be the relevant question with respect to T2. T2 belongs to a family of skeptical scenarios according to which the world is ontologically different from what we can normally believe on the basis of our evidence. T2 entertains assumptions about the ontological constitution of the world different from those of T, though it maintains that they make no observational difference. Thus, T2 urges a clarification with respect to the ontological setup of the world. Accordingly, T1 and T2 cannot be treated on a par. Their rebuttal requires a differentiated argumentation. In the next subsection we'll argue against T2, whose rejection turns out to be rather expeditious in the context of our discussion. Then it is shown how T1 is parasitic upon T's conceptual resources (5.1.2).

1 This formulation of the UD thesis is logically stronger than the one urging empirical equivalence under a finite body of evidence. The present discussion takes for granted that the weaker version of underdetermination is not a threat to scientific realism.

2 Kukla protests against Laudan and Leplin's separate treatment of T1 and T2: "if every proposed algorithm for producing empirical equivalent rivals is going to require us to come up with a new ad hoc rule to insure its disqualification, the theoreticity argument against [the thesis of empirical equivalence] loses all credibility." (Kukla 1996: 151). However, since I argue that the distinction between T1 and T2 is principled, the difference in the arguments rejecting them cannot be considered ad hoc.

5.1.1 The dismissal of T2

A salient feature of the skeptical scenarios of the T2-family is that they are constructed by way of systematic appendages to the ontological presuppositions of T, so that our actual epistemic procedures are not affected. The supposition that the fundamental make-up of the world is very different from what we can infer on the basis of empirical evidence has several famous representatives: the Cartesian demon story, which tells us that the world of familiar objects does not exist and that we are being deceived by a powerful demon; its modern counterpart, the Putnamian brain-in-a-vat story, which says that I may be a brain-in-a-vat artificially stimulated to have all the perceptions that I would have if I had a body and interacted in the normal way with the familiar world; the claim that God created the universe as we think we know it just five minutes ago; the claim that the empirical world is a mere show of ideas going on in my mind, etc.

The possibility that the world's constitution is drastically different when no observation is taking place, and that after observation has resumed the familiar setup is reinstated, qualifies as a scenario of this kind. As a matter of

fact, T2 evokes the consciousness-based problem of measurement in quantum theory: in the absence of measurement, it is meaningless to ascribe reality to the properties of a quantum system. These properties come into existence once the quantum system interacts with an observer's mind. Of course, it is presupposed that human consciousness is a substance which, though able to interact with matter, behaves quite differently from it.

Now the only 'unproblematic' way of accounting for such a dramatic dependence of the world's make-up on the human mind is to embrace a form of idealism, be it subjective (e.g., phenomenalism) or objective (invoking God, a good or an evil demon, the Universal Spirit, etc.). Notoriously, there is no logically non-question-begging way of eliminating these possibilities. Fortunately, the context of our discussion does not depend on such a resolution. The debate over scientific realism is about the existence of unobservable scientific entities, as well as about our ways of coming to know them. As Devitt (2003) notes, realism about observables and the epistemic procedures which establish their existence are taken for granted. Hence, skepticism about the external world is left behind. The parties in the debate are supposed to agree about this much. Thus, we need not be bothered with seeking solutions to a problem which is not at issue. T2 can therefore be discarded as irrelevant. What is at stake in the scientific realism debate is whether the ampliative inferential procedures taken for granted at the observable level can also be relied on to establish facts about scientific unobservables. It is only T1 that threatens scientific realism in this respect.

5.1.2 The dismissal of T1

Recall our argument in 2.1.3 that instrumentalism is unable to offer causal explanations. It has been established that this deficiency is shared by all its brethren from the skeptical-about-unobservables family TI, and this naturally extends to T1. This is a serious handicap for T1, sufficient in itself to account for scientific disregard.

Nonetheless, there is an additional problem which confronts T1: admitting that T1 explains (other than causally, of course), it can do so only parasitically, upon T's explanatory resources. The objection has been formulated by Laudan and Leplin:3

...[T1] is totally parasitic on the explanatory and predictive mechanisms of T ... a [genuine] theory posits a physical structure in terms of which an independently circumscribed range of phenomena is explainable and predictable. (Laudan and Leplin 1996: 13)

3 Since the argument over T2 is, for our purposes, already behind us, I’ll investigate the parasitism objection only with respect to T1.

Recall that T, positing unobservables, is assumed to be empirically successful. Then, as the argument goes, if T1 must always make reference to T in order to explain and predict, T1 is just a parasitic propositional structure. But undoubtedly, science does well without such parasites.

Faced with Laudan and Leplin's parasitism objection, Kukla replies that even if the initial construction of T1 makes reference to T, this does not imply that T1 cannot be characterized so as to avoid the reference to T (Kukla 1996: 149). He points out that simple theoretical structures can be devised so that their observational consequences can be described independently: if T is simple enough, its observational consequences can be described without reference to T. That means that T1 can sometimes be described without reference to T. Concerned about the possibility that this maneuver could be generalized, Leplin modifies the parasitism criterion. For T1 to be independent of T, he urges that T1's class of consequences be specified independently of T:

Kukla wants to eliminate reference to T by specifying directly what the empirical consequences are to be. But the determination of what to specify can only be made by reference to T . That is the point of charge of parasitism. Whether or not reference is made to T in identifying its purported rival is not the proper test of parasitism. (Leplin 1997: 160)

This is a reply which Kukla rightly deems too strong to be fulfilled even by accepted scientific practices. He illustrates with the example of scientists seeking a theory U which unifies two still disparate theories X and Y. "In this case, too, 'the determination of what to specify can be made only by reference to' the conjunction X & Y." (Kukla 2000: 28). That is a correct answer.

However, I believe that Leplin could have made his point without modifying his criterion. I see no threat in algorithmically generating T1 from T and thereafter trying to describe T1's consequences independently of T. After all, that amounts to a direct search for independent empirical equivalents to T. Though this can presumably be done in a few individual cases, there is no warrant that the maneuver can be generalized for every T. As a matter of fact, given the complexity of modern theories, the possibility of an independent scientific description of their observational consequences becomes overwhelmingly implausible. I therefore maintain that the 'proper test of parasitism' is whether T1 is definitionally dependent on T: T1 is parasitic on T if it cannot be formulated other than in terms of T.

Consider both variants of T1: T1-formulated-without-reference-to-T and T1-formulated-by-reference-to-T. The two variants will have to include different theoretical assumptions. Of course, this is of no consequence for the empirical equivalence between T and T1-formulated-by-reference-to-T, since the latter is by definition exactly as empirically successful under all possible

evidence as T. Yet the same cannot be said about T and T1-formulated-without-reference-to-T. Agreed, both T and T1-formulated-without-reference-to-T entail all known evidence, E. They do that in conjunction with the set A of admissible auxiliary (theoretical) assumptions: T & A → E, and T1-formulated-without-reference-to-T & A → E. Nonetheless, as Laudan and Leplin (1996) point out,

auxiliary information providing premises for the derivation of observational consequences from theory is unstable in two respects: it is defeasible and it is augmentable. (Laudan and Leplin 1996: 57)

As the class of auxiliaries changes, the consequences of its conjunction with T and with T1-formulated-without-reference-to-T will change as well. If A becomes A′, then T1-formulated-without-reference-to-T & A′ does not, in general, entail the same observable consequences as T & A′ does. Consequently, T1-formulated-without-reference-to-T is not an empirically equivalent rival of T under all possible evidence. If so, it doesn't pose a threat to scientific realism. Put briefly: either T1-formulated-by-reference-to-T is a rival of T, yet one which is parasitic upon T; or T1-formulated-without-reference-to-T is no algorithmic rival under all possible evidence.

More tellingly, Kukla claims that there are cases in science where T1 plays a role:

Let us grant that the parasitism criterion succeeds in eliminating T1 from the ranks of genuine theoretical rivals. In that case, I would agree that it is far too strong a test for theoreticity, for there are circumstances where structures like T1 have an important role to play in the game of science. Consider the following scenario. (1) Theory T has been well-confirmed, so that its empirical adequacy is widely believed; (2) it is discovered that one of its theoretical principles is inconsistent with an even more firmly believed theory; and (3) no one can think of any way to describe the empirical consequences of T except as the empirical consequences of T. In that case, we might very well come to believe a proposition which has precisely the structure of T1: that the empirical consequences of T are true, but that T itself is false. (Kukla 1996: 149)

So we have a case where T is empirically successful, but on grounds of the inconsistency of one of its theoretical principles with a more fundamental theory, we may only retain T's observational consequences. Although we are assured of the fact that T must be rejected, what we do not know is how much of it could be retained. T's empirical consequences in particular need not be rejected, so Kukla's scenario is logically sound.

My reaction, though not logically decisive, is to point to a double implausibility of this scenario. First, the possibility that the divide between the rejected

and the retained parts exactly corresponds to the observable/unobservable divide strikes me as highly improbable. That is, if we have strong reasons to think that T is wrong, we expect this to show up at the observable level as well. Second, assuming that T is a well-established theory, given the methodological coherence requirements which constrain the process of theory-construction, it is improbable that one of T's theoretical principles will be inconsistent with an even more fundamental principle of that particular field.4

The consequence of these plausibility considerations is that, most of the time, Kukla's latter scenario cannot occur, so that T1 cannot play a systematic role in science. Admittedly, T1 can occur accidentally; but this is by no means a sufficient basis for the algorithmic generation of T1 to become an accepted methodological rule of theory construction. But of course, however great, methodological unreasonableness doesn't entail impossibility. What is then left for the realist is to bite the bullet and admit that he can live with situations in which T is false. And indeed he can, since his doctrine requires that most – not every one – of the well-established scientific theories be truthlike.

Now, by allowing that T might be false, scientific realism has to face the threat of the so-called pessimistic meta-induction, which we have discussed in detail in 2.1.3. If many theories of the past turned out to be false, why should we believe that our current ones fare better? Shouldn't we be wise and learn from history that our current theories will probably turn out false as well? Recall briefly that we basically follow Devitt (1991: 162–5) in contesting the inference from the falsity of past theories to the probable falsity of current theories. Why should this be so? After all, scientific methodology has constantly produced better and better theories committed to unobservables. These, in their turn, have led to an improved scientific methodology. Thus, the standards of truthlikeness have become increasingly high. This is a relevant fact, given that the very evaluation of past theories is made from the perspective of current science. Hence we can establish the falsity of past theories only insofar as we accept the accuracy of modern theories' descriptions. We saw McAllister (1993) arguing that "the theories deemed successful in the history of science were deemed to be so on the basis only of a set of criteria constructed in the light of imperfect knowledge about the properties of theories." This is also supported by Kitcher's (1993) finding that many falsities of past theories can be identified and eliminated. Besides, correspondences can be established between the successful problem-solving schemata and mechanisms in superseded theories and the ones posited by contemporary theories.

4 It follows that the appropriate unit of science ought to be broader than a single theory.

5.1.3 The insufficiency of Kukla's solution to the problem of scientific disregard

Kukla explicitly rejects any solution to the problem of scientific disregard in terms of theoreticity. He argues (Kukla 2000) that a Bayesian account offers a straightforward response. He explicates the fact that algorithmic rivals are disregarded in terms of their having zero prior probabilities:

According to Bayesianism, prior probabilities are free, subject only to the constraint of probabilistic coherence.... Further constraints apply only to how our opinions must change with the receipt of new information. The requirement of coherence, however, already has the consequence that infinitely many theories must be ascribed priors of exactly zero (or priors whose infinite sum converges on a number less than or equal to 1). Given that we have to assign zero probabilities, it’s not at all surprising that there are theories which are utterly disregarded. It’s only rational not to waste any time on impossibilities. (Kukla 2000: 31)

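The coherence point admits of a simple formal gloss (my reconstruction, not Kukla's own formulation). Suppose the algorithmic rivals R1, R2, R3, ... of a given theory are mutually exclusive. Probabilistic coherence then requires

p(R1) + p(R2) + p(R3) + ... ≤ 1,

so that for any ε > 0 at most 1/ε rivals can receive a prior of ε or more; and if the rival set is uncountable, all but countably many of its members must receive a prior of exactly zero. Coherence thus guarantees that zeros must be assigned somewhere, but it is silent on where they fall.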
Kukla maintains that Bayesianism has a built-in explanation for scientific disregard. Nonetheless, the question immediately raised is: why should T1 be assigned zero prior probability, and not T? Kukla believes that the answer to this question is unimportant. He permits an utterly arbitrary choice of the theories which are to be regarded as impossible: "Given a different roll of the dice, we might have disregarded T and pursued T1" (2000: 31). All that matters, in his opinion, is that the disregarded hypothesis is deemed to be impossible.

Yet the subjectivity involved in the ascription of prior probabilities cannot be understood as completely free of methodological constraints. In particular, I have argued that there are quite independent reasons why, at least within the scientific realism debate, it is always T1 which is disregarded, and not T. I have also shown that we have a priori reasons to assign T1 zero prior probability. Contrary to Kukla, I take the theoreticity constraints to be a useful complement to his Bayesian account. He claims that the advocates of the theoreticity approach look at the propositional structures they want to disregard and, to this purpose, come up with ad hoc requirements. Yet it is a matter of fact that algorithmic rivals cannot explain causally and that they are parasitic. Kukla wants to derive support for the underdetermination thesis from his Bayesian account of scientific disregard. However, given my arguments to the contrary, he cannot derive more underdetermination than he has already put in by lifting all methodological constraints (except those of probabilistic coherence) on the ascription of prior probabilities.

5.2 Versions of empirical equivalence

We shall henceforth restrict our considerations about the EE thesis to genuine rivals. We defined the empirical equivalence of two theories T and T′ as the equality of their empirical contents: EC(T) = EC(T′). This can be construed as meaning that T and T′ entail the same observational sentences. The following formulation of the EE thesis is thus obtained:

(EE1) T has genuine rivals that entail the same observational consequences.

It is actually probable that, given a theory T entailing a body of evidence E, there will be a theory T′ entailing E as well. But this kind of empirical equivalence cannot support the strong underdetermination thesis (SUD) which threatens scientific realism. The reason, as Quine puts it, is that all statements face the 'tribunal of experience not individually but only as a corporate body' (Quine 1953: 41). This is the so-called Quine-Duhem thesis.5 The consequence of this holism is that what is tested is not a hypothesis or theory alone, but a whole 'theoretical group', including auxiliary assumptions about the laboratory instruments used in testing it, about the background conditions, etc. Thus, in fact, E is entailed not by T alone, but by T & A, where A denotes an admissible set of auxiliary assumptions: T & A → E. But we saw Laudan and Leplin (1996) arguing that "auxiliary information providing premises for the derivation of observational consequences from theory is unstable in two respects: it is defeasible and it is augmentable." Thus, assuming that the empirical equivalence of T and T′ takes place under the same class of auxiliary assumptions – i.e., T & A is empirically equivalent to T′ & A – then as the class of auxiliaries changes, the consequences of its conjunction with T and with T′ change too. If A becomes A′, then T & A′ does not, in general, deductively entail the same observable consequences as T′ & A′ does. Accordingly, the empirical consequences of a theory "must be relativized to a

5 The term 'Quine-Duhem thesis', coined in the philosophy of science, enjoys wide acceptance. Nonetheless, the detailed examination by Donald Gillies (1998: 302–17) of Duhem's and Quine's respective contributions proves that these contain contradictory elements. On the one hand, Quine extends his holism to all statements, in all scientific disciplines. He maintains that even the claims of logic and mathematics are not immune to revision. Thus, "revision of the logical law of the excluded middle has been proposed as a means of simplifying quantum mechanics" (Quine 1953: 43). On the other hand, Duhem exempts from revision logic and mathematics, as well as certain empirical disciplines – e.g., physiology. Besides, unlike Quine, Duhem believes that scientific methodology cannot rely on logic alone. Good scientists are led by 'bon sens' in resolving theoretical disputes (cf. Duhem 1945: 217).

particular state of science" (Laudan and Leplin 1996: 58). The point is also made by Leplin:

The judgement that rival theories are identical in their observational commitments must be historically indexed to the auxiliary information available for drawing observational consequences from theories, and to the technology and supporting theory that determine the range of things accessible to observation. (Leplin 2000: 397)

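The relativization can be stated compactly (a sketch in my notation, not Laudan and Leplin's). Let At be the class of auxiliaries accepted at time t, and define the empirical content of T at t as

ECt(T) = {e : T & At → e}.

Empirical equivalence thereby becomes a time-indexed relation: ECt(T) = ECt(T′) is perfectly compatible with ECt′(T) ≠ ECt′(T′) for some later time t′, which is precisely the instability that Laudan and Leplin exploit.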
It can thus be seen that EE1 allows the incoming evidence to favor T over T′. This refutes SUD's claim that no possible evidence can justify the choice of T over T′. Obviously, a stronger version of the EE thesis is needed by supporters of underdetermination.

Kukla (1998) has made an interesting move to block Laudan and Leplin's argument from the "instability of auxiliary assumptions" (henceforth IAA). He appropriates Leplin's suggestion of time-indexing the auxiliary assumptions: "every indexed theory has empirical equivalents under the same index" (Kukla 1998: 63). Thus, according to Kukla, for T and T′ to be empirically equivalent at time t is for them to entail the same observations under the common index At, the class of auxiliary assumptions accepted at time t. As Devitt (2003) indicates, this yields the following formulation of the EE thesis:

(EE2): T has genuine rivals which are such that when T and any of the rivals are conjoined with At, they entail the same observations.

Prima facie, Kukla's approach does not bring forth any substantial difference from the previous situation: if T & At is empirically equivalent to T′ & At, it does not follow that T & At′ will be empirically equivalent to T′ & At′, where t′ > t. Thus, the effectiveness of the IAA-argument is undiminished, so that EE2 is as inapt as EE1 to entail SUD.

Kukla then emphasizes that to believe at time t that T and T′ are empirically equivalent is to believe that they are atemporally empirically equivalent:

The point is that we know that, whatever our future opinion about auxiliaries will be, there will be timeless rivals to any theory under those auxiliaries. (Kukla 1998: 63)

This is logically equivalent to claiming that there are empirically equivalent total sciences, meaning the "conjunction of any 'partial' theory and all acceptable auxiliary theories we deem to be permissible" (1998: 45). If that were correct, then EE2 would indeed lead to SUD, for IAA could at most establish that partial theories can be discriminated by their empirical consequences. IAA would simply not apply to total sciences, because, as Boyd indicates, "total sciences are self-contained with respect to auxiliary hypotheses." (Boyd

1984: 50). In other words, a total science entails by itself all its observational consequences, since all the needed auxiliaries are already part of it.

Nonetheless, the way in which Kukla construes the notion of a total science is problematic. For one thing, he seems to take it that once accepted, a theoretical sentence will never be rejected. This is the only way to make sense of his claim that "it doesn't matter which partial theory we begin with – the end result will be the same [i.e., a total science]" (1998: 64). This does not square with the lesson which the debates about the dynamics of theories taught us, namely that science cannot be understood as a continuously growing corpus of accepted statements; there is a good deal of rejection of old statements taking place, too. What is true in them is often taken over by new formulations, corresponding to new interpretations and new mathematical formalisms. We saw earlier that scientific realism copes well with that. Besides, the "end result" envisaged by Kukla could only be contemplated at the "end of time", by checking the list of all assumptions ever accepted in the history of science. Until then, we have to live with new auxiliary assumptions appearing every minute. For a fan of the concept of a total science, IAA emerges now as a result of our ignorance about what the list of all accepted assumptions will look like on Judgement Day. Finally, and more to the point, it is significant that Kukla speaks of two or more – possibly infinitely many – total sciences. Even granted that we could somehow fix the above-mentioned difficulties, it is obvious that talk of two or more total sciences begs the question of underdetermination by assuming that all scientific theories face empirically equivalent rivals. But this already means assuming that SUD is true, whereas SUD is supposed to follow from EE2.

We may thus conclude that Kukla's time-indexing maneuver cannot circumvent the IAA-argument. In order to establish SUD, a considerably stronger version of the EE thesis seems necessary:

(EE3): For every theory T and for any possible evidence E, there are genuine rivals of T entailing the same evidence E under the same body of auxiliaries.

If that were the case, all theories would be indistinguishable by any possible evidence. These would indeed be hard times for the empirical sciences, since any claim to objectivity would have to be suspended. Fortunately, there is no reason to believe that EE3 is in place. Besides, as will be argued below, even if it were true, EE3 could not be used to establish SUD. Note first that there are interesting cases of theories empirically equivalent under all possible evidence. Earman (1993) gives the example of a four-

dimensional formulation of Newtonian mechanics which is empirically indiscriminable from a mechanics adopting a non-flat affine structure and relinquishing gravitation. Poincaré (1902) mentions empirically indistinguishable theories about the structure of space. Let us take a more detailed look at one of the cases of underdetermination constructed by Newton-Smith and Lukes (1978) with respect to the structure of space-time. The example concerns the dense and, respectively, continuous characters of space and time as they are represented in different mechanical theories. In a rigorous axiomatization of Newtonian mechanics, space and time are postulated to be continuous. That is, the points along an interval are mapped onto the real numbers. The motion of a particle can thus be represented by continuous functions from real numbers representing time to real numbers representing spatial coordinates. We can also define higher-order kinematic notions, like velocity and acceleration, obtained by successive differentiations of the position function. However, given the limited – to some finite number of decimals – accuracy of our measurements, we can only ascertain a dense structure of space and time, that is, spatial and temporal coordinates isomorphic to intervals of rational numbers. In other words,

the conjecture is ... that different hypotheses about space and time (mere density versus continuity) are compatible with all actual and possible measurements. While it is no doubt simpler to represent space and time continuous rather than merely dense, it might be that this is merely a matter of convenience, and that no measurement data can decide the matter. (Newton-Smith 2000: 535)

The computational difficulty of the dense representation comes from the fact that it makes differentiation impossible. Therefore, the corresponding mechanics – Notwen's mechanics, as Newton-Smith calls it – deals with average velocities and accelerations, which are mere approximations of the Newtonian, instantaneous values. One can also argue that Notwen could employ the full range of mathematical techniques used by Newton. Thus again, considering the sets of measurements made from either perspective, the two mechanical theories seem to be empirically equivalent:

Notwen’s theory with its postulation of merely dense space and time and Newton’s theory with continuous space and time are clearly incompatible. However the theories are empirically equivalent in the sense that an ob- servation counts for (against) Newton if and only if it counts for (against) Notwen. Notwen and Newton will test their theories by measuring the values of the parameters and plugging these values into the equations to generate predictions. ...the measured values with which they both begin will be represented by rational numbers. In a world in which Notwen’s

theory is successful a test of Notwen's theory will involve predictions of rational values for parameters which subsequent measurement supports. In this test Newton may predict the parameter to have a nearby non-rational value. On the other hand, if Newton's theory is borne out Notwen can find a value h which is such that his theory is confirmed by the observation confirming Newton. ...Thus, the choice between these theories is an empirically undecidable matter. (Newton-Smith and Lukes 1978: 85)

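The root of the equivalence can be put in a compact form (a sketch of my own, not Newton-Smith and Lukes' notation). Any measurement reported to n decimal places delivers a rational number of the form m/10^n. Since the rationals are dense in the reals, for every real-valued prediction x and every precision n there is a rational q with |x − q| < 10^−n. Hence no finite-precision measurement can discriminate Newton's continuous coordinates from Notwen's merely dense ones: every possible data point fits both theories equally well.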
One can anticipate that the realist will want more than a mere fit with the observational data to count as empirical grounds for assessing the verisimilitude of a theory. Sometime in the future, it may be that one of the two mechanical theories (but not the other) will be embedded into a more general theory of wider scope, with a high degree of empirical success. According to Zahar (1973), this was the case with the Special Relativity theory, on the one hand, and the Lorentz ether-drift theory, on the other. In 1905, in light of the evidence available at that time, the two theories were empirically equivalent. Nonetheless, Special Relativity was later preferred on the grounds of its compatibility with the General Relativity theory. We believe that this is what actually happens most of the time in the history of science with empirically equivalent theories: one of them is embedded in a new, more general theory, and thus benefits from indirect confirmation and from an extended range of applications; the rival loses more and more supporters, until it is practically abandoned. As will be seen in chapter 7, this was the fate of the S-matrix theory in high energy physics.

Newton-Smith and Lukes retort that "there is no reason to assume a priori that the best total physical theory (if there be such a theory) will decide between the rival hypotheses or that there is a unique best total theory as opposed to two empirically equivalent total theories" (Newton-Smith and Lukes 1978: 86). This answer clearly evokes Kukla's appeal to total sciences. For the reasons just presented, I deem this move untenable.

Newton-Smith (1999, personal correspondence) also suggests that there may be a peculiarity of space-time theories which allows the construction of interesting cases of empirical equivalence. There is no doubt something in it, since most examples of empirical equivalence have been constructed via mathematical transformations on the structure of space-time. But there are also exceptions. As Cushing (1990; 1994) indicates, quantum mechanics has two empirically equivalent interpretations: the dominant Copenhagen interpretation and Bohm's interpretation. Also, for decades, quantum field theory was considered to be empirically equivalent to the S-matrix theory of strong interaction, until the latter was abandoned without being falsified (see chapter 7).

However, returning to EE3, there is surely no reason to admit that all theories are empirically equivalent. In fact, there are in scientific practice quite

few theories having empirically equivalent rivals in the sense of EE3. Most cases of empirical equivalence are of the EE1 sort, which, as we have seen, is benign to scientific realism.

Assuming that EE3 is in place, would it actually entail SUD? In line with Leplin (1997), we argue that EE3 cannot be used to infer SUD, because if SUD is true, then EE3 is undecidable. In other words, the argument purports to show that UD entails the negation of EE (UD → ¬EE), which is logically equivalent to ¬UD ∨ ¬EE. From this disjunction it follows that EE and UD cannot both be true. With one stroke, Leplin elegantly blocks the impetus of the entire underdetermination argument. Here is the key passage of the argument – though rather cryptic with respect to the consequences Leplin wants to extract from it:

Because theories characteristically issue in observationally attestable predictions only in conjunction with further, presupposed background theory, what observational consequences a theory has is relative to what other theories we are willing to suppose. As different presuppositions may yield different consequences, the judgement that theories have the same observational consequences – that they are empirically equivalent – depends on somehow fixing the range of further theory available for presupposition. And this UD ultimately disallows. (Leplin 1997: 155)

Leplin states that every epistemologist, realist or antirealist, has to employ some epistemic standards to appraise the admissibility of the auxiliaries conjoined with a given theory. Those auxiliaries which are independently warranted by empirical evidence are deemed admissible. Given theory T and a set of admissible auxiliaries A, we conjoin T & A to derive the prediction P (T & A → P). Depending on P's truth-value, we expect T to be confirmed or disconfirmed. Certainly, this demands that A have been independently tested, and be better supported than T. Otherwise, given the Quine-Duhem problem, the confirmation from P would be distributed indeterminately over both T and A. This would make the use of A for the purpose of testing T uncertain, for the auxiliaries would "be subject to the same immunity to probative evidence as afflicts theory in general" (Leplin 1997: 155). If no theory can be preferred over its empirically equivalent rivals, as SUD urges, then the epistemic standards for the admissibility of A cannot be met. If there is no basis for choosing A over some empirically equivalent set of auxiliary assumptions, then there is no fact of the matter as to what the empirical consequences of T are. Consequently, there is no way to establish whether the empirical consequences of T and of its empirically equivalent rivals are the same. In other words, assuming SUD entails that EE3 cannot be established. As Leplin phrases it, "EE cannot be used to obtain UD, because if UD is true then EE is undecidable" (Leplin 1997: 155).

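Leplin's reasoning can be compressed into a chain (my own rendering of his steps):

SUD ⇒ no warranted selection of auxiliaries A ⇒ no fact of the matter about EC(T) ⇒ EE3 undecidable.

Hence SUD → ¬EE3, i.e., ¬(SUD & EE3): the two theses cannot both be true.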
In summary, the formulations of EE are either too weak to lead to SUD, or so strong as to undermine SUD itself. The former (EE1 and EE2) are not problematic for scientific realism, while there is no reason to accept the latter (EE3).

5.3 Arguments against the entailment thesis

We shall now examine and refute an argument formulated by Laudan and Leplin against the entailment thesis – the claim that the empirical equivalence thesis (EE) entails underdetermination (UD).

5.3.1 EE does not entail UD

Laudan and Leplin (1996) claim that even if we had a general assurance that any theory has an empirically equivalent rival (which we definitely do not), there would still remain no reason to suppose that selection among empirically equivalent rivals is underdetermined by evidence. The argument is two-pronged and consists in the following theses:

(1) Hypotheses may be evidentially supported by empirical facts that are not their own empirical consequences, and

(2) Hypotheses may not be confirmed by empirical facts that are their own empirical consequences.

Let us begin with the first one. Laudan and Leplin criticize the claim that evidential support accrues to a statement only via its positive instances. Here is how they mean to prove this:

Theoretical hypotheses H1 and H2 are empirically equivalent but conceptually distinct. H1, but not H2, is derivable from a more general theory T, which also entails another hypothesis H. An empirical consequence e of H is obtained. e supports H and thereby T. Thus e provides indirect evidential warrant for H1, of which it is not a consequence, without affecting the credentials of H2. Thus one of the two empirically equivalent hypotheses or theories can be evidentially supported to the exclusion of the other by being incorporated into an independently supported, more general theory that does not support the other, although it does not predict all the empirical consequences of the other. The view that assimilates evidence to consequences cannot, on pain of incoherence, accept the intuitive, uncontroversial principle that evidential support flows "downward" across the entailment relation. (Laudan and Leplin 1996: 67)

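Before unpacking the argument, it helps to fix its logical skeleton in schematic form (my reconstruction, not Laudan and Leplin's own notation):

T → H1, T → H, H → e, while T does not entail H2.

Since T entails H1, probabilistic coherence requires p(H1) ≥ p(T). So if, for illustration, e raises p(T) from 0.3 to 0.6 while p(H1) initially stood at 0.5, coherence forces p(H1) up to at least 0.6 – and all this without touching p(H2).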
The argument purports to show that in most cases of empirical equivalence we have the means to discriminate between the two hypotheses. Let us call H1, the hypothesis to be indirectly confirmed, the target hypothesis; T, the more general theory entailing both H1 and H, the bridge theory; and e, the evidence directly confirming H, the indirect confirmation of H1. Phrased in these terms, the reasoning is as follows. The bridge theory has both the target hypothesis and H as consequences. H receives a strong confirmatory boost from the fact that e is in place. Given e, the confirmation is transmitted through H to T. That is, e's truth entails an increase of p(H), which, in turn, entails an increase of p(T). If this increase is large enough, then the probability of the bridge theory, p(T), can become greater than the probability initially ascribed to the target hypothesis. But since the target hypothesis is a consequence of the bridge theory, it cannot be that p(H1) < p(T). Therefore, in order to restore probabilistic coherence, p(H1) needs to be raised to at least the level of p(T). It follows that the target hypothesis has been indirectly confirmed by a piece of evidence which is irrelevant to H2, H1's empirically equivalent rival. Put differently, an empirical discrimination between the two empirically equivalent hypotheses has taken place. Accordingly, it is not the case that EE entails SUD, even if the former were taken in its strongest formulation, EE3.

However seductive, this argument is question-begging. As Kukla (1998: 86) remarks, Laudan and Leplin's misstep consists in assuming that the confirmatory boost given by e goes to the bridge theory. Yet, by EE3, the bridge theory itself has empirically equivalent rivals, which are also confirmed by e. To opt for T and not for one of its empirical equivalents in the course of an argument against underdetermination is to beg the question of realism, for epistemic antirealism claims that the very option for T is underdetermined. The argument assumes that antirealism is false while trying to dismiss it. The problem, as I see it, is not with the reasoning about indirect confirmation, but with the false assumption that EE3 is harmless. We have seen that it is not.

Laudan and Leplin are convinced of having established that evidential results relevant to a hypothesis need not be consequences of that hypothesis. They believe this is sufficient to undermine the entailment thesis from empirical equivalence to underdetermination. For instructive purposes, they also display the converse point, that a hypothesis may fail to be confirmed by its own direct consequences. They present the following scenario of absurd pedagogy:

Suppose a televangelist recommends regular reading of scripture to induce puberty in young males. As evidence for his hypothesis (H) that such readings are efficacious, he cites a longitudinal study of 1,000 males in Lynchburg, Virginia, who from the age of seven years were forced to read scripture for nine years. Medical examinations after nine years definitely established that all subjects were pubescent by age sixteen. The

putatively evidential statements supplied by the examinations are positive instances of H. But no one other than a resident of Lynchburg, or the like-minded, is likely to grant that the results support H. (Laudan and Leplin 1996: 68)

The above tale shows how biological evidence of the puberty of the Lynchburg youngsters seems to be logically entailed by a televangelist's silly hypothesis, which is certainly absurd. It is tacitly assumed that the only reasonable explanation for puberty is provided by human physiology. But remember that EE3 is accepted. The consequence of this acceptance is that, as a logical matter of fact, all the empirical equivalents of human physiology are confirmed by the results of the teenagers' medical examination. Thus, under the assumption of empirical equivalence, physiology is strongly underdetermined. To stick to the physiological theory in this context is simply to take scientific realism for granted.

In particular, the supposition that the results of that study are the only empirical evidence accepted in Lynchburg and that, for dogmatic reasons, no methodological innovation is allowed, makes human physiology and the televangelist's hypothesis empirically equivalent in Lynchburg. The latter notion is of course a caricature, but one which obviously evokes our previous discussion of 'total sciences'. The dogmatic limitation to a certain amount of knowledge corresponds to the previously mentioned idea of an "end of time" which seems to be required by the notion of a total science. Certainly, healthy science knows no such limitations. But even so, the acceptance of EE3 proves to be very problematic for Laudan and Leplin's argument. Further auxiliary assumptions allow for new pieces of evidence to be produced, so that physiology is rationally preferred to the televangelist's tale. But even in light of the evidence entailed in conjunction with new auxiliaries, the theory of human physiology has indefinitely many empirical equivalents. If EE3 is true, the theory will have empirical equivalents under any possible evidence. Thus Laudan and Leplin's reverse path only indicates that the televangelist's tale is silly, but supports no positive conclusion as to which explanation is actually true. The counter-intuitiveness of this conclusion points to the fact that the concession to EE3 is excessive, even when made merely for the sake of the argument.

A concluding remark: by repelling Laudan and Leplin's argument against the entailment EE3 → SUD, we have not demonstrated that EE3 → SUD is true. The implication would also hold when both EE3 and SUD are true. But we have seen that Leplin's (1997) argument demonstrates that SUD → ¬EE3, which forbids that EE3 and SUD be simultaneously true. By corroborating these results, we conclude that EE3 and SUD cannot both be true, though they can both be false. Our independent argumentation has

pointed towards the truth of the latter.

Chapter 6

Social Constructivism

Constructivism is a fashionable term in today's prolific literature produced in fields as various as sociology, literary criticism, gender studies, legal studies, political science, cultural studies, and several others. Ian Hacking (1999) lists no fewer than sixty alleged cases of socially constructed entities. These include people, objects, states, conditions, events, practices, relations, experiences, substances, and concepts, as well as facts, reality, knowledge, and truth – the last four called 'elevator words', because they raise the level of discourse, both rhetorically and semantically. The contexts of construction are no less heterogeneous. Hacking's list mentions gender, quarks, illness, Zulu nationalism, Indian forests, the past, emotions, serial homicide, authorship, the self, and many others. Faced with such a dazzling diversity, there is little hope that a comprehensive definition could subsume the meaning of construction in all such contexts. It is in fact characteristic of constructivism that for any belief, the appropriate question is why it is held; as Hacking states, "Don't ask for the meaning, ask what's the point." (Hacking 1999: 5).

With respect to scientific practice, constructivism refers to a cluster of approaches involving empirical studies that emphasize the social nature of scientific practice. Its supporters typically maintain that the social factors belonging to the micro-structure of scientific practice literally lead to the creation of facts about the world.

The antirealism engendered by constructivism, both ontological and epistemic, has been the subject of various philosophical criticisms. One point of dissatisfaction is the lack of rigor – both expository and analytical – that constructivists frequently display in their case studies. As Arthur Fine phrases it,

when it comes to defending their doctrines, constructivists tend to rely more on polemics rather than on careful argument. Their rhetorical style, moreover, is at once romantic and apocalyptic. They portray themselves

in the vanguard of a new dawn in understanding science, a profound awakening that sweeps away oppressive philosophical categories – truth, reality, rationality, universality. (Fine 1996: 232)

In spite of these 'sins' which, unsurprisingly, gather little sympathy from analytical philosophers, the question to be raised is whether there is anything consistent and nontrivial that constructivism can tell us, given that very few philosophers would nowadays deny that social factors have a role to play in science. We set out to show that although constructivism, in most of its variants as distinguished below, is either inconsistent or unacceptable, some of its ideas are defensible and ought indeed to be retained by any sophisticated philosophy of science.

6.1 Varieties of social constructivism

Consider an entity X. What does it mean to say that X is socially constructed? Before trying to answer this, two senses of "social construction" need to be distinguished. There is, first, a pejorative sense of the notion, in which the social and historical circumstances that give rise to relationships of dominance are critically unmasked. In this sense, the characteristic of an entity X being constructed is its being contingent; in Hacking's (1999: 6) words, "X need not have existed, or indeed be at all as it is. X is not determined by the nature of things; it is not inevitable." There is a strong emancipatory impetus triggered by this feature: the idea that objects and institutions whose meanings were deemed to be established, as it were, by nature itself turned out to be merely the product of social circumstances is "wonderfully liberating" (Hacking 1999: 2). To take one example, feminist writers invested much effort into showing that gender and its incumbent social roles are imposed on us in order to serve ideological interests of which most of us may not be aware.1

A second sense of the concept of 'social construction' is a non-pejorative attempt at investigating the mechanisms by which social relations are constitutive of certain kinds of entities. In particular, John Searle (1995) takes the following constitutive rule formula to be the key to social constructivism: X counts as Y in context C, where X are material entities which acquire a status Y in the socio-cultural context C. For example, "Bills issued by the Bureau of

1 More radical than this reformist position is that of Judith Butler, who asserts that "the construct called 'sex' is as culturally constructed as gender" (1990: 7), to the effect of denying that human sexuality is a biological given. Most extremely, Monique Wittig (1992) rejects the whole categorial system relying on the conventional sexual and gender dualism and commends the lesbian as a lucid 'agent of revolution' who refuses to be either man or woman.

Engraving and Printing (X) count as money (Y) in the United States (C)" (1995: 28). The new status Y could not exist in the absence of a system of constitutive rules by which functions are imposed through the continued cooperation of the agents, who are aware of and accept Y.

Now, there is an obvious tension between the two senses of 'social construction', and, correspondingly, between Hacking's and Searle's approaches. According to Hacking, Searle's book is not really about social constructivism (Hacking 1999: 12). It is a commonplace for Hacking that money, for instance, is a social product. Yet, according to Hacking, a precondition for properly arguing that an entity X is socially constructed is that "in the present state of affairs, X is taken for granted; X appears to be inevitable" (1999: 12). Otherwise, there is no point in talking about social construction.

On the other hand, Searle's understanding of social constructivism is not related to any emancipatory ambitions. He offers a comprehensive and systematic account of how socially constructed reality aggregates into an objective world. As far as I am concerned, while I am sympathetic with the spirit of Searle's approach, my motivation is to see how far it can be the case that entities such as quarks (not only as ideas, but also as objects!), posited by successful scientific theories, are social constructs – i.e., contingent products of intelligent collective action – and not determined by the world. That is why I find it appropriate to take the following mix of Hacking and Searle as clauses inclusive enough to encompass the plurality of schools and types of constructivism:

(1) X is produced by intentional collective human activity.

(2) X is not inevitable; that is, X need not have existed, or be as it presently is.

As with Searle, I stress that the kind of intentionality needed for constructivism is collective. Intentionality is taken here to mean those features of representations by which they are about something or directed at something. Further, constructivism is about a publicly accessible world, so that individual intentionality (private beliefs, desires, fantasies, etc.) plays no role on the constructivist scene unless shared by many members of the society.2 Nonetheless, unlike Searle, I shall not insist that "collective intentionality is a biologically primitive phenomenon that cannot be eliminated in favor of something else" (Searle 1995: 24; my italics). Neither shall I search deeper into the question of whether individual intentionality is fundamental or derived from collective intentionality. Both issues lie beyond my current interest.

2A discussion could certainly take place about how individual beliefs, desires, and the like can come to be collective.

The two above clauses are not independent. Although (2) does not entail (1), (1) does entail (2): the evitability of X comes from the fact that human agents could have chosen to do something different. In Kukla's words,

The type of possibility at issue in constructivist claims is the option of free agents to do something other than what they actually did. (Kukla 2000: 3)

Adapted to scientific knowledge, (1) and (2) become, respectively,

(1') Scientific facts are not given, but are produced by scientists' choice-making involved in collective theory construction.

(2’) Scientists’ are not inevitable.

Obviously, these clauses require detailed qualifications. To this purpose, several kinds of social construction ought to be identified. According to the philosophical assertions they involve, I distinguish, in line with Kukla (2000), among a metaphysical, an epistemic, and a semantic version of social constructivism.

Metaphysical constructivism – henceforth MC – is the thesis according to which the facts about the world we live in are socially constructed.3 It is of course useful to make further, finer-grained distinctions under the label of metaphysical constructivism. I suggest an ordering on two dimensions: a vertical dimension, corresponding to the kind-inclusiveness of the constructed entities, and a horizontal one, corresponding to the scope of the construction.

Let us begin with the vertical dimension. First, it is common sense that artifacts such as radios, sandwiches, houses, and the like are fabricated in a sense which straightforwardly satisfies (1) and (2). That is, the socially constructed character of artifacts can be taken as literally and obviously true. It is part of the meaning of 'artifacts' that they are constructed. Second, the claim that, apart from artifacts, ideas such as numbers, values, concepts, theories, etc., are constructed is somewhat bolder, though also typically easy to accept.4 Third, the assertion that the referents themselves of ideas are constructed is so strong as to seem unbelievable to many. For example, the claim is not merely that the concept of a 'quark' is socially constructed, but also that quarks themselves are socially constructed. The latter is certainly an intriguing thesis, in need of detailed argumentation.

3 For expository reasons, 'social constructivism' will frequently be abridged to 'constructivism'.
4 Platonists find this step objectionable. For example, epistemic values like 'don't accept inconsistent theories' are not deemed to be mere constructions.

On the horizontal dimension, the varieties of metaphysical constructivism align according to the quantifier under which the construction takes place. If one means that some facts about the world are constructed, then one is a moderate metaphysical constructivist – henceforth MMC. The idea that some facts are constructed is of course compatible with another realm of facts being unconstructed. Further, if one implies that all facts (not only actual, but also possible) which are ever knowable to us are constructed, then one is a strong metaphysical constructivist – henceforth SMC. Finally, if one asserts that there is no independent unconstructed reality, then one is thereby a radical metaphysical constructivist – henceforth RMC. To claim this is not merely to say that all facts that we can ever know are constructed, but also that absolutely all facts are so, including those that are inaccessible to human knowledge by any possible method. Following Kukla (2000: 25), we call these facts that are inaccessible in principle noumenal facts. Even assuming that all the facts that we know of are constructed, it does not yet follow that the world in its entirety is constructed. There may be facts unknowable to us, whose construction cannot be asserted. Yet the radical metaphysical constructivist makes the stronger claim that there are no unconstructed noumenal facts. The ground for such a claim can consist either in some valid argument against the existence of a noumenal realm, or in showing it to be socially constructed. Many would take the latter claim to be incoherent, but not everyone. Kukla (2000: 25), for example, suggests that something like the psychoanalytic notion of the 'unconscious' would fit the bill. He has not developed the idea, and neither shall we, because it will soon be seen that strong metaphysical constructivism – and a fortiori radical metaphysical constructivism – suffers from irredeemable inconsistencies.

Employing these two dimensions, we can pigeonhole different sorts of metaphysical constructivism. However, some of them must be filtered out from the beginning, either because they are irrelevant or because they are incoherent. It is for artifacts a definitional matter that they are constructed; that is, it is an analytic sentence that artifacts are constructed. As such, prefixing a quantifier would garner no further information: 'Some artifacts are constructed' is exactly as true as 'All known artifacts are constructed' or as 'Absolutely all artifacts are constructed.' In all cases, the truth value is known a priori.

With respect to ideas, no such limitations are present. One can embrace either of the following doctrines: 'Some ideas are constructed', 'All known ideas are constructed', or 'Absolutely all ideas are constructed'. The latter sentence presupposes the notion of an ideal realm – something like Popper's Third World or the Platonic topos eideticos – of abstract entities, some of which are epistemically inaccessible to humans. Such a realm might be constructed by non-human rational beings, or by some higher intelligence. But

128 it’s contentious whether there can be any principled demarcation of a sphere of ideas inaccessible to human rationality. As to facts about the world, metaphysical constructivism again reveals some complications. Metaphysical constructivism about some facts (MMC), and respectively about all known facts (SMC) raises no understanding prob- lems. We distinguish among everyday facts, scientific facts, religious facts, moral facts, etc. We speak accordingly of commonsense constructivism, sci- entific constructivism, religious constructivism, etc. Our focus will predomi- nantly be on the metaphysical constructivism about scientific facts. MMC and SMC presuppose the existence of an unconstructed world of brute facts, out of which constructed facts are made. We argue that this is the case following the reasoning of Searle (1995). Searle presents a transcen- dental argument for the existence of the external world. In a transcendental argument, one assumes that a certain condition holds and then tries to depict the presuppositions of that condition. As a matter of fact, Searle presents two distinct arguments corresponding to the two kinds of antirealism questioning the existence of an the external world: phenomenal idealism (the view that all reality consists of mental states) and social constructivism. Although we focus on the latter, it should be observed that Searle’s argumentation does not apply to radical metaphysical constructivism (RMC), which differs from phenome- nal idealism in admitting the independent existence of a socially constructed, publicly accessible world. We shall return to this immediately. Searle proposes a distinction between the brute reality of the external world and the reality of socially constructed facts:

The simplest way to show that is to show that a socially constructed reality presupposes a reality independent of all social constructions, because there has to be something for the construction to be constructed out of. To construct money, property and language, for example, there have to be raw materials of bits of metal, paper, land, sounds, and marks, for example. And the raw materials cannot in turn be socially constructed without presupposing some even rawer materials out of which they are constructed, until eventually we reach a bedrock of brute phenomena independent of all representations. The ontological subjectivity of the socially constructed reality requires an ontologically objective reality out of which it is constructed. (Searle 1995: 190)

The conclusion of the argument, to which I fully subscribe, is that SMC presupposes a realm of unconstructed facts: there simply must be brute facts out of which the social ones are constructed. Now metaphysical constructivism with respect to absolutely all facts about the world (RMC) is only possible at the price of an ontological hiatus. To assert the social construction of all facts about reality entails that the existence

of a brute external world is itself a constructed fact. Consequently, there would be no brute facts out of which collective human intentional action could construct other facts. Note that there seems to be no non-question-begging argument against those varieties of constructivism which make the leap over this ontological hiatus. Some considerations about the eventuality of idealist scenarios were provided in the preceding chapter, when the idea of algorithmically generated empirical equivalents was dismissed. These scenarios urge an ontological clarification which cannot be achieved by argumentation, since any reasoning can be plagued by Cartesian scepticism. This is why I'll cut this Gordian knot by choosing realism about the external world, as a matter of plausibility. I believe that stones and trees and cats, as well as electrons and genes, exist materially out there, independently of us, and not as ideas or figments of imagination, or as some sort of spiritual substance. This assumption is shared by both MMC and SMC.

Epistemological constructivism – henceforth EC – is the claim that rational belief warrant is not absolute, but socially constructed, and hence it makes sense only relative to a culture or paradigm. As will be seen later, the natural name for EC is epistemic relativism.

Semantic constructivism – henceforth SC – is the claim that meanings are socially constructed, that is, they are the byproduct of a consensual social activity. Since consensus is liable to fortuitous changes, meanings are not determinate. This idea was primarily inspired by Wittgenstein's considerations of the foundations of mathematics. Wittgenstein's reflections have famously been developed by Kripke (1985), who has further inspired David Bloor's doctrine of finitism. According to Bloor, 'meaning is constructed as we go along'. The future applications of a concept are not 'fully determined by what has gone before' (Bloor 1991: 164). Finitism will be critically discussed in the next section.

These variants of constructivism are, at least prima facie, largely independent of each other, although they are often taken in a lump by both friends and foes of constructivism.5 In order to see this, let us begin by checking the relation of MC to EC. On the one hand, MC does not entail EC: the claim that

5 As to the latter, Devitt's definition of constructivism is illustrative: "A metaphysical doctrine which combines two Kantian ideas with relativism. The Kantian ideas are that the known world is partly constructed by our imposition of concepts (and the like); and that there is an unknowable world that exists prior to our imposition of concepts. The addition of relativism yields the view that the different groups impose different concepts making different worlds." (Devitt 1999: 308)

facts about the world are socially constructed goes well along with the nonrelativistic claim that our beliefs about them are true, or rationally warranted. For example, it is an institutional fact that money can buy things. This is so whether I believe it or not.6 The belief is either absolutely true or absolutely false. Likewise, one can maintain that scientific facts are constructed, while also maintaining that the justification of one's beliefs about them comes with the imperatives of a universal rationality. On the other hand, relativism of beliefs is compatible with the realist assumption of an external, objectively existing world – provided one does not pretend that every belief gets it right – so that EC is shown not to entail MC.

Consider further the relation between MC and SC. They are also independent: the eventuality that the world consists of constructed facts has nothing to do with our capacity of representing the world accurately. Whether the world is (or is not) made up of constructed facts is one thing, and whether language hooks onto the world or not is another. No doubt, many constructivists will claim that different languages construct different worlds, but this actually conflates metaphysics and semantics.

The relation between EC and SC is not one of complete independence. On the one hand, it is true that EC does not entail SC: epistemic relativism does not exclude that sentences have non-conventional, determinate meanings. Hence, a belief can be absolutely truth-evaluable even if there is no absolute warrant for it. This shows EC to be compatible with the denial of SC. On the other hand, the reverse entailment (SC → EC) seems to hold. Kukla suggests the contrary. He suggests the possibility that sentences lacking determinate empirical content be compatible with absolutism (non-relativism), owing to tacit knowledge:

Sentences have no determinate empirical content, but we may still have non-propositional knowledge about the world that is absolutely correct. That is to say, we may tacitly know what happens next and act accordingly, even if we're unable, even in principle, to say what happens next. (Kukla 2000: 6)

However, knowing what happens next requires that I know tacitly that the next entity X is indeed an X. Usually this assumes that there are real kinds and that X belongs to one of them. Yet, semantic constructivism also denies that real kinds have determinate meanings. Accordingly, what is actually needed is a notion of tacit knowledge which transcends any division into kinds, and to

6 Naturally, the condition is that most people do believe that money can buy things. If no one – or too few people – believed that, then money would lose the definitional property of being an exchange value.

make sense of it may be very difficult. Thus it seems, after all, that SC does entail EC. Be that as it may, the forthcoming discussion will reveal more complex links at a deeper level among MC, EC, and SC.

6.2 The reflexivity problem

An immediate reaction against constructivism is to object that its claims fall under their own scope, and are thereby self-refuting. I call this predicament the reflexivity problem, inspired by the use of the notion in the social sciences, whose subjects are themselves epistemic agents who can be influenced by their own beliefs about the generalizations of social theory (cf. Rosenberg 1988). Also appropriate is the name that Hacking (1999) gives this phenomenon: the looping effect of human kinds, which he describes as follows:

People [classified within one kind or another] can become aware that they are classified as such. They can make tacit or even explicit choices, adapt or adopt ways of living so as to fit or get away from the very classification that may be applied to them. (Hacking 1999: 34)

The social sciences are thus supposed to be distinguished from the natural ones, whose object of study is by and large indifferent to our categorial system. Reflexivity in the social sciences is well illustrated by self-fulfilling prophecies and 'suicidal' predictions. A nice example of the latter is given by Rosenberg (1988):

An economist surveys farmers' costs and the current price of wheat and, plugging these data into his theory, predicts that there will be a surplus this fall and the price will decline. This prediction, circulated via the news media, comes to the attention of farmers, who decide to switch to alternative crops in expectation of lower wheat prices. The results are a shortfall of wheat and high prices. (Rosenberg 1988: 96)

The essential factor leading to the self-undermining character of many economic predictions consists in their very dissemination. The economic prediction was falsified because its subjects, after having become acquainted with it, changed their behavior. By contrast, physical theories are not and cannot be influenced in their subject matter by the fact of their dissemination.

The doctrine of social constructivism must itself face the problem of reflexivity, since it posits the intentional collective action of human agents, and since these are subject to a looping effect. In what follows, we shall inspect the reflexivity of social constructivism under the specific formulations of MC, EC, and SC, and conclude that while MC can cope with reflexivity, EC and SC display fatal inconsistencies.

6.2.1 The reflexivity of metaphysical constructivism

Reflexivity is problematic for strong metaphysical constructivism (SMC), which claims that all facts about the known and the knowable world are socially constructed. If we grant that this is a fact, then the meta-fact that it is a fact that all facts about the world are constructed is itself constructed, as is the meta-meta-fact that the meta-fact is constructed, and so on ad infinitum.

The moderate constructivist (MMC) claim is that only some facts about the world are socially constructed, while the rest are not. Even if the fact that some facts are socially constructed is itself a part of the class of socially constructed facts, its meta-fact need not be socially constructed. Collins and Yearley (1992), for example, claim that nature is socially constructed, but that the social realm is not. In their understanding, what Newton says about the world is a construction, but when the sociology of science says that Newton had such and such an interest, this is objectively true, hence not constructed. They explicitly deny reflexivity. But how could Collins and Yearley's own position be exempt from social interests? Shouldn't we seek the social factors that determine their own view, namely that it is objectively true that Newton's thinking was governed by interests? And would not whatever answer we may give be in its own turn socially determined, and so on ad infinitum?

One answer is simply to deny that this is the case: Collins and Yearley's account is not itself a scientific theory, so it does not fall under its own scope. The situation is similar with Popper's dictum that 'To be scientific, a theory must be refutable'. It does not follow that this claim must itself be refutable, since it might have a very different status, e.g., that of an a priori norm. The point may arguably be taken.

A different answer is to bite the bullet and prove that even if an infinite regress is generated, it is not a vicious one. This is the strategy of the Edinburgh school (David Bloor, Barry Barnes, Steven Shapin), which developed the celebrated Strong Programme in the sociology of scientific knowledge. The theoretical principles of the Strong Programme were laid down in Bloor's (1976) Knowledge and Social Imagery. The principle of Causality states that the explanation of scientific beliefs should employ the 'same causal idiom' as any other science (Bloor 1976: 3). Impartiality requires that both true and false (or both rational and irrational) beliefs be causally explained, while Symmetry demands that both kinds of beliefs be explained by the same type of factor. Finally, Reflexivity dictates that the program should apply to itself. According to Bloor, there is no methodological difference between the natural and the social sciences. In a sociological account of science,

[the] patterns of explanation would have to be applicable to sociology itself. It is an obvious requirement of principle because otherwise sociology would be a standing refutation of its own theories. ...the search for laws and theories in the sociology of science is absolutely identical in its procedure with that of any other science. (Bloor 1976: 5, 17)

The Strong Programme takes science to be a 'social phenomenon' whose methods, results, and objectivity are causally influenced by social factors. This stance clearly evokes the so-called externalist approach to the history of science, according to which science in its particular configurations is essentially determined by social factors. Well known is Paul Forman's (1971) thesis that the German scientists of the Weimar Republic 'sacrificed physics' to the Zeitgeist. The readiness of quantum physicists to accept indeterminism and to find a failure of causality was the expression of a compromise they made under socio-intellectual pressures from a mystical and anti-rational public. Niiniluoto (1999) neatly formulates the general structure of such an externalist explanation:

The members of the community C belong to social class S.
The members of S have the social interest I.
The members of C believed that theory T would promote interest I.
Therefore, the members of C believed in theory T. (Niiniluoto 1999: 255)

It follows that the belief in social causation falls under its own scope, being itself a byproduct of social interests. Bloor admits that his program is self-referring, but denies that it is self-refuting. He does not dispute that an infinite regress is generated, but refuses to admit that this particular kind of regress is logically problematic. The point is made explicit by Kukla:

this particular regress doesn’t entail that anybody has to do an infinite amount of work. The fact that every belief is socially caused entails that there is always an additional SSK [sociology of scientific knowledge] project to work on if one is looking for work. But this no more precipitates us into the abyss of Hell than the fact that we can always count more numbers. (Kukla 2000: 72)

I agree with Kukla that the Strong Programme is not menaced by a vicious regressus ad infinitum – why the regress is not vicious will be explained shortly. Indeed, Barnes and Bloor present this fact as one of the strengths of their enterprise. Nevertheless, it is worth making clear, together with Brown (1989: 42), that with respect to reflexivity, the Strong Programme fails

not because it is self-referring, but because of its ambition to explain all human action in terms of social causality while excluding internal explanations, i.e., the role of intra-scientific rationality. The point here is that the Strong Programme undermines its own capacity to argue for its position:

Bloor’s claim is that it is not evidence, but instead social factors, which cause belief. If Bloor is right, then he must drop bricks on our heads or alter our class interests or some such thing. There is no point in arguing his case; for if he is right, then arguments must be causally ineffective. (Brown 1989: 42)

Arguments of this kind led Bloor (1991) to suggest that the message of the Strong Programme has been misunderstood. He accepts that internalist explanations of science are possible, but emphasizes the need for social explanations in order to understand why the scientific community accepts certain reasons as supporting certain beliefs. He means that 'the link between premise and conclusion is socially constituted'. Yet Bloor is willing to admit causal empirical explanations about perception, assuming a 'naturalistic construal of reason'. In fact, in Bloor's account naturalism seems to be a substitute for his previous sociologism. As he insists, cognitive science and the sociology of knowledge are 'really on the same side', since they are both naturalistic (Bloor 1991: 170). Through these concessions, Bloor's version of the Strong Programme becomes somewhat more acceptable to the realist, but loses a lot of its bite. Some doubts still persist. As Susan Haack (1996) points out, the compelling nature of deductive and mathematical reasoning requires no social explanation. The mere assumption that the scientist has trained his thinking by learning and practicing mathematics suffices. And unless one embraces the absurd view that the mathematical apparatus is the byproduct of a social class pursuing its egoistic interests, or something of the sort, this assumption is trivial. To return to the problem of infinite regress, the relevant issue here is whether all infinite regresses are vicious. Kukla does not think so, and thus proposes the following criterion to distinguish between vicious and benign regresses: an infinite regress is vicious if it demands that an infinite number of events take place in a finite amount of time. By contrast, if the infinite number of events has an infinite amount of time at its disposal, the regress is benign. Reference to Zeno's paradoxes of motion is useful in the latter case. In the Achilles and the tortoise argument, each time Achilles sets out to catch up with the tortoise, it turns out that by the time he arrives at the place where the tortoise was when he set off, the tortoise has moved slightly. The argument presents poor Achilles with an endless series of tasks to be performed in a finite physical time. Certainly, nowadays we say that just as a finite space can be decomposed into infinitely many parts, a finite time interval

can be mathematically decomposed into infinitely many distinct parts. If we take it that Achilles has an endless series of tasks to do, then by the same token he has infinitely many time intervals at his disposal. Doing infinitely many things requires a lot of stamina, but this should not be a problem for Achilles, as he has infinitely many time intervals.7 An example closer to real epistemological problem-solving is given by the so-called common knowledge of rationality (CKR) assumption in game theory. The assumption is essential for enabling one to form expectations about the behavior of someone else. Formally, CKR is neatly presented by Hargreaves Heap and Varoufakis:

[CKR] is an infinite chain given by

(a) that each person is instrumentally rational
(b) that each person knows (a)
(c) that each person knows (b)
(d) that each person knows (c). And so on ad infinitum.

(Hargreaves Heap and Varoufakis 1995: 24)
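
The structure of this chain can be regimented compactly (a schematic gloss of mine, not notation from Hargreaves Heap and Varoufakis): let \(R_i\) say that player \(i\) is instrumentally rational and \(K_i(p)\) that player \(i\) knows proposition \(p\). Then

\[
L_0 = \bigwedge_i R_i, \qquad L_{n+1} = \bigwedge_i K_i(L_n), \qquad \mathrm{CKR} = \bigwedge_{n=0}^{\infty} L_n .
\]

Each level \(L_n\) is finitely statable, but CKR itself is the conjunction of infinitely many such levels.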

But how can these infinitely many conditions be fulfilled in a finite time? The answer again appears to be that the infinite series of tasks is performed by the players involved in infinitely many infinitesimal time intervals. Accordingly, the regress generated by CKR is not vicious. So much for benign regresses. As to the vicious ones, considerably less work is required to show that they exist. Following Kukla again,

suppose that someone claims that he has always rung a bell before per- forming any action. If this were true, then he would have to ring a bell before imparting that information to us. Moreover, since the ringing of the bell was itself an action, he would have had to ring a bell before the last ring, and so on. Obviously, if what he told us were true, he would have had to ring the bell infinitely many times, by which I mean that no number of bell rings would prove sufficient. (Kukla 2000: 73)
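
The contrast between the two cases can be made arithmetically explicit (a minimal formalization of Kukla's criterion – mine, not his). If the \(n\)-th task may be completed in an ever-shrinking interval, the infinitely many tasks fit into a finite total time; if each task, like a bell ring, requires some fixed minimal duration \(\varepsilon > 0\), no finite time suffices:

\[
\sum_{n=1}^{\infty} \frac{T}{2^{n}} = T < \infty, \qquad \sum_{n=1}^{\infty} \varepsilon = \infty .
\]

Achilles' predicament is of the first kind; the bell-ringer's is of the second.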

Clearly, an infinite amount of labor, and hence an infinite physical time, is required to perform this kind of action. Let us now take a look at a concrete infinite-regress argument against strong metaphysical constructivism (SMC). As proposed by Niiniluoto (1991),

7 One could object that we know the whole time interval to be finite, so that, according to Kukla's definition of a vicious regress, Achilles is in precisely such a predicament. But on that construal we may just as well ignore Zeno's decomposition and take the space interval as finite too.

...a fact F exists if:

(2) there exists a laboratory B where F has been constructed.

Now (2) expresses a fact, F′ say, and it exists if:

(3) there is a laboratory B′ where F′ has been constructed,

etc. Continuing in this way, either we admit at some stage that some facts exist without construction or else we are involved in an infinite regress of an endless sequence of labs B, B′, B′′, . . . (Niiniluoto 1991: 151)

It is unclear what made Niiniluoto double the series of construction levels: F, F′, F′′, . . . , and B, B′, B′′, . . . . Presumably he understands SMC in terms of one of its species, the doctrine that all scientific facts are socially constructed and that they are ontologically prior to all other facts. In any event, the viciousness of the above infinite regress consists for him in the infinite number of laboratories required to construct the series of facts F, F′, F′′, . . . . The natural question arises: why should the construction of a scientific fact be itself scientific? It could be so, but it need not. Reference to laboratories is superfluous. Besides, even if scientific facts were scientifically constructed, why should we suppose that these actions take place in different laboratories? There is no reason to require more than a finite number of laboratories for the construction of the Fs. Therefore, I conclude that the infinite series of labs does not generate a genuine regress. The real problem resides in the infinite series of constructed events. The question to be raised at this point is why the Fs should be distinct. Imagine a laboratory in which a brain in a vat is being induced to believe that there is a laboratory where a brain in a vat is being induced. . . , and so on, ad infinitum. Are the processes of inducing these beliefs facts? And if they are, does their construction require an infinite amount of effort? For reasons already given, at least the latter question can be answered in the negative. To be sure, this does not prove that there are no vicious cases of infinite regress concerning SMC. It only shows that each regress case requires individual analysis, so that SMC cannot in general be proven to turn destructively against itself.

6.2.2 The reflexivity of epistemic constructivism

As already indicated, EC – the thesis that truth and rational belief are socially constructed, and hence only make sense relative to a culture or paradigm – is really epistemic relativism. It is notorious that relativism has trouble with reflexivity. Granted (absolutely) that every belief is warranted only relatively

to a paradigm, then the belief that every belief is warranted only relatively to a paradigm is itself relative to a paradigm. Therefore, the assumption that relativism is warranted implies that relativism is not warranted. As we can see, the difficulty that reflexivity raises for EC is not infinite regress, but inconsistency. One possible attempt to avoid this hindrance is to take the line of least resistance, i.e. to claim that relativism does not pretend to any absolute warrant. Instead, assume that the relativist is satisfied with being right only relative to a paradigm. In line with Siegel (1987: 25) and Kukla (2000: 29), let us label this doctrine relativistic relativism. There is at least one good reason why relativistic relativism cannot redeem relativism. Granting that relativism is warranted only relatively to a paradigm, we need, in order to escape the original problem, to relativize this belief even further: only relatively to a paradigm can the belief that relativism is warranted only relatively to a paradigm be warranted, and so on ad infinitum. This is an infinite regress, with the consequence that relativistic relativism cannot be justified.8 A possible retort is that all beliefs could be justified relatively to the same paradigm. This takes us to the second attempt of the relativist to escape the poser of reflexivity: to admit that at the meta-level the belief in the relativity of rational warrant is absolute. In fact, such an absolutistic relativism can maintain a relativistic position for any number n of levels as long as level n + 1 offers absolute warrant. A second-order absolutistic relativism, for instance, is advocated by those social scientists whose case studies bring forward an incontrovertible cultural and social relativity of opinions. Nevertheless, it should be noted that absolutistic relativism is at the end of the day not relativism, but absolutism. As long as level n + 1 ensures absolute warrant for belief, epistemic relativism does not deserve its name. Therefore, we may agree that reflexivity is an insurmountable problem for EC. Put bluntly, EC is out of the race.

6.2.3 The reflexivity of semantic constructivism

Recall that SC claims that meanings are socially constructed, i.e. meanings are a matter of social consensus. Since consensus is liable to change, meanings are unreliable creatures. Accordingly, the empirical content of sentences is also indeterminate. Assume, for example, that hypothesis H has no determinate empirical content. If SC is true, then the sentence 'H has no determinate empirical content' itself has no determinate empirical content. So if we grant that SC

8 Of course, it is important who asks the questions: if asked, the relativistic relativist cannot justify any of her answers, but this doesn't mean that she cannot make claims that are justified according to the non-relativist's standards.

is true, it follows that it is undetermined whether SC is true. It is not quite clear what conclusion follows from this situation. This is why it is advisable to take a closer look at the argumentative capabilities of SC. However eccentric, the claim that beliefs and sentences have no determinate empirical contents has been abundantly advocated by semantic constructivists. Barnes (1982), Bloor (1983), and Collins (1985) embrace the slogan that nature has no role to play in forming our beliefs.

Finitism and interest theory

SC takes its support from the doctrine of finitism, according to which the future applications of a concept are not determined by its use in the present. According to Bloor, since these future applications can always be contested and negotiated, meanings have the 'character of social institutions' (Bloor 1991: 167); that is, they are always liable to social negotiation. The doctrine of finitism stems from the Wittgensteinian sceptical paradox about meaning and rule-following. Since Kripke (1982) offers the most lucid and complete treatment of this paradox, I'll follow his argumentation. Let us first present Kripke's argument and then see to what extent it really supports constructivism. Kripke proceeds by means of an arithmetical example. He refers to the word 'plus' and the symbol '+' as denoting the well-known mathematical function of addition, defined for all pairs of positive integers. Although I have computed only finitely many sums in the past, the rule of addition will determine my answers for infinitely many new sums. If 68 + 57 is a computation that I have never performed, I just follow the familiar rule and answer confidently '125'. Now a sceptic comes along and objects that, according to the way I have used the term 'plus' in the past, the intended answer for 68 + 57 should have been '5'! He insists that

perhaps in the past I used 'plus' and '+' to denote a function which I call 'quus' and symbolize by '⊕'. It is defined by:

\[
x \oplus y = \begin{cases} x + y, & \text{if } x, y < 57 \\ 5, & \text{otherwise.} \end{cases}
\]

Who is to say that this is not the function I previously meant by ‘+’? (Kripke 1982: 8–9)
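
The sceptic's point can be stated compactly (a schematic gloss, not Kripke's own formulation). Since

\[
x \oplus y = x + y \quad \text{for all } x, y < 57,
\]

any finite history of computations whose arguments all happen to lie below 57 is equally compatible with my having meant addition or quaddition; my past usage simply does not discriminate between the two hypotheses.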

Kripke wonders what fact about my previous mental, behavioral, or physical history makes it the case that this particular time I mean the familiar plus rather than the ridiculous quus. To make the issue clear, the question is not 'how do I know that 68 plus 57 is 125?', which should be answered by performing

an arithmetical computation, but rather 'how do I know that 68 plus 57, as I meant "plus" in the past, should denote 125?' (1982: 12). In other words, how do I know that in the past I meant addition and not quaddition? The apparent problem is that I do not and cannot know. There is no fact about my history, as the sceptic maintains, that establishes that I meant 'plus' rather than 'quus'. Therefore, the sceptic argues that my answering 125 to the problem 68 + 57 is 'a leap in the dark'. After having explored several tentative responses – that speakers have dispositions to use words in particular ways, or irreducible experiences of their own signaling the proper use; that they appeal to simplicity considerations which will presumably make speakers use 'plus' instead of 'quus'; that there are appropriate Platonic forms corresponding to the correct meanings – Kripke's sceptic concludes that there are simply no facts of the matter concerning meaning. The discouraging conclusion follows that there is nothing in the use of words in the past to determine their future use. As Kripke phrases it,

There can be no such thing as meaning anything by any word. Each new application we make is a leap in the dark; any present intention could be interpreted so as to accord with anything we may choose to do. So there can be neither accord, nor conflict. This is what Wittgenstein said in §202. (Kripke 1982: 55)

Nonetheless, Kripke also teaches us that we can live with this consequence. Claims about what words mean indeed lack truth-conditions, yet, as Kripke maintains, they have socially determined assertibility conditions: conditions under which we are prepared to assert that someone is using a word in conformity with its meaning. Individuals cannot be said to follow rules for the use of words, since assertion-conditions require a consensus. Consensus is a community matter, and this fact rules out the possibility of a private language. Several philosophers found Kripke's solution deficient and aimed for more promising approaches.9 McDowell (1984) maintains that rules are simply enshrined in our communal practices. No explanation, he claims, can be given for the significance of rules and none is needed – a position which came to be known as quietism. The extension-determining approach of Wright (1989) sets out to avoid giving up the objectivity of meaning while at the same time bringing it within epistemic reach. Wright's point is that only those judgements that we make in the most propitious conditions for judging go into determining the right use of a term. Thus, our linguistic judgements determine rather than reflect the correct application of our terms. The major difficulty of this approach

9 I am partly guided in the following brief review of arguments by Barry Smith's (1998) account of the rule-following issue.

is the non-circular specification of the most propitious conditions. Boghossian's (1989) general attack against nonfactualism – the doctrine according to which ascriptions of meaning do not possess truth-conditions – argues that the above sceptical solution is incoherent. He argues, on the one hand, that nonfactualism presupposes a robust theory of truth, i.e. a theory "committed to holding that the predicate 'true' stands for some sort of language independent property" (1989: 526). On the other hand, he argues that, by being committed to the nonexistence of a substantive property corresponding to the predicate 'is true', nonfactualism also entails that truth is not robust. Now one way to circumvent this incoherence would be to show that nonfactualism is as a matter of fact not committed to the view that truth is some substantive, language-independent property, as Jane Heal (1990) claims to have done. Alexander Miller (1998: 172–5) aptly draws our attention to an objection that Wright (1984) and Zalobardo (1995) have independently raised against Kripke's sceptical solution: the assertibility conditions to which Kripke turns need themselves an account of their content, or else they will be of no avail in explaining our practice of ascribing meanings. Nonetheless, with respect to the assertibility conditions associated with past sentences, the question pops up,

would not any truths concerning assertion conditions previously associated by somebody with a particular sentence have to be constituted by aspects of his erstwhile behaviour and mental life? So the case appears no weaker than in the sceptical argument proper for the conclusion that there are no such truths; whence, following the same routine, it speedily follows that there are no truths about the assertion conditions that any of us presently associates with a particular sentence, nor, a fortiori, any truths about a communal association. (Wright 1984: 770)

Thus if the ascriptions of meaning depend on assertibility conditions, the former are themselves meaningless. So much for the discussion of 'Kripke's Wittgenstein' (KW). Returning to the semantic constructivists: many of them are very fond of KW's attack on meaning determinism. Barnes, Bloor, and Collins adopt and adapt Kripke's solution: meaning is merely a matter of social negotiation. The claim is that meanings are byproducts of social interests reflected in negotiation processes, marked by social hierarchies and power relations. Indeed, the main concern of SC seems to be the elucidation of power and interest relations within the scientific establishment. The view in the Strong Programme's interest theory is that the truth-conditions of scientific beliefs are determined by the interplay of social interests, not by any rational method. But, of course, the Wright-Zalobardo objection against the sceptical solution will also affect its semantic constructivist formulation. We have seen that the

sceptical paradox turns upon itself, to the effect that nothing can be meaningfully said about assertion conditions, so that no meaning ascriptions can be fulfilled. Thus, the very formulation of the sceptical paradox verges on senselessness. Besides, by rejecting the capability of rational methods to ascribe and fix meanings, SC reaches the brink of irrationalism. Accordingly, the question to be raised is to what extent we can take SC's argumentation seriously. This leads us straight back to the problem of epistemic constructivism, which was seen in the previous subsection to be inconsistent.

Facts as hardened text pieces

A second line of argumentation for semantic constructivism comes from the celebrated Laboratory Life by Bruno Latour and Steve Woolgar (1986). While the Edinburgh School takes beliefs as the unit of analysis, Latour and Woolgar focus on 'scientific facts'. Thus the center of gravity shifts from knowledge10 as an intellectual possession of individuals to knowledge as a commodity. Another difference from the constructivism of the Strong Programme is that Latour and Woolgar do not consider scientific knowledge to be determined solely by social factors. As Latour (1987) argues, the social determinist thesis fails because we do not understand society any better than we do the natural world. He considers science something to be studied as a connecting point between society and nature. To gain a more precise understanding of the latter claim, let us take a look at Latour and Woolgar's method and at their conception of the outcome of science. Their study of scientific activity is based on an anthropological approach to the scientific community:

Whereas we have fairly detailed knowledge of the myths and circumcision ritual of exotic tribes, we remain relatively ignorant of the details of equivalent tribes of scientists. (Latour and Woolgar 1986: 17)

This 'methodological' separation from the object of study clearly aims at lessening both the claims of scientists about the rationality of their activity and the respect that science enjoys in society:

We take the apparent superiority of the members of our laboratory in technical matters to be insignificant. This is similar to an anthropologist's refusal to bow before the knowledge of a primitive sorcerer. There are no a priori reasons for supposing that scientists' practice is any more rational than that of the outsiders. (Latour and Woolgar 1986: 29)

10 The concept of 'knowledge' in the analyses of social theorists is not taken as 'true belief', but as 'whatever scientists agree that knowledge is'.

As Brown (1989) indicates, Latour and Woolgar describe scientists as a community of graphomaniacs: publication is the final goal of scientists and of science in its entirety. There seems to be nothing significant beyond the production, via large inscription devices, of texts. Here is another telling excerpt from Laboratory Life:

The problem for participants [scientists] was to persuade readers of papers (and constituent diagrams) that its statements should be accepted as facts. To this end rats had been bled and beheaded, frogs had been flayed, chemicals consumed, time spent, careers had been made and broken, and inscription devices had been manufactured and accumulated within the laboratory. By remaining steadfastly obstinate, our anthropological observer resisted the temptation to be convinced by the facts. Instead, he was able to portray laboratory activity as the organization of persuasion through literary inscription. (Latour and Woolgar 1986: 88)

Scientific facts themselves are 'hardened' parts of texts, i.e. conjectures transformed into background knowledge. Latour (1987) refers to such facts as 'black boxes'. These are inscriptions invested with so much authority that, once established, they are never challenged or reinvestigated. They thus come to constitute bedrock knowledge for all participants in a research field. A central thesis of Latour and Woolgar is that the fundamental commodity in scientific activity is peer recognition. Scientists want the approval of the other members of their tribe. No interest in truth or objectivity or empirical adequacy dominates. All that matters is gained credibility, which can be traded by the rules of a market-driven economy:

It would be wrong to regard the receipt of reward as the ultimate objective of scientific activity. In fact, the receipt of reward is just one small portion of a large cycle of credibility investment. The essential feature of this cycle is the gain of credibility which enables reinvestment and further gain of credibility. Consequently, there is no ultimate objective to scientific investment other than the continual redeployment of accumulated resources. It is in this sense that we liken scientists' credibility to a cycle of capital investment. (Latour and Woolgar 1986: 198)

What, then, would be the point of all these incontinently produced scientific papers? How are we to understand their content? The answer: what scientists mean is determined by those who have the say – i.e., enough credibility – in science. The distribution of credibility and of scientific authority is an ongoing process. The owners of the largest credibility stocks will decide what counts as scientific fact and what a sentence means. This takes us to the conclusion of Collins (1985: 172–3), who characterizes semantic constructivism

as leading to conventionalism. Therefore, the Collins and the Latour-Woolgar versions of SC join hands in an unrestrained conventionalism. The problem is that a generalized conventionalism entails an absurd view of science. Undoubtedly, we all agree that progress is a very desirable, but also very strenuously achieved, feature of science. Yet if conventionalism were true, why not just decree that progress is in place? One might remark that it is already fairly late to start worrying about SC's absurdity. For what can be more absurd than the idea that facts are reified portions of text, that they can be constructed and de-constructed at the whim of science's bosses, and that truth and falsity are negotiable? If the spheres of influence in the scientific establishment changed sufficiently, we might find out tomorrow that the moon is made of cheese. Enough has been said about semantic constructivism for a conclusion to be drawn. I dismiss the Strong Programme's view of the social determination of meaning as self-undermining and as leading to irrationalism. I also dismiss Latour and Woolgar's view of scientific facts as inscriptions invested with epistemic authority as plainly absurd. The overall balance of the reflexivity issue is as follows: epistemic constructivism is rejected as inconsistent; semantic constructivism is rejected as either meaningless or as leading to irrationalism; metaphysical constructivism is still in the running. But the latter will suffer great losses from the following series of arguments.

6.3 Spatial and temporal inconsistencies

One direct implication of strong metaphysical constructivism (SMC) is that no entities – known or knowable – exist independently of our conceiving of them. That is, entities are created by our conceptual schemes, insofar as the latter are socially shared. However, the thought that dinosaurs, for instance, are created by our minds, or that their existence two hundred million years ago depended on our concepts today, strikes many as ridiculous. The current section argues that this intuition gives voice to a sound argument to the effect that SMC encounters severe spatial and temporal inconsistencies.

6.3.1 Spatial inconsistencies

Consider a society S1 which constructs the fact D1 that dinosaurs existed in the Mesozoic era, and another society S2 which constructs the fact D2 that dinosaurs never existed, but that the numerous skeletons of huge reptiles were deliberately buried in the earth by God to make S2’s denizens believe that dinosaurs existed. Both D1 and D2 have been constructed so that, according

to SMC, they are facts about the world. Obviously, they contradict each other. To be clear, the issue is not about two distinct societies having contradictory beliefs about paleontology, but about two societies constructing incompatible facts. How is constructivism supposed to cope with this incompatibility? A relevant discussion of this problem is offered by Nelson Goodman (1978). Goodman is well-known as a fervent promoter of ontological relativism, the doctrine that entities exist only relative to one version of reality as expressed in some symbolic system, but not relative to some other version of reality. The notion of a version of reality designates all those classes of entities whose existence is required to make the use of a certain system intelligible. To display the relevance to SMC of a discussion of ontological relativism, we merely need to accept that SMC entails ontological relativism (the converse need not be true: one can be an ontological relativist without being a constructivist). Hence, any knock-down objection against ontological relativism will knock down SMC as well. This is not the place to deal in detail with Goodman's views about ontological relativism. It suffices to say that his answer to the aforementioned inconsistency is straightforward: incompatible facts may both be true if they are facts about different worlds. Thus, society S1 lives in world W1 in which D1 is true, whereas society S2 lives in world W2 in which D2 is true. As Goodman phrases it, 'contradiction is avoided by segregation' (Goodman 1996: 152). Different civilizations could thus construct worlds thoroughly foreign to each other. How does Goodman support his position? We'll discuss two of his lines of argument. First, he argues that there are conflicting truths that cannot be accommodated in a single world:

Some truths conflict. The earth stands still, revolves around the sun, and runs many other courses as well at the same time. Yet nothing moves while at rest. (Goodman 1996: 151)

The natural response to this is that the sentences 'The earth is at rest' and 'The earth moves' should be understood as elliptical for 'The earth is at rest according to the geocentric system' and, respectively, 'The earth moves according to the heliocentric system'. But Goodman tells us that this is a wrong answer. He analogizes with the following two historiographical sentences: 'The kings of Sparta had two votes' and 'The kings of Sparta had only one vote'. There is an inclination to understand these sentences as ellipses for 'According to Herodotus, the kings of Sparta had two votes' and 'According to Thucydides, the kings of Sparta had only one vote'. But obviously, these sentences do not tell us anything about Sparta. They only tell us what Herodotus and Thucydides said about Sparta. It is clear that Herodotus's and Thucydides's versions cannot generate anything but self-descriptions, not descriptions of the

world. It is true that 'According to Herodotus, the kings of Sparta had two votes', even if they actually had no vote, or had three votes. The same goes for the relativizations to the geocentric and the heliocentric systems: it is true that the earth is at rest according to the geocentric system, but that still does not inform us about the world.

Merely that a given version says something does not make what it says true; after all, some versions say the earth is flat or that it rests on the back of a tortoise. That the earth is at rest according to one system says nothing about how the earth behaves but only something about what these versions say. What must be added is that these versions are true. But then the contradiction reappears, and our escape is blocked. (Goodman 1996: 151)

Though subtle, Goodman's argument is based on a spurious analogy. On the one hand, Herodotus's version of the history of Sparta cannot possibly contain more than a finite list of permissible assertions about Sparta. On the other hand, the fact that the earth moves according to the heliocentric system allows an indefinite number of possible assertions, not all of which are constrained by the stipulations of the system. After all, the heliocentric system merely requires that the sun be in the center of the solar system. The relative positions of the planets might have been fixed relative to the sun. But even if the motion of the earth is made a matter of definition in this system, there are indefinitely many observational facts that cannot be obtained by stipulations about the constitution of the solar system. Think of the gravitational deflection by the sun of the light rays emitted by distant stars; that is a fact about the world which is not entailed by defining a heliocentric system through the stipulation that the earth revolves around the sun. It is thus clear that not every sentence which is relativized to a world-version is only about that version. Goodman can do more of the same: he can make any new fact a fact about the system. The move would not seem entirely ad hoc if one looks at Goodman's story about how world-versions are formed through the transformation of other world-versions by the operations of composition, decomposition, weighting, deletion, supplementation, deformation, and so on. Yet such a strategy should remind us of our previous discussion of semantic constructivism's slide into irrationalism. The same threat ought to be sufficient ground to dismiss Goodman's first attempt to tackle the spatial inconsistency of ontological relativism. His second attempt to avoid the contradiction, quite surprisingly for an ontological pluralist, is an argument from parsimony. Goodman's reasoning again takes a surprising direction. For him, the tolerant realist view that a

plurality of worlds can be versions of a unique underlying world is nothing but the addition of a useless concept. To make this point clear, it needs to be emphasized that according to Goodman only the accessible counts as real – a contentious assumption, no doubt, but one which will not be discussed here. Since what is accessible is relative to versions, he concludes that what is real is relative to versions. Therefore, Goodman firmly opposes the concept of an independent world underlying the many world-versions. Even if the idea of such a world is intelligible, he maintains, the world itself would be inaccessible, and thus of no philosophical avail. As Goodman urges,

Shouldn't we stop speaking of right versions as if each were, or had, its own world, and recognize all versions of one and the same neutral and underlying world? The world thus regained is a world without kinds or order or motion or rest or pattern – a world not worth fighting for or against. (Goodman 1978: 20)

But if he indeed conceded, even if only for the sake of argument, that the underlying world exists, why should it be entirely featureless? Why should the whole realm of properties be relegated to the world-versions and no bit of it to the world itself? Again, a way to understand this might be offered by SMC, which claims that all facts about the world (apart from the world of brute facts itself) are constructed. But this does not explain why Goodman holds the concept of a world to be theoretically useless. Besides, there is at least one good reason for adopting this concept even if it has no specifiable properties: it allows us to shun a fundamental incoherence. After all, even Goodman must have recourse to this world lacking "kinds or order or motion or rest or pattern". We saw earlier that it is incoherent to talk about constructed world-versions unless one assumes that there are brute facts out of which the constructed facts are made – unless, that is, one is a radical metaphysical constructivist, which Goodman certainly is not. We can now turn to another argument against Goodman's strategy of avoiding inconsistency by segregation. Following Kukla (2000), we label it the argument of the interparadigmatic lunch. The argument goes against Goodman's rejection of representing the universe in terms of a spatio-temporal continuum. Instead, the suggestion is that we give up any representation of an all-encompassing space-time continuum:

The several worlds are not distributed in any space-time. Space-times of different worlds are not embraced within some greater space-time. (Goodman 1996: 152)

I do not see logical inconsistencies in this suggestion and shall not search for any other shortcomings. Suffice it to notice, together with Carl Hempel

(1996), that advocates of different world-versions can debate about their world-versions even though they are supposed to be living in different worlds. As Hempel phrases it,

If adherents of different paradigms did inhabit totally separated worlds, I feel tempted to ask, how can they ever have lunch together and discuss each other's views? Surely, there is a passageway connecting their worlds; indeed it seems that their worlds overlap to a considerable extent. (Hempel 1996: 129–30)

I take it that this objection is decisive against the model of unconnected spatio-temporal continua. True, Goodman does not plead for a model of unconnected continua. In fact, he does not plead for anything specific. He merely objects to the representation of an all-encompassing space-time. Given his incomplete ontological proposal, we cannot conclude that it is incoherent. Nevertheless, neither can we infer that Goodman's proposal stands as an adequate reply to the objection that the worlds constructed by different civilizations fight for the same space. Therefore, we are warranted in inferring the spatial inconsistency of SMC.

6.3.2 Temporal inconsistencies

Let us once again consider some facts about dinosaurs. Let D be the fact of dinosaurs' existence. Let D0 be the fact that D occurs at time t0 (dinosaurs existed in the Mesozoic), and not-D0 the fact that D does not occur at t0 (dinosaurs did not exist in the Mesozoic). Let also C1(D0) be the fact that D0 was constructed at time t1 (say, 1970), and C2(not-D0) the fact that not-D0 was constructed at time t2, where t2 > t1 (t2 is, say, the year 1980). Thus the world at t1 has a past containing the event D0, and the world at t2 has a past containing the event not-D0. But since pastness is transitive, it follows that the world at t2 contains in its past both D0 and not-D0. That is, the world in 1980 contains both the fact that dinosaurs existed in the Mesozoic era and the fact that dinosaurs did not exist in the Mesozoic era. Consequently, SMC faces a diachronic inconsistency.
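
The argument can be regimented schematically (my notation, elaborating the definitions just given). Write \(\mathrm{Past}(w_t)\) for the set of facts lying in the past of the world at time \(t\). Then

\[
C_1(D_0) \;\Rightarrow\; D_0 \in \mathrm{Past}(w_{t_1}), \qquad C_2(\neg D_0) \;\Rightarrow\; \neg D_0 \in \mathrm{Past}(w_{t_2}),
\]

and since pastness is transitive, \(t_1 < t_2\) gives \(\mathrm{Past}(w_{t_1}) \subseteq \mathrm{Past}(w_{t_2})\); hence both \(D_0\) and \(\neg D_0\) belong to \(\mathrm{Past}(w_{t_2})\).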

Perhaps these examples, which allude to the creationist debate, seem contrived. However, a glance at the constructivist literature provides genuine examples of the sort we want to illustrate. Latour and Woolgar (1986) tell us that it became true in 1969 that TRH (thyrotropin-releasing hormone) has the chemical structure Pyro-Glu-His-Pro-NH2. Before 1969 there was no fact of the matter whether TRH had or did not have that chemical structure. In terms of the previous scheme, the world in 1970 contained both the fact that before 1969 there was no fact of the matter whether TRH was Pyro-Glu-His-Pro-NH2, and the fact that from 1969 on, TRH had been Pyro-Glu-His-Pro-NH2. Obviously, this is a temporal inconsistency of the same sort. A typical constructivist retort would be that this argument stems from the difficulties of articulating the constructivist view. Indeed, some philosophers have suggested that this trivial paradox points to an intrinsic problem of the modality and tense structure of assertions of fact – e.g. Hacking (1988: 281). Hacking blames the inability of natural language to express the complex situations arising from social construction. Nevertheless, no concrete proposals of a better temporal logic are available. Additionally, the selected examples can easily be understood in terms of our good old temporal logic. Instead of the awkward formula that it became true in 1969 that TRH had always been Pyro-Glu-His-Pro-NH2, we can simply say that it was always true that TRH is Pyro-Glu-His-Pro-NH2, although scientists discovered this fact only in 1969. But of course, to admit this is to admit the temporal incoherence of SMC. Let us now summarize the inventory of the species of constructivism which managed to pass through the sieve of the previous arguments. We saw that reflexivity generates an infinite regress for SMC, but that it is undecided whether the regress in question is in general vicious. This gave SMC a reprieve, but not for long, since we afterwards saw that it encounters irredeemable spatial and temporal inconsistencies. Reflexivity is an insuperable problem for EC, and it reveals SC to be either incoherent or irrational. The bottom line is that only a moderate metaphysical constructivism (MMC), according to which some facts about the world are constructed, can overcome these objections. The claim is rather disappointing for someone expecting more spectacular deeds from social constructivism. To say that some facts are constructed would barely make a headline. However, as will be seen in the next chapter, MMC can have an explanatory role in the philosophy of science.

Chapter 7

A Case for Selective Scientific Realism: S-Matrix Theory

Scientific realism is often taken to be an overarching doctrine, claiming to account for the great majority of cases of genuine science. Recall our working definition of scientific realism: most of the essential unobservables of well-established current scientific theories exist mind-independently. I took pains to defend this definition and shall stick to it. Nonetheless, I do not believe that scientific realism should be an overarching doctrine. On the contrary, it should be selective. I have delivered positive arguments for scientific realism (see the success arguments) and defended it from various attacks – in particular, from the underdetermination argument and from the implications of the strong versions of social constructivism. However, as admitted in 2.1.3, instrumentalism – realism's archenemy – stands as a proper account of appreciably many scientific episodes. We touched upon the tendency of natural science to embrace more and more abstract formalisms intended to serve as models, i.e. as structures claiming empirical adequacy, and conjectured that the more abstract the formalism of a theoretical science, the more inviting it is to instrumentalist attitudes. We argued that the presence of a causal-explanatory framework is indicative of a theory demanding scientific realism. The point is also expressed by Campbell: "causal explanations identify the real agents which are producing the real effects we attempt to explain" (Campbell 1994: 31). By contrast, when a causal framework is absent, an instrumentalist understanding of a theory's claims may well be accepted. Specific to an instrumentalist stance is the presence of abstract theoretical models – of theories involving no more than the belief in their empirical adequacy. What matters is whether they are adequate to the task for which they were devised. In laboratory jargon, they are supposed 'to do the job',

whatever that might be: to solve a previously unsolved problem, to support the discovery of new principles, to make relevant predictions, etc. From our standpoint, the most important aspect is that no commitment to the posited entities and explanatory mechanisms is required. It is comfortable for the realist to consider that if there is anything successful about such theoretical zombies, it will come to be embedded in 'respectable', approximately true theories. As Ellis phrases it, "many scientific realists envisage the eventual replacement of model theories by systemic ones in which all of the laws and principles are just true generalizations about how actual things behave" (Ellis 1985: 175). However, they seem to be pointing in the wrong direction:

a great deal of theoretical scientific research goes into devising increasingly abstract model theories, and relatively little into reducing the degree of idealization involved in our theories in order to make them more realistic. ... basic theoretical development in science tends, if anything, to proceed in the opposite direction – to greater abstraction and generality. (Ellis 1985: 175)

To be clear, we maintain the claim that virtually all well-established theories are approximately true. The point is that an important part of science is developed via theoretical structures of instrumental value. Moreover, these theories cannot be 'domesticated' in the sense of being embedded in well-established realistic theories. Because of the meager causal constraints, abstract model theories are primarily subjected to internal coherence demands. As such, they are constructed within a space of theoretical tolerance, which allows external (e.g. social) factors to intervene in the process of theoretical construction. An historical survey of an episode in modern high-energy physics (HEP) helps to illustrate this: the S-matrix program and its development between 1943 – when Heisenberg introduced the concept – and the late 1970s, when the S-matrix was by and large abandoned. I present the S-matrix theory as an example of a program which failed without being falsified. As has happened more than once in modern theoretical physics, the fate of the S-matrix was not decided by empirical disconfirmation. Its fate was decided rather by 'external' factors such as the particular expertise and philosophical views favored by the dominant part of the HEP community. The claim is not that S-matrix theory (henceforth SMT) could not have been falsified by any imaginable experiment. The point is rather that after several decades of empirical and institutional success, SMT was abandoned mainly due to factors other than the internal logic of theoretical physics.

The exposition of the historical facts is substantially informed by James Cushing's (1990) Theory Construction and Selection in Modern Physics: The S-Matrix.

7.1 The S-Matrix Theory (SMT): a historical case study

The story of the S-matrix is inseparably related to the evolution of quantum field theory (QFT). It is therefore useful to start with at least a sketchy presentation of the latter, if we are to have an intelligible perspective on the former.

7.1.1 Quantum field theory (QFT)

The first quantum field theory is quantum electrodynamics (henceforth QED). QED is the quantum theory of the interactions of charged particles with electromagnetic fields. It was initiated in the late 1920s, when P. A. M. Dirac came up with an equation describing the motion and spin of electrons, incorporating both quantum mechanics and special relativity. QED rests on the idea that charged particles interact by emitting and absorbing photons. These exchange photons are virtual, that is, they cannot be detected in any way, because their existence violates the conservation of energy and momentum. The electromagnetic interaction involves a series of processes of increasing complexity. In the simplest process, two charged particles exchange one virtual photon. In a second-order process, there are two exchanged photons, and so on. These processes correspond to all the possible ways in which particles can interact through the exchange of virtual photons; and each process can be graphically represented by means of the diagrams developed by Richard Feynman. Apart from furnishing an intuitive picture of the process being considered, these graphs precisely prescribe the calculation rules for the quantities involved. Calculations at the lowest order in QED turned out to be finite and reasonably feasible. However, each interaction process became computationally more difficult than the previous one, and there were an infinite number of processes. Moreover, integrals of higher order proved to be divergent, resulting in nonsensical infinities. The attempt to dispose of these divergences took the name of the renormalization program. Renormalization is attained whenever a finite number of redefinitions is sufficient to remove all the divergences at all orders of perturbation. The procedure consisted, in effect, in discarding the infinities – that is, in substituting for them the observed values of the mass and coupling constants – and it led to surprisingly accurate predictions.

Yet we can say with hindsight that it was basically a historical accident that QED could be renormalized. For instance, Fermi's (1933) β-decay theory is non-renormalizable. But even so, QFT is not automatically useful for calculations, since the renormalization program and the calculations themselves can be carried out only within the framework of perturbation theory. For QED that was fine, since the expansion parameter α ≈ 1/137 (the fine structure constant) is small, and the first few terms in the expansion could reasonably be expected to deliver a good quantitative approximation. The more complex the process, i.e. the more additional virtual photons involved, the smaller the probability of its occurrence. For each level of complexity, a factor of α² decreases the contribution of the process, so that after only a few levels the contribution becomes negligible. Nonetheless, in the QFT relevant to nuclear and elementary-particle physics, the coupling constant is of the order g ≈ 15, so that a perturbation expansion is inapt to deliver numerical results. This proved to be a serious problem for a QFT of strong interactions, one not solved before the 1960s, when the gauge field theories of strong interactions emerged. The condition of gauge invariance imposes a symmetry on QFT's equations. The structure of the gauge transformation group in a particular gauge theory entails general restrictions on the way in which the field described by that theory can interact with other fields and elementary particles. More specifically, gauge theories took a basic Lagrangian formulation in which different symmetries of the Lagrangian were known to correspond to invariances of physical quantities. Yang and Mills were able in 1954 to implement a general symmetry group by means of local gauge fields. These are fields satisfying local invariance conditions of the form \(\psi(x) \rightarrow \psi'(x) = e^{i\alpha(x)}\psi(x)\), where \(\alpha(x)\) is an arbitrary function of \(x\). In 1961, Goldstone proved that the Lagrangian symmetry of a field theory could be spontaneously broken, the solution thus being less symmetrical than the Lagrangian. The theory was developed by Peter Higgs, who demonstrated that broken-symmetry solutions can obtain without postulating massless particles, as Goldstone did. These results were adopted by Weinberg, Glashow, and Salam in the late 1960s to produce a unified theory of the weak and electromagnetic phenomena. Another property crucial for the renormalization of gauge theory is the so-called asymptotic freedom property. It ensures that for certain high-energy scattering processes, the lower orders of the perturbation calculations provide the major contribution to the cross sections. This feature provided accurate numerical calculations for the QFT of strong interactions.
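
The contrast between the two coupling regimes can be made vivid with a rough estimate (an illustration of mine, not a calculation from Cushing). If the \(n\)-th order of the expansion contributes on the order of the \(n\)-th power of the coupling, then

\[
\alpha \approx \frac{1}{137}, \quad \alpha^{2} \approx 5.3 \times 10^{-5}, \qquad \text{whereas} \qquad g \approx 15, \quad g^{2} = 225,
\]

so in QED each further order is suppressed by roughly four orders of magnitude, while for the strong coupling the 'corrections' dwarf the leading term and the expansion is useless.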

7.1.2 The origins of the S-matrix. S-matrix theory (SMT)

An account of strong interactions parallel to QFT is offered by the S-matrix theory (SMT). The concept of an S-matrix was introduced by John Wheeler in 1937 in the context of a theoretical description of the scattering of light nuclei. In modern notation, the S-matrix consists of elements \(S_{\alpha\beta}\), where \(\alpha, \beta = 1, 2, \ldots, N\), that give the relative strengths of the asymptotic forms of the wave function for various reaction channels:

\[
\psi_{\alpha\beta}(r_\beta) \rightarrow \frac{i}{2 k_\beta r_\beta}\left[\delta_{\alpha\beta}\, e^{-i k_\alpha r_\alpha} - S_{\alpha\beta}\, e^{+i k_\beta r_\beta}\right],
\]

where \(e^{-i k_\alpha r_\alpha}\) represents the incident wave and \(e^{+i k_\beta r_\beta}\) the scattered wave. From the calculus there emerges the expression of the total cross section:

\[
\sigma_t = \sigma_{sc} + \sigma_r = \frac{2\pi}{k_\alpha^{2}}\left[1 - \operatorname{Re} S_{\alpha\alpha}\right] = \frac{4\pi}{k_\alpha}\operatorname{Im} f_{\alpha\alpha},
\]

where \(\sigma_{sc}\) is the elastic (scattering) cross section, \(\sigma_r\) the interaction (reaction) cross section, and \(f_{\alpha\alpha}\) the forward scattering amplitude. This expression forms the so-called optical theorem, which establishes the crucial result that the total cross section is completely fixed once the imaginary part of the forward scattering amplitude is known. Therefore, once the elements of the scattering matrix \(S_{\alpha\beta}\) are given, all the cross sections can be calculated from them.

Apparently independently of Wheeler, Heisenberg proposed at the beginning of the 1940s a program whose central entity was a matrix he termed the 'characteristic matrix' of the scattering problem. He conceived this program as an alternative to QFT. His explicit aim was to avoid any reference to a Hamiltonian or to an equation of motion, and instead to base his theory only upon observable quantities. Heisenberg's programmatic papers outlined the hope for a theory capable of predicting the behavior of all observed particles, together with their properties, based on symmetries and on constraints on the S-matrix: unitarity and analyticity.

Very much as in nonrelativistic quantum mechanics, the basic probability amplitudes allow for the prediction of the outcomes of experiments. The output of Heisenberg's theory consists of those probability amplitudes which directly correspond to measurable quantities. The S-matrix elements connect the initial, asymptotically free state \(\psi_i\) to the final, asymptotically free state \(\psi_f\):

\[
\psi_f = S_{fi}\,\psi_i,
\]
where \(S_{fi}\) represents the overlap of \(\psi_f\) with a given \(\psi_i\), i.e. \(S_{fi} = \langle \psi_i \mid \psi_f \rangle\). Thus, the chance of beginning in an asymptotic state \(\psi_i\) and ending in the asymptotic state \(\psi_f\) is

\[
\left|\langle \psi_i \mid \psi_f \rangle\right|^{2} = \left|S_{fi}\right|^{2}.
\]

Since the probability of starting in \(\psi_i\) and ending up in some allowed state \(\psi\) must be unity, the conservation of probability requires the unitarity relation

\[
S^{\dagger}S = I.
\]
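
The step from probability conservation to this relation is routine and worth spelling out (supplied here for completeness, not reproduced from Cushing). Summing the transition probabilities over a complete set of final states,

\[
1 = \sum_{f} \left| S_{fi} \right|^{2} = \sum_{f} S_{fi}^{*}\, S_{fi} = \left( S^{\dagger} S \right)_{ii} \quad \text{for every } i,
\]

and applying the same argument to superpositions of initial states fixes the off-diagonal elements as well.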

However, for a relativistic quantized field theory it has never been demonstrated that any realistic interacting system of fields has a unitary solution. While formal power-series solutions have been generated, satisfying unitarity at each order of the expansion, the series itself cannot be shown to converge. The success of the renormalized QFT did undercut Heisenberg's original motivation for his S-matrix program – the inability of QFT to produce finite and unique results. However, in the late 1950s and early 1960s, the inability of even the renormalized QFT to provide quantitative results for the strong interactions led once more to a new S-matrix program. Geoffrey Chew, the father of the renewed S-matrix, surmised that

finally we have within our grasp all the properties of the S-matrix that can be inferred from field theory and that future development of an understanding of strong interactions will be expedited if we eliminate from our thinking such field-theoretical notions as Lagrangians, bare masses, bare coupling constants, and even the notion of elementary particles. (Chew 1962: 2)

Apart from unitarity, other properties of the S-matrix proved to be important for the development of the program. Unitarity's twin principle, analyticity, had already been related in QFT to the causality constraints upon scattering processes. The suggestion was made that the S-matrix be considered an analytic function of its energy variable, so that scattering data could (in principle) constrain the values of bound-state energies. Thirring pointed out that the causality requirement in QFT can be implemented by demanding that the effects of the field operators \(\phi(x)\) and \(\phi(y)\) acting at spacelike separated points \(x\) and \(y\) be independent. This condition was used for the derivation, within perturbation theory, of dispersion relations for forward photon scattering to lowest order in \(e^2\). The key point here is that these relations were derived from causality constraints. A noticeable feature of this theoretical development is the pragmatic character of the S-matrix approach. Marvin Goldberger (1955) stated it quite clearly:

We made rules as we went along, proceeded on the basis of hope and conjecture, which drove purists mad. (Goldberger 1955: 155)

Quite often, key mathematical instruments were used without rigorous foundation. Instead, intuition and the previous empirical success of similar procedures motivated numerous theoretical decisions. Thus, in his 1961 paper on the applications of single-variable dispersion relations, Goldberger remarked:

It is perhaps of historical interest to relate that almost all the important philosophical applications of the nonforward dispersion relations were carried out before even the forward scattering relations were proved rigorously. (Goldberger 1961: 196)

It was only in 1957 that a rigorous proof for the forward case was provided, but no proof was ever given for all scattering angles, nor for massive particles. Dispersion relations were simply assumed to hold in these cases as well and were successfully applied to the analysis of both electromagnetic and strong-interaction scattering data.

Murray Gell-Mann (1956) put together the list of theoretical constraints on the S-matrix: Lorentz invariance, crossing, analyticity, unitarity, and asymptotic boundary conditions. He expected them to be, in principle, enough to specify the scattering amplitudes of QFT without having to resort to any specific Lagrangian. It is worth reemphasizing that this expectation was actually the main attraction of SMT. Certainly, Gell-Mann did not consider it to be anti-QFT, but rather an alternative to the standard QFT approach of using Lagrangians. Relatively soon afterwards (1958), Chew expressed the even bolder hope that the S-matrix equations might provide a complete dynamical description for strong-interaction physics. Nonetheless, the main actors on the HEP scene put SMT and QFT in direct opposition. In 1961, Chew emphatically rejected QFT, not so much for being incorrect as for being empty or useless for strong interactions:

Whatever success theory has achieved in this area [strong interactions] is based on the unitarity of the analytically continued S-matrix plus symmetry principles. I do not wish to assert, as does Landau [1960], that conventional field theory is necessarily wrong, but only that it is sterile with respect to strong interactions and that, like an old soldier, it is destined not to die but just to fade away. (Chew 1961: 3)

Ironically, the last phrase best describes the fate of SMT itself. Lev Landau was notorious for being most dismissive of QFT:

The only observables that could be measured within the framework of the relativistic quantum theory are momenta and polarizations of freely moving particles, since we have an unlimited amount of time for measuring them thanks to momentum conservation. Therefore the only relations which have a physical meaning in the relativistic quantum theory are, in fact, the relations between free particles, i.e. different scattering amplitudes of such particles. (Landau 1960: 97)

At the same time, SMT had its declared adversaries. For example, the prominent field theorist Francis Low (of whom Chew had been a student!) maintained that

The S-matrix theory replaced explicit field theory calculations because nobody knew how to do calculations in a strongly coupled theory. I believe that very few people outside the Chew orbit considered S-matrix theory to be a substitute for field theory. (Low 1985; correspondence with Cushing 1990)

Most participants in the debate had a rather pragmatic and conciliatory position. Stanley Mandelstam maintained that preference for one theory over the other depends on which one is more likely to provide the best results, rather than on any question of principle. Similar views were expressed by Gell-Mann and Goldberger.

The S-matrix was eventually abandoned as an independent program due to its insurmountable calculational complexity, as well as to several conceptual difficulties – for example, the physical basis of the analyticity constraints (Chew 1961), and an anomaly that posed a further problem for SMT. Improvements were achieved with succeeding versions of the program (the duality version and the topological SMT version), but only at the price of even more complex mathematics. On the other hand, neither calculational complexity nor conceptual muddiness was absent from the 'winning camp', QFT – suffice it to recall the ad hoc character of some renormalization manoeuvres.

Let us draw the conclusions of this sketchy presentation. First, if QFT had not run into difficulties (divergent integrals at higher orders of the perturbation expansion, and inapplicability to strong interactions), it is unlikely that SMT would ever have existed. But theorists were able to have QFT overcome the divergence problems through renormalization and, in the late 1970s, to formulate a local gauge principle along with a mechanism of spontaneous symmetry breaking, which allowed QFT to cope with the strong interactions.

Second, each time QFT made a comeback, the overwhelming majority of scientists re-embraced it. The emphasis on this flipping back and forth between QFT and SMT is of great significance for our argumentative strategy. By underscoring the repeated pragmatic changes of the theoretical apparatus,

the fragility of the ontological commitment in either QFT or SMT is shown. This, in turn, invites an instrumentalist interpretation of the two theories.

Third, SMT has never been properly falsified. It was abandoned mainly because of its calculational complexity. However, QFT is also a highly complex mathematical construct. The comparatively better prospects of development of QFT turned out to be decisive, yet this kind of judgment allows social factors to come into play. For example, as Pickering (1984) aptly observes, QFT's progress depended upon the choice made by experimentalists to study a class of very rare scattering events, which QFT could handle, to the exclusion of a much more common class of events, which it could not. It is thus not irrelevant that the HEP community displays a pyramidal organization, at the apex of which a few dominant personalities make crucial theoretical decisions, which in turn open (or close) research programs.

7.2 Philosophical conclusions

The lesson of the SMT case study is that both instrumentalism and a moderate social constructivism are present in science, at least in certain of its episodes. It was argued in the preceding chapter that social factors have a limited influence in science, and that the thesis of the social determination of scientific beliefs is self-undermining. Besides, given the compelling nature of deductive and mathematical reasoning, social explanation is not required in mathematics. However, SMT highlights the role of scientists as creative agents who at critical junctures make theoretical choices that are crucial for the development of whole research programs. Such choices are not entirely constrained by causal relations; they take place in a tolerance space. Divergent paths may open up for research, the choice among which is physically underdetermined. As Cushing (1982; 1990) has repeatedly emphasized, the structure of the community of scientists becomes relevant in this respect. At least in HEP's case, as already mentioned, this structure has a marked pyramidal form. The major research options are presented to the community by relatively few dominant epistemic authorities situated at the apex of the pyramid. In SMT and QFT, these epistemic authorities were Chew, Goldberger, Gell-Mann, Low, Mandelstam, and a few others. Meanwhile, the majority of scientists 'play the game' as already outlined. This explains in part the stability and convergence of opinions in the research programs. It is a general fact within science that few people are called upon to change scientific theories, so the history of science is characterized by stability rather than by turning points. The point is aptly phrased by Cushing:

of course, such a theory is stable (for a longer or shorter period of time), until some more clever person finds the crucial chink. This stability is related to the fact that very few people have the ability and good fortune to create a theory that can cover a set of data and keep them nailed down while the theory is adjusted to cover some new phenomenon. (Cushing 1990: 245)

Apart from that, once a theory has established itself in a field, it is defended against newcomers. This is another important stability factor. First, simple empirical adequacy will not do for the candidate theory;1 since the old one must also be empirically adequate, the new one has to do better than this in order to occupy the throne. Second, as the choice of the relevant class of scattering events in HEP shows, the relevant questions are tailored to fit the interests and competence of the old theory's supporters. This diminishes the challenger's chances of success.

Another important question is how constraining the physical phenomena are on the process of theorizing. Are these constraints sufficient to uniquely determine a scientific theory? The SMT case study displays the central concern of accommodating the empirical phenomena. But no claim of more than empirical adequacy is warranted. We have already illustrated the pragmatism of the participants in the SMT program. The ease with which some of them switched their theoretical frameworks displays little ontological commitment. Both SMT and QFT were used instrumentally, and some theoreticians openly expressed their theoretical opportunism: they used the approach that promised to solve the immediate problems.

Once the ontological anchor comes loose, the uniqueness of the physical theories becomes less probable. Both QFT and SMT were, in circumscribed domains, empirically successful theories, but the choice between them was, in the overlap domain, underdetermined by the empirical data. At different moments epistemic considerations – simplicity (especially calculational), predictive power, theoretical potential – recommended one of the theories over the other. However, we saw that in both QFT and SMT, any conceptual concession was made for the sake of empirical success.

Using the terminology employed in chapter 5, the kind of underdetermination of the choice between SMT and QFT stems from EE2, the thesis that a given theory T has empirical rivals such that when T and its rivals, respectively, are conjoined with the set At of auxiliary hypotheses acceptable at time t, they entail the same observational sentences. It was argued that this sort of empirical equivalence does not entail a version of underdetermination problematic to scientific realism. In our case, the temporal index t traversed a period of almost three decades, after which SMT was abandoned without direct empirical refutation.

1 This seems to have happened to Bohm's deterministic version of quantum mechanics.

However, it cannot be excluded that novel empirical evidence might have made a crucial experiment feasible. In any event, QFT developed thereafter without a contender of SMT's caliber. Gauge QFT became the 'standard model', offering us the most detailed and accurate picture of the internal constitution of matter by means of the electroweak theory and of quantum chromodynamics.

A final issue to be discussed concerns the inevitability of constructed theories. In line with Cushing (1990: 266), we can ask whether theoretical physics would have arrived at one of its most promising constructs, superstring models, had it not been for the existence, nearly four decades earlier, of Heisenberg's S-matrix theory, which ultimately proved in its own right to be a dead end. It is impossible to prove that we would not have arrived at the quantized string model by another route. No doubt we might have, though I believe this eventuality is rather implausible. Nonetheless, among the most creative and influential high-energy physicists, there is the firm idea of a cognitive necessity dictating the normal succession of scientific discoveries. Richard Feynman, for example, said that if Heisenberg had not done it, someone else soon would have, as it became useful or necessary. Richard Eden, one of the early contributors to SMT, takes a similar position:

a general view that I would support [is] that most people’s research would have been discovered by someone else soon afterwards, if they had not done it. I think that this would have been true for Heisenberg’s S-matrix work. (Eden: correspondence with Cushing (1990: 267))

This strikes me as unjustified optimism. What kind of necessity can guarantee that a theory created by a scientist would, under different circumstances, have been created by another?2 I shall not press this issue, since even if we grant that most discoveries would have been made by someone else, the question remains open as to whether, at critical junctures, the same creative moves would have been made. In other words, while it is indeed plausible that quantum theory would have been created even without Niels Bohr, would it have necessarily looked like the Copenhagen version? To answer in the affirmative would be a risky induction. Even the most important theoretical decisions involve a degree of historical contingency. It is not unimaginable that SMT could have been preferred over QFT.

2 I have in mind situations which precede the constitution of well-established theories. Given quantum mechanics, it is highly probable that Heisenberg's uncertainty relations would have been discovered by someone else too. The internal logic of the discipline would have imposed it. However, given classical physics, no one need have discovered quantum mechanics. (I am grateful to J. R. Brown for drawing this point to my attention.)

These considerations do not in any respect disconfirm the accuracy of the scientific realist account of science. They only raise doubts about a frequent interpretation of scientific realism as an overarching doctrine. A scientific realism more true to scientific practice ought to be selective, i.e. it ought to tolerate at its side episodes in which instrumentalism and moderate social constructivism are present. Certainly, it would be nice to have a clear-cut definition of such a selective scientific realism, but that would demand a principled distinction between theories urging a realist interpretation and theories urging an instrumentalist/constructivist interpretation. We cannot provide that, and it is doubtful that it can be provided. For reasons presented in 2.1.3, scientific realism accounts for the majority of scientific theories. The identification of those theories where instrumentalism and constructivism have their say demands empirical investigation, that is, a case-by-case analysis. Nonetheless, we have hinted at the main suspects: abstract model theories with complex mathematical formalisms and loose causal connections to the physical world.

Chapter 8

Appendix: Truthlikeness

For multiple reasons, strict truth is not to be had in science. Scientific theories involve, virtually without exception, idealizations, approximations, simplifications, and ceteris paribus clauses. Scientific predictions can only be verified within the limits of experimental errors stemming both from calculation and from unremovable 'bugs' and 'noise' in the experimental apparatus. Therefore, to impose standards on science so high as to accept only true sentences would mean to expel most of the scientific corpus as we know it. For the antirealist, this is already a reason for scepticism. He explains the obvious empirical success of science as a matter of selection, through trial and error, of the lucky scientific theories – yet we have seen that this is not, properly speaking, an explanation. The realist is an epistemic optimist. Aware that we cannot reach the exact truth, he resolutely maintains that we can live well with a more modest, fallibilist conception of science, according to which our best theories are truthlike, that is, ascertainably close to the truth. For many practical purposes, we have a fairly dependable intuitive notion of closeness to the truth.1 As Devitt (2002) indicates, "a's being approximately spherical explains why it rolls." However, against the advocates of an intuitive approach to truthlikeness (like Psillos 2000), it will be argued (A.2) that many problems related to the dynamics of scientific theories (e.g. unification, reduction, theory replacement, etc.) demand fairly accurate measurements of the distance to the truth. In fact, we have such an acceptable quantitative theory in Niiniluoto's (1978; 1999) account. In A.3 we shall critically consider the position of Ronald Giere (1988, 1999), an opponent of the notion of truthlikeness. Let us now proceed with Karl Popper's pioneering approach to verisimilitude.

1 The notions of approximate truth, truthlikeness, and verisimilitude are used interchangeably by some authors. Still, though closely related, they should, for reasons to be presented in A.2, be kept apart.

A.1 Popper's theory of verisimilitude

According to Popper's (1963; 1972) falsificationist view of science, theoretical hypotheses are conjectured and tested through the observable consequences derived from them. If a hypothesis passes the test, then it is 'corroborated'. Being corroborated really means nothing more than being unrefuted by experimental testing. In particular, corroboration is not to be taken as an indicator of evidential support or confirmation for the hypothesis – Popper wanted his view to be thoroughly deductivist, excluding notions such as induction or confirmation. Popper suggested, however, that corroboration is a fallible indicator of verisimilitude, meaning likeness to the truth. As Niiniluoto (1999: 65) notes, although the concept of probability is derived from the Latin verisimilitudo, Popper distinguished verisimilitude from probability. His insight was to define verisimilitude in purely logical terms. He took theories to be sets of sentences closed under deduction, and defined verisimilitude by means of relations between their truth-content and falsity-content:

Theory A is less verisimilar than theory B if and only if (a) their truth-contents are comparable, and (b) either the truth-content of A is less than the truth-content of B and the falsity-content of B is less than or equal to the falsity-content of A; or the truth-content of A is less than or equal to the truth-content of B and the falsity-content of B is less than the falsity-content of A. The truth-content TT of a theory T is the class of all true consequences of T. (Popper 1972: 52)

Expressed formally, theory B is more truthlike than theory A if and only if either

A ∩ T ⊂ B ∩ T and B ∩ F ⊆ A ∩ F,   (1)

or

A ∩ T ⊆ B ∩ T and B ∩ F ⊂ A ∩ F,   (2)

where T and F are the sets of true and false sentences, respectively, and ⊂ is strict set-theoretic inclusion. However, Pavel Tichý (1974) and David Miller (1974) proved that Popper's definition is defective. They demonstrated that, for false theories A and B, the two conditions in (1) cannot both be satisfied – and the same goes for (2). Assume that (1) is the case. Then B has at least one more true consequence than A has. Let us label this sentence q. Since B is false, there are false sentences common to A and B. Let us take one of them and label it p. It follows that

p & q ∈ B ∩ F and p & q ∉ A ∩ F.

It turns out that, contrary to our initial assumption, there is at least one false consequence of B which is not a false consequence of A. Assume now that (2) is the case. Then A has at least one more false consequence than B has. Let us label this sentence r. Take any false consequence common to A and B, say k. It follows then that

k → r ∈ A ∩ T and k → r ∉ B ∩ T, since r ∉ B. So, contrary to the initial assumption, there is at least one true consequence of A which is not a true consequence of B. Therefore, (1) and (2) cannot be true. The vulnerable point in Popper's account is that the verisimilitude of false theories cannot be compared. Suppose that, starting from the false theory A, we try to obtain a more verisimilar theory B by adding true statements to A. The problem is that we thereby add to B falsities which are not consequences of A. The situation is symmetrical when we try to improve on A's verisimilitude by taking away some of its falsities: we thereby take away from A true statements which are not true consequences of B. In spite of the failure of Popper's definition, the basic line of his approach to verisimilitude has been appropriated and integrated into more complex theories. According to Niiniluoto, what was missing from Popper's approach was a notion of similarity or likeness:

truthlikeness = truth + similarity.

This approach was first proposed by Hilpinen and Tichý (1974), and thereafter developed by Niiniluoto, Tuomela and Oddie.
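Before turning to that approach, the Tichý–Miller incomparability result can also be checked mechanically. The following sketch is my own illustration, not part of the original argument: it works in a toy propositional language with two atoms, identifies a sentence with its set of models and a theory with its set of worlds, and verifies by brute force that no two false theories satisfy Popper's conditions:

```python
from itertools import combinations, product

worlds = list(product([False, True], repeat=2))   # 4 valuations of two atoms
# Sentences up to logical equivalence = all sets of valuations.
sentences = [frozenset(c) for r in range(5) for c in combinations(worlds, r)]
actual = (True, True)                             # the actual world

def truth_falsity_contents(models):
    """Truth- and falsity-content of the theory whose model set is `models`:
    its consequences (sentences true in all its models), split by truth."""
    cn = [s for s in sentences if models <= s]
    return ({s for s in cn if actual in s}, {s for s in cn if actual not in s})

# A consistent theory is false iff its model set excludes the actual world.
false_theories = [frozenset(c) for r in range(1, 4)
                  for c in combinations([w for w in worlds if w != actual], r)]

for A, B in product(false_theories, repeat=2):
    At, Af = truth_falsity_contents(A)
    Bt, Bf = truth_falsity_contents(B)
    # Popper's clauses (1) and (2) for 'B is more truthlike than A':
    assert not (At < Bt and Bf <= Af)
    assert not (At <= Bt and Bf < Af)

print("No pair of false theories is Popper-comparable.")
```

The assertions hold for every pair, exactly as the p & q and k → r constructions above predict.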

A.2 The possible worlds/similarity approach

Aronson, Harré and Way (henceforth AHW) (1994) and Psillos (1999) label the formal attempts by Tichý (1976), Oddie (1986), and Niiniluoto (1987) the 'possible world approach'. Niiniluoto (1999) himself uses the name 'similarity approach', because he does not actually rely on possible-worlds talk. In any event, in spite of terminological differences, the approaches of Tichý, Oddie and Niiniluoto have important commonalities. What follows is a mixed presentation of the 'possible worlds' and 'similarity' aspects of the approach. I mainly draw on Niiniluoto's (1999) more sophisticated formal results when discussing the ability to solve real philosophical problems, but apply the handier formulas of Tichý and Oddie to simple examples. Truthlikeness à la Tichý and Oddie is characterized in terms of the distance between a possible world and the actual world.

A theory T picks out a set of possible worlds from the set of all possible worlds. T is characterized in terms of a set of basic states it ascribes to the world. So, given n basic states, there will be $2^n$ possible worlds $W_i$, defined through the conjunction:

$$W_i = \bigwedge_{i=1}^{n} \pm h_i,$$

where $i = 1, \ldots, n$, and the $h_i$ are sentences formulated in a semantically interpreted language L, characterizing basic states. The actual world $W_A$ is among the possible worlds. Every consistent theory characterizes a possible world, while the actual world corresponds to the true theory. Possible worlds then correspond to all conceivable distributions of truth-values of the $h_i$. The set of statements corresponding to the basic states constitutes, in Niiniluoto's terms, a cognitive problem:

$$B = \{h_i \mid i \in I\}$$

The requirement is that the elements of B be mutually exclusive and jointly exhaustive:

$$\vdash \neg(h_i \,\&\, h_j) \quad \text{for all } i \neq j,\ i, j \in I,$$

and

$$\vdash \bigvee_{i \in I} h_i.$$

If the basic states of the actual world are unknown (as is frequently the case in science), we have a cognitive problem B with the target $h^*$. Thus, the cognitive problem consists in identifying, among all the $h_i$ which constitute the possible worlds $W_i$, the sentence $h^*$ true of the actual world. The above conditions guarantee that there is one and only one element $h^*$ of B which is true in $W_A$. Niiniluoto defines the statements $h_i$ in B as complete potential answers. Disjunctions of complete answers constitute partial potential answers; the latter belong to the disjunctive closure D(B) of B:

$$D(B) = \Big\{ \bigvee_{i \in J} h_i \ \Big|\ \emptyset \neq J \subseteq I \Big\}$$

A real-valued function is then introduced, in order to measure the distance between the elements of B:

$$\Delta : B \times B \to \mathbb{R}, \qquad \Delta(h_i, h_j) = \Delta_{ij},$$

where $0 \leq \Delta_{ij} \leq 1$, and $\Delta_{ij} = 0$ iff $i = j$. ∆ needs to be specified in each epistemic context. But, as Niiniluoto (1999: 69–71) shows, there are standard ways of doing this for specific problems. For example, in a mathematical problem dealing with real numbers, the distance between two points $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$ in $\mathbb{R}^n$ is given by

$$\sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}.$$

Next, he extends the definition of ∆ to a function $B \times D(B) \to \mathbb{R}$, such that $\Delta(h_i, g)$ measures the distance of partial answers $g \in D(B)$ from the complete answers $h_i \in B$. If $g \in D(B)$ is a potential answer such that

$$\vdash g \equiv \bigvee_{i \in I_g} h_i,$$

where $I_g \subseteq I$, then g is true if and only if it includes $h^*$. At this point, Niiniluoto introduces the following measures:

$$\Delta_{\min}(h_i, g) = \min_{j \in I_g} \Delta_{ij}$$

$$\Delta_{\mathrm{sum}}(h_i, g) = \frac{\sum_{j \in I_g} \Delta_{ij}}{\sum_{j \in I} \Delta_{ij}}$$

$$\Delta_{\mathrm{ms}}^{\gamma\gamma'}(h_i, g) = \gamma\,\Delta_{\min}(h_i, g) + \gamma'\,\Delta_{\mathrm{sum}}(h_i, g),$$

($\gamma > 0$, $\gamma' > 0$), with the following meanings: "$\Delta_{\min}$ is the minimum distance from the allowed answers to the given answer, $\Delta_{\mathrm{sum}}$ is the normalized sum of these distances, and $\Delta_{\mathrm{ms}}$ is the weighted average of the min- and sum-factors." (Niiniluoto 1999: 72). Quantitative definitions of approximate truth and truthlikeness, respectively, can now be offered:

$$AT(g, h^*) = 1 - \Delta_{\min}(h^*, g),$$

$$Tr(g, h^*) = 1 - \Delta_{\mathrm{ms}}^{\gamma\gamma'}(h^*, g).$$

The plausibility of these formulas is supported by some interesting properties. g is approximately true when $\Delta_{\min}$ is sufficiently small. In the limit, if $\Delta_{\min} = 0$, g is strictly true, that is, approximately true to the degree 1.

Truthlikeness is defined not only as closeness to truth, but also as information content (i.e. as exclusion of falsity). So, Niiniluoto's min-sum formula provides us with a method to achieve a trade-off between these desiderata: "the weights γ and γ′ indicate our cognitive desire of finding truth and avoiding error, respectively." (1999: 73). He further notes that "if we favored only the truth (γ′ = 0), then nothing would be better than a trivial tautology, and if we favored only information content (γ = 0), then nothing would be better than a logical contradiction." (1999: 73)
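The measures are simple enough to compute directly. The sketch below is my own illustration; the three-state problem and the normalized Hamming distance as ∆ are assumptions of the sketch, anticipating the weather example discussed in the next subsection:

```python
from itertools import product

# Cognitive problem with n = 3 basic states: complete answers are the
# eight conjunctions +/-h1 & +/-h2 & +/-h3, encoded as bit-tuples.
# The target h* is taken to be (1, 1, 1); Delta is the normalized
# Hamming distance between complete answers.
n = 3
answers = list(product([0, 1], repeat=n))
target = (1, 1, 1)

def delta(a, b):
    return sum(x != y for x, y in zip(a, b)) / n

def at_and_tr(partial, gamma=0.5, gamma_prime=0.5):
    """AT and Tr of a partial answer, given as the set of complete
    answers it is the disjunction of."""
    d_min = min(delta(target, a) for a in partial)
    d_sum = (sum(delta(target, a) for a in partial)
             / sum(delta(target, a) for a in answers))
    return 1 - d_min, 1 - (gamma * d_min + gamma_prime * d_sum)

# The partial answer 'h1': the disjunction of the four complete answers
# in which the first basic state holds.
g = {a for a in answers if a[0] == 1}
print(at_and_tr(g))   # AT = 1.0 (g includes the target); Tr ~ 0.83
```

Raising γ′ relative to γ penalizes the uninformative disjuncts of g more heavily, exactly the trade-off Niiniluoto describes.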

A.2.1 The criticism of the possible world/similarity approach

Among the critics of the possible world/similarity account of truthlikeness are AHW (1994) and Psillos (1999). In spite of the relative clarity of their reconstruction of the approach, both AHW and Psillos – who shares virtually all the views of the former – commit certain simplifications which put into question their proper understanding of the matter. For one thing, they use the concepts of truthlikeness and verisimilitude interchangeably in the context of Niiniluoto's theory, which is simply flawed. Moreover, they ascribe to truthlikeness the definition which Niiniluoto actually reserves for approximate truth, a fact indicating a further confusion: the one between approximate truth and truthlikeness. We have just seen that approximate truth and truthlikeness have different definitions: $AT = 1 - \Delta_{\min}$, while $Tr = 1 - \Delta_{\mathrm{ms}}^{\gamma\gamma'}$. Verisimilitude, for its part, also incorporates an element of epistemic probability, $P(h_i/e)$, understood as the rational degree of belief in the truth of $h_i$ given the empirical evidence e. The expected degree of verisimilitude of $g \in D(B)$ given evidence e is then

$$\mathrm{ver}(g/e) = \sum_{i \in I} P(h_i/e)\, Tr(g, h_i).$$

Notwithstanding these inexactitudes in Psillos's description, his criticism of the possible worlds theory of truthlikeness draws upon two reputable objections, raised by Miller (1976) and by Aronson (1990) and AHW (1994), respectively.

(a) Miller constructs two weather-predicates: ‘is Minnesotan’ and ‘is Arizo- nan’ out of three natural weather predicates ‘hot’, ‘rainy’, and ‘windy’: a type of weather is Minnesotan if and only if it is either hot and rainy or cold and dry:

m =df (h ∧ r) ∨ (¬h ∧ ¬r); a type of weather is Arizonan if and only if it is either hot and windy or cold and still:

a =df (h ∧ w) ∨ (¬h ∧ ¬w).

Given these definitions, the following equivalence can be easily obtained:

h ∧ r ∧ w ≡ h ∧ m ∧ a.

Thus, the target problem can be formulated either in terms of {h, r, w} or, alternatively, in terms of {h, m, a}. A problem arises from the following fact: if the target theory is h ∧ r ∧ w, then the statement ¬h ∧ ¬m ∧ ¬a, which is logically equivalent to ¬h ∧ r ∧ w, proves to be less truthlike than the statement ¬h ∧ m ∧ a (which is logically equivalent to ¬h ∧ ¬r ∧ ¬w). In other words, according to Niiniluoto's definition of truthlikeness, while it is obvious that

Tr(¬h ∧ r ∧ w) > Tr(¬h ∧ ¬r ∧ ¬w), the logical equivalents stand in the reversed truthlikeness relationship:

Tr(¬h ∧ ¬m ∧ ¬a) < Tr(¬h ∧ m ∧ a).
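The reversal can be verified mechanically. The following sketch is my own illustration, using the Tichý–Oddie measure (one minus the average normalized distance of a statement's worlds from the actual world) and evaluating each statement in its own vocabulary:

```python
from itertools import product

def tr(statement, actual):
    """Tichy-Oddie truthlikeness: 1 minus the average normalized
    distance of the statement's worlds from the actual world."""
    worlds = list(product([True, False], repeat=3))
    models = [w for w in worlds if statement(*w)]
    dist = lambda w: sum(a != b for a, b in zip(w, actual)) / 3
    return 1 - sum(map(dist, models)) / len(models)

# {h, r, w} vocabulary; the actual weather is hot, rainy and windy.
print(tr(lambda h, r, w: not h and r and w, (True, True, True)))          # 2/3
print(tr(lambda h, r, w: not h and not r and not w, (True, True, True)))  # 0.0

# {h, m, a} vocabulary; the actual weather is hot, Minnesotan, Arizonan.
# Recall: not-h & r & w translates to not-h & not-m & not-a, and
# not-h & not-r & not-w translates to not-h & m & a.
print(tr(lambda h, m, a: not h and not m and not a, (True, True, True)))  # 0.0
print(tr(lambda h, m, a: not h and m and a, (True, True, True)))          # 2/3
```

The same state of affairs – wrong only about heat – thus scores 2/3 in one language and 0 in the other.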

The problem seems indeed to be a serious one. Some philosophers (Urbach 1983; Barnes 1991) have concluded that truthlikeness à la Niiniluoto is a shaky concept. Nonetheless, Miller's objection is itself problematic. It assumes complete liberty in choosing the language in which to formulate the cognitive problem. That is, it assumes that the {h, r, w}-language and the {h, m, a}-language are equally entitled to serve for the formulation of target theories. When a target theory is formulated in one language and thereafter mapped onto another language, what happens is that the metric ∆ is in general not preserved. Consequently, such mappings can dramatically distort the cognitive problem. Of course, in scientific practice the difficulty is avoided by the fact that only one language – namely, the one in fact used by the scientific community – is in use. But is this methodological fact metaphysically arbitrary? Can we dispose ad libitum of the linguistic framework in which to formulate our cognitive problem? I do not believe so. There is a straightforward sense in which 'hot', 'rainy', and 'windy' are more fundamental than 'is Minnesotan' and 'is Arizonan'. The former are 'natural kind' predicates and serve as constitutive elements for the latter. It suffices for now to say that I take them to be 'natural' not in the sense of any essentialist metaphysics, but in the sense of their belonging to the most adequate linguistic framework, given the constraints set by the outside world.

A language is adequate if it contains concepts lawfully related to the quantities involved in the target theory. In this sense, as Niiniluoto states, "it may be possible to find, for each cognitive problem, a practically ideal language (or a sequence of more and more adequate languages)." (1999: 77). The adequacy of the language to particular cognitive interests brings a methodological or pragmatic dimension to the concept of truthlikeness. In order to see whether objects are similar by measuring the distance between their basic states, we use a class of relevant features, as well as certain weights for these features. It follows that truthlikeness is relative to cognitive problems. Yet pragmatic interests by no means exclude epistemic objectivity. Niiniluoto is explicit about the ways in which truthlikeness depends on our cognitive interests:

(i) We measure the distance from truth relative to the target theory h∗, not relative to the whole world.

(ii) The choice of the metric ∆ involves dimensions and weights corresponding to specific cognitive interests.

(iii) The weighted average of the min and sum factors directly expresses our cognitive interests in finding truth (γ) and shunning error (γ′). These two parameters "point in opposite directions, and the balance between them is a context-sensitive methodological decision, and cannot be effected on purely logical grounds." (1999: 77)

I believe these considerations can do justice to the scruples legitimately voiced by Wolfgang Spohn:

An aprioristic procedure seems questionable also on general grounds. Analogy, in its full sense only meagerly captured in formal models..., is a highly a-posteriori matter; it is concerned with often rather vague considerations (or should we say: feelings?) of how theories in one empirical field might be carried over to another field; and passable intuitions about concrete analogies only evolve after a thorough-going examination of the subject at hand. (Spohn 1981: 51)

Moreover, Niiniluoto’s above remarks are also cogent with respect to a second interesting objection raised against the similarity theory of truthlikeness.

(b) Aronson (1990) and AHW (1994) have raised a different criticism, which they deem even more devastating, based on the following two intuitions:

...first of all, no false statement can be equally true or truer than the truth; and, secondly, the number of basic states in the universe should not, in itself, affect the verisimilitude of a proposition. The second can be put another way. Theories carve out chunks of the world which are semantically independent of one another. For example, the fact that there are one billion Chinese should not affect the truth or verisimilitude of the special theory of relativity unless the latter somehow entail the former. (AHW 1994: 118)

First, AHW have noticed that by adding new items of information to the description of the world, the truthlikeness of extant propositions changes. Recall that in the weather-predicates model, the target theory given the three basic states is h ∧ r ∧ w. Let us see what happens to h after adding new basic states. Initially, the truthlikeness of h alone is Tr(h) = 0.67. However, after the addition of a fourth predicate, say 'cloudy', the Oddie–Tichý truthlikeness measure of h decreases to Tr(h) = 0.625. A fifth weather predicate diminishes it even further: Tr(h) = 0.6. The trend continues, so that the truthlikeness of h given n basic states is given by the formula

$$Tr(h) = \frac{n+1}{2n}.$$

Meanwhile, the truthlikeness of false propositions follows the opposite trend: by adding a fourth basic state, the truthlikeness of ¬h increases from Tr(¬h) = 0.33 to 0.375, and goes to 0.4 for five predicates. The evolution is given by the formula

$$Tr(\neg h) = \frac{n-1}{2n}.$$

Obviously,

$$\lim_{n \to \infty} Tr(h) = \lim_{n \to \infty} Tr(\neg h) = \frac{1}{2}.$$

This indicates that as the number of basic states tends to infinity, a false proposition has the same truthlikeness as a true one, a result which AHW rightly find aberrant. Second, AHW have noticed that the truthlikeness of h is also altered by the addition of further basic states which have nothing at all to do with h's content: "h is about the weather while the 100th state might be about something entirely unrelated to the weather: say, the average height of the mountains on the moon." (AHW 1994: 119). Of course, it is prima facie utterly counterintuitive that the verisimilitude of a contingent proposition is sensitive to the addition of propositions which have nothing to do with h's content. So, AHW conclude, a "pernicious holism" is entailed by this version of truthlikeness.
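These numbers, and the limit, are easy to reproduce. The following sketch is my own illustration; it computes the Tichý–Oddie measure of h and ¬h directly as predicates are added, the target again being the world in which all basic states obtain:

```python
from itertools import product

def tr_of_h(value, n):
    """Tichy-Oddie truthlikeness of 'h' (value=True) or 'not-h'
    (value=False) in a problem with n basic states, the target being
    the world where all n states obtain."""
    models = [w for w in product([True, False], repeat=n) if w[0] == value]
    avg_dist = sum(w.count(False) / n for w in models) / len(models)
    return 1 - avg_dist

for n in (3, 4, 5, 10):
    print(n, round(tr_of_h(True, n), 3), round(tr_of_h(False, n), 3))
# n=3: 0.667/0.333, n=4: 0.625/0.375, n=5: 0.6/0.4 -- matching
# (n+1)/2n and (n-1)/2n, and both drifting toward 1/2 as n grows.
```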

For these reasons AHW deem the possible world/similarity approach to truthlikeness to be insuperably flawed and propose instead their own account of verisimilitude: the so-called 'type-hierarchies' approach, which will be briefly discussed below. For his part, Psillos loses faith in the possibility of reaching a capable formalized account of truthlikeness and opts for an intuitive concept. Nonetheless, their conclusions are way too hasty. For one thing, I do not agree that their objection is that damaging to Niiniluoto's account. For another – as can here be shown only in passing – their own proposals do not fare any better.

It does indeed follow from Niiniluoto's min-sum formula that false propositions can be more truthlike than true ones. But truthlikeness has two dimensions: (i) acquiring truth, and (ii) shunning error. The trade-off between them is reached by the weights respectively assigned to γ and γ′. If one values exclusively the truth-finding aspect (i) (i.e., γ = 1), then any tautology is better than the best-established statements of our best science. However, we know that it cannot be so. We value scientific statements also for being informative. Consequently, we must accept that some false statements, if sufficiently close to the truth, are better than some true ones.2

As to the other aspect of the criticism, we have already insisted that truthlikeness is defined for specific target problems; it is relative to specific cognitive interests and to specific epistemic contexts. When the information about a target problem changes, the target itself changes, so it is unsurprising that the distance of a given statement from the target changes as well. The point is illustrated by Psillos, based on a personal communication from Niiniluoto:

Suppose, for instance, that you are asked to tell the color of Professor Niiniluoto’s eyes, and that you have a theory h which says (correctly) that they are blue. But now suppose you give the same answer in the context of a question that concerns the color of his eyes, hair and skin. In this context the answer h is less verisimilar than it was in the context of the previous question, because it gives much less information about the relevant truth. (Psillos 1999: 269)

Psillos accepts this as a fair point but still displays an uneasiness arising from the contextual character of truthlikeness judgements. Now, it has been sufficiently argued that the dependency of truthlikeness on our cognitive interests does not make the measurements of the distances to the truth less objective. This feature is also important in responding to the accusation of 'pernicious

2 To have a more precise notion of how close the false statement must be to the truth, we can impose, in specific epistemic contexts, threshold values which set constraints on the acceptable values of $\Delta_{\mathrm{sum}}$.

holism'. A specific epistemic context excludes the addition of propositions corresponding to basic states of the world which are irrelevant to the target theory. Therefore, there is no reason to alter, say, a target problem in quantum physics on the grounds of information regarding the Chinese population.

There is no purely semantic theory of truthlikeness, strictly in terms of distances to the truth. Truthlikeness also incorporates an essential pragmatic or methodological component. As a matter of fact, the latter is also present in AHW's (1994) account of verisimilitude: the 'type-hierarchies approach'. They construe theories as networks of concepts (nodes) related by links representing the relations between concepts. The higher nodes in the network stand for higher types, while the lower nodes stand for subordinate types. Links are then relations of instantiation between nodes in the hierarchy. AHW's account is based on the idea of similarity within type-hierarchies. Two types are said to be similar when they are represented as subtypes of the same type. For example, both 'whale' and 'dog' are subtypes of the type 'mammal'. Furthermore, among the subtypes of 'mammal', some are more similar to one another than others. For instance, 'whale' is more similar to 'dolphin' than it is to 'dog'. AHW borrow a distance function from Amos Tversky (1977) in order to measure degrees of similarity.

The question to be raised then is: what determines a type-hierarchy in the first place, if not prior judgements of similarity? How do we establish the framework in which to make quantitative measurements? Obviously, the answer lies in the particular cognitive interests of the scientific community. The prior configuration of the network corresponds to pragmatic and methodological interests. Thus, AHW's approach is by no means less context-dependent than the possible worlds/similarity approach. It can therefore be concluded that the similarity approach to truthlikeness manages to overcome the criticisms raised by Miller and by AHW. Moreover, the approach has the ability to deal with a multitude of other major challenges in the dynamics of theories: approximation, idealization, meaning variance, conceptual enrichment, reduction, etc. (cf. Niiniluoto 1999).

A.3 Anti-truthlikeness: Giere’s constructive-realist proposal

Many realist philosophers have lost optimism in the prospects of a formalized account of increasing truthlikeness. Devitt (1984) is one of them. Yet he does not think that problems with the doctrine of truthlikeness are problems for scientific realism. He defends the view that scientific realism is not necessarily related to the doctrine of convergence, just as it is not necessarily related to any theory of truth (Devitt 1984: 114–5). For the purposes of scientific realism, disquotationalism with respect to the usage of 'refer', 'truth', and 'approximate truth' is as good as a robust theory (Devitt 2003). However, as already pointed out, there are reasons why straight talk of approximate truth is more help than its disquotation (see chapter 2).

Psillos (1999) is also sceptical about the prospects of a formalized account of truthlikeness. However, he deems the concept indispensable to scientific realism and believes that an intuitive account will suffice. But we saw that there are situations related to the dynamics of theories that repel any vagueness in the truthlikeness measures, that is, situations demanding quantitative measurements of the distances to the truth.

There are also realists who sanguinely reject any approach to verisimilitude as misguided. Ronald Giere (1988, 1999), for example, thinks that philosophy of science should do away with 'the bastard semantic relationship of approximate truth' (1988: 106). This claim ought to be understood as part of his conviction that the philosophy of science should be freed from general questions about language (1999: 176). He opposes the propositional view of theories – which he deems a vestige of logical empiricism – and proposes instead the so-called semantic view of theories, which construes theories as sets of models. According to Giere, models are non-linguistic representational devices satisfying certain theoretical definitions – usually, sets of mathematical equations. For example,

A one-dimensional linear harmonic oscillator is a system consisting of a single mass constrained to move in one dimension only. Taking its rest position as origin, the total energy of the system is,

$$H = T + V = \frac{p^2}{2m} + \frac{1}{2}kx^2,$$

where

$$p = m\frac{dx}{dt}.$$

The development of the system in time is given by solutions to the following equations of motion:

$$\frac{dx}{dt} = \frac{\partial H}{\partial p}$$

$$\frac{dp}{dt} = -\frac{\partial H}{\partial x}.$$

(Giere 1999: 175)

What is the relationship between this theoretical model and the physical world? Giere charges theoretical hypotheses with the task of making this link. The idea is to define theoretical systems so as to be faithful replicas of real systems. Of course, no real linear oscillator (e.g., a bouncing spring) can be replicated in all detail by the above model. However, as Giere suggests, this can be done in specified respects and to specified degrees:

I propose we take theoretical hypotheses to have the following general form:

The designated real system is similar to the proposed model in specified respects and to specified degrees.

We might claim, for example, that all quantities in our spring and weight system remained within ten percent of the ideal values for the first minute of operation. The restriction to specified respects and degrees insures that our claims of similarity are not vacuous. (Giere 1999: 179)
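Giere's 'specified respects and degrees' lends itself to a direct computational reading. The sketch below is my own illustration – the damping constant is an arbitrary assumption standing in for the friction of an actual spring-and-weight system – and it tests whether the 'real' trajectory stays within ten percent of the ideal model's values over the first minute:

```python
import numpy as np

# Ideal model: x(t) = A cos(wt), with w = sqrt(k/m).
# 'Real' system: the same oscillator with a weak exponential damping
# gamma (an assumed, arbitrary value standing in for actual friction).
m, k, A, gamma = 1.0, 4 * np.pi**2, 1.0, 1e-3
w = np.sqrt(k / m)
w_damped = np.sqrt(w**2 - gamma**2)

t = np.linspace(0.0, 60.0, 60_001)        # the first minute of operation
x_model = A * np.cos(w * t)
x_real = A * np.exp(-gamma * t) * np.cos(w_damped * t)

# Theoretical hypothesis, Giere-style: the real system is similar to the
# model in the respect 'position' and to the degree 'within 10% of the
# amplitude' over the stated interval.
deviation = np.max(np.abs(x_real - x_model)) / A
print(f"max deviation over one minute: {deviation:.1%}")   # about 5.8%
assert deviation < 0.10                    # the hypothesis comes out true
```

With stronger damping the same hypothesis would come out false, which is exactly the truth-evaluability Giere claims for theoretical hypotheses.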

So, the substantive claim of a theoretical hypothesis is that the real system is similar to the model to a specified degree. The real bouncing spring is similar to the linear harmonic oscillator to a certain degree. Typically, advocates of the semantic approach to theories impose rigorous constraints on the relations between models and the modelled systems. Suppe (1977) urges that the theoretical model be homomorphic to an idealized replica of the empirical phenomena. Van Fraassen (1980) demands that an empirical substructure of the model be isomorphic to the observed phenomena. By contrast, Giere's relation of similarity is rather vague. He tells us that similarity is a matter of degree, but offers no metric with which to measure degrees of similarity. More to the point, theoretical hypotheses are truth-evaluable linguistic entities. They are true or false depending on the degree of similarity between the model and the real system. However, in line with Psillos (1999: 274), I suspect that this is only an indirect way of saying that the description which the theory gives of the real system is truthlike. After all, the job of theoretical hypotheses is to relate the model to idealized descriptions of the real systems. Thus, the similarity stated by theoretical hypotheses relies on idealizations of the real system. But truthlikeness appears to be the best instrument to bridge the gap between idealizations and reality.

To conclude, I believe that Giere's approach actually relies on tacit judgements of truthlikeness. Of course, these judgements can only be of an intuitive nature. Though sufficient for many practical purposes, they are problematic in specific situations where quantitative comparisons of distances from the truth are urged by theory dynamics.

Summary

The present work is a defense of the doctrine of scientific realism. Our working definition of scientific realism comprises the following two claims:

(i) Most of the essential unobservables posited by our well-established current scientific theories exist independently of our minds.

(ii) We know our well-established scientific theories to be approximately true.

Claim (i) presents the ontological aspect of scientific realism. This claim about the independent existence of unobservable scientific entities is contended by some purveyors of social constructivism, the doctrine according to which scientific facts are constructed by social intentional activity. Claim (ii) underscores the epistemic aspect of scientific realism. The realist claim about science's ability to provide us with (approximate) knowledge about the world is contended by various sceptics relying primarily on the argument from the underdetermination of theories by empirical data.

Chapter 1 (the Introduction) offers a topography of the various kinds of scientific realism – metaphysical, semantic and epistemic – and of their corresponding antirealisms, and explores the logical relations among them.

Chapters 2 and 3 offer positive argumentation for scientific realism. I begin with the so-called 'success arguments' for scientific realism. Scientific realism garners support from explanations of two respects in which science is very successful: first, scientific theories most of the time entail successful predictions. Second, science is methodologically successful in generating empirically successful theories. These are not trivial facts. Scientific realism explains them via inference to the best explanation (IBE): the best explanation for the empirical success of science is that theories are approximately true; the best explanation for the methodological success of science is its dialectical reliance on theories which are approximately true.

Section 2.1 presents Putnam's (1975) 'no miracle argument', as well as Smart's (1963) older 'no cosmic coincidence' argument and Maxwell's (1970) argument based on the epistemic virtues of theories. I also offer an explanationist argument for scientific realism, relying on the inability of its opponents to provide causal explanations. Section 2.2 is a defense of IBE against van Fraassen's criticisms of context dependency (2.2.1) and inconsistency (2.2.2). IBE is also defended against Fine's criticism of vicious circularity (2.4).

In chapter 3, I combine Hacking's (1983) experimental argument for entity realism with Salmon's (1984; 1998) common-cause principle. I take entity realism to be foundational to scientific realism. The idea of entity realism is that one may believe in the existence of some theoretical entities without believing in any particular theory in which these are embedded. Its motivation comes from experimental practice, where the manipulation of these entities often relies on incompatible theoretical accounts. The manipulation of electrons, for example, consists in disposing of their causal properties in order to obtain expected effects – typically, to investigate the properties of other kinds of entities. Thus, the manipulability of electrons based on their well-understood causal properties supports the belief in the reality of electrons. I argue that, at least in some cases, the experimental argument is to be reinforced by means of Salmon's common-cause analysis.

Chapters 4, 5, and 6 are a defence of scientific realism from antirealist attacks. Chapter 4 opens the discussion of the underdetermination topic. Several attempts to distinguish between an observable and an unobservable realm are critically discussed. I begin by presenting the received view's failed semantic dichotomy between observation and theory. Next, van Fraassen's observable/unobservable distinction is discussed and rejected. Fodor's attempt at a perception-grounded distinction between observation and theory is shown to be empirically unsettled. Finally, the shortcomings of Kukla's semantic approach are discussed.

Chapter 5 begins by discussing the possibility that for any given theory, there are empirically equivalent rivals generated by means of algorithms (5.1). In this manner, the underdetermination thesis would be straightforwardly established. However, I present extensive argumentation to the effect that such algorithmic rivals are not to be taken seriously. Next (5.2), several formulations of the thesis of empirical equivalence are inspected. The conclusion is that there are no compelling grounds to accept the version of empirical equivalence able to entail a strong underdetermination. Accordingly, scientific realism survives unscathed by the underdetermination debate.

Chapter 6 is dedicated to social constructivism. I proceed by distinguishing between a metaphysical, a semantic, and an epistemic variant of social constructivism. After analyzing each of them from the viewpoint of its consistency, I conclude that only a moderate version of metaphysical constructivism can stand on its own feet. Its claim is merely that some facts about the world are socially constructed.

Chapter 7 is a plea for a selective scientific realism, able to do justice to the presence of both instrumentalism and modest constructivism in scientific practice. Section 7.1 offers a historical outline of the S-matrix program in high energy physics.

The appendix begins with a presentation of Popper's theory of verisimilitude (A.1) and continues with Niiniluoto's similarity approach to truthlikeness (A.2). The latter is then defended against various criticisms (A.3).

Zusammenfassung

Das Ziel der vorliegenden Arbeit ist die Verteidigung des Wissenschaftlichen Realismus. Die zugrunde liegende Arbeitsdefinition von Wissenschaftlichem Realismus basiert auf den folgenden zwei Behauptungen:

(i) Die meisten wesentlichen unbeobachtbaren Entitäten, die von unseren gegenwärtig etablierten Theorien postuliert werden, existieren unabhängig von unserem Denken.

(ii) Wir wissen, dass unsere etablierten Theorien annähernd wahr sind.

Aussage (i) stellt den ontologischen Aspekt des Wissenschaftlichen Realismus dar. Diese Behauptung der unabhängigen Existenz unbeobachtbarer wissenschaftlicher Entitäten wird von manchen Anhängern des sozialen Konstruktivismus (die Lehre, laut der wissenschaftliche Tatsachen durch soziale intentionale Handlungen konstruiert werden) bestritten. Aussage (ii) konstituiert den epistemischen Aspekt des Wissenschaftlichen Realismus. Die Behauptung der Realisten, dass die Wissenschaft imstande ist, uns (annäherndes) Wissen über die Welt zu liefern, wird von manchen Skeptikern vor allem auf Grund des Arguments der Unterdeterminiertheit (underdetermination) von Theorien durch die empirischen Daten bestritten.

Kapitel 2 und 3 dieser Untersuchung vermitteln die positive Argumentation für den Wissenschaftlichen Realismus. Ich beginne mit den so genannten „Erfolgsargumenten" für den Realismus. Wissenschaftlicher Realismus wird gestützt durch die Erklärung folgender zwei Phänomene, in denen die Wissenschaft erfolgreich ist: erstens ist Wissenschaft erfolgreich in der Erzeugung genauer Vorhersagen. Zweitens ist Wissenschaft methodologisch erfolgreich in der Herstellung von empirisch erfolgreichen Theorien. Beide sind keine trivialen Tatsachen. Wissenschaftlicher Realismus erklärt sie durch die inference to the best explanation: die beste Erklärung für den empirischen Erfolg der Wissenschaft ist, dass die Theorien annähernd wahr sind; die beste Erklärung für den methodologischen Erfolg der Wissenschaft ist, dass Wissenschaft sich in einer dialektischen Weise auf wahre Theorien verlässt.

Sektion 2.1 stellt Putnams (1975) no miracle Argument, sowie Smarts (1963) älteres no coincidence Argument und Maxwells (1970) Argument, das auf den epistemischen Tugenden der Theorien basiert, in den Mittelpunkt. Meine Argumentation für den Wissenschaftlichen Realismus basiert auf der Unfähigkeit der Opponenten des Realismus, kausale Erklärungen zu liefern. Sektion 2.2 beinhaltet eine Verteidigung der Inferenz zur besten Erklärung gegenüber van Fraassens Kontextabhängigkeits- und Inkonsistenz-Einwänden.

Im dritten Kapitel kombiniere ich Hackings (1983) experimentelles Argument für den Entitäten-Realismus mit Salmons (1984, 1998) Prinzip der gemeinsamen Ursache. Ich betrachte den Entitäten-Realismus als grundlegend für den Wissenschaftlichen Realismus. Kern des Entitäten-Realismus ist, dass man an die Existenz von theoretischen Entitäten glauben darf, ohne gleichzeitig an bestimmte Theorien zu glauben, in denen diese Entitäten eingebettet sind. Die Kraft des Entitäten-Realismus kommt von der experimentellen Praxis, in der sich die Manipulierung der Entitäten auf verschiedene, oft inkompatible theoretische Ansätze verlässt. So basiert beispielsweise die Manipulierung von Elektronen auf der Kontrolle ihrer kausalen Eigenschaften, um erwartete Effekte zu erhalten. Die auf ihren hinreichend verstandenen kausalen Merkmalen basierende Manipulierbarkeit von Elektronen stützt den Glauben an die Wirklichkeit von Elektronen. Ich argumentiere, dass mindestens in einigen Fällen das experimentelle Argument mithilfe einer Analyse der gemeinsamen Ursache verstärkt werden sollte.

Kapitel 4, 5 und 6 verteidigen den Wissenschaftlichen Realismus vor antirealistischen Angriffen. Kapitel 4 startet die Diskussion des Unterdeterminierungs-Themas. Es werden mehrere Versuche, zwischen beobachtbaren und nicht-beobachtbaren Entitäten zu unterscheiden, kritisch diskutiert. Ich beginne mit der Darstellung des misslungenen Versuchs des so genannten received view, eine semantische Dichotomie zwischen Beobachtung und Theorie zu etablieren. Als nächstes wird van Fraassens Unterschied zwischen beobachtbaren und nicht-beobachtbaren Entitäten diskutiert und widerlegt.

Kapitel 5 beginnt mit einer Erörterung über die Möglichkeit, dass für jede Theorie empirisch äquivalente konkurrierende Theorien existieren, die mittels eines Algorithmus erzeugt werden (5.1). Mit Hilfe solcher Konstrukte könnte die These der Unterdeterminiertheit unmittelbar durchgesetzt werden. Jedoch belege ich ausführlich, dass solche algorithmischen konkurrierenden Theorien nicht ernst zu nehmen sind. In Abschnitt 5.2 werden verschiedene Formulierungen der These der empirischen Äquivalenz diskutiert. Die Schlussfolgerung ist, dass es keine zwingenden Gründe gibt, die Version der empirischen Äquivalenz-These, die eine starke Unterdeterminiertheit nach sich zieht, zu akzeptieren. Dementsprechend ist der Wissenschaftliche Realismus durch die Unterdeterminiertheits-Debatte nicht zu widerlegen.

Kapitel 6 ist dem sozialen Konstruktivismus gewidmet. Ich beginne mit der Unterscheidung zwischen einer metaphysischen, einer semantischen und einer epistemischen Variante von sozialem Konstruktivismus. Nach der Analyse der Konsistenz einer jeden Variante schlussfolgere ich, dass lediglich eine moderate Version von metaphysischem Konstruktivismus haltbar ist. So erscheint nur die Konstruktion mancher Tatsachen in der Welt nachvollziehbar.

In Kapitel 7 plädiere ich für einen selektiven Wissenschaftlichen Realismus, der auch dem Instrumentalismus und einem moderaten Konstruktivismus eine Rolle zugestehen kann. Sektion 7.1 beinhaltet die historische Fallstudie des S-Matrix-Programmes in der Hochenergiephysik.

Der Appendix beginnt mit einer Darstellung von Poppers Theorie der verisimilitude (A.1) und setzt mit Niiniluotos Ähnlichkeitsansatz fort (A.2). Letzterer wird gegen mehrere kritische Beurteilungen verteidigt (A.3).

References

Aronson, J. L. (1990), ‘Verisimilitude and Type Hierarchies’, Philosophical Topics 18, 5–28.

Aronson, J. L., R. Harré, and E. Way (1994), Realism Rescued, London: Duckworth.

Barnes, B. (1982), 'On the Extension of Concepts and the Growth of Knowledge', Sociological Review 30, 23–44.

Bloor, D. (1976), Knowledge and Social Imagery, London: Routledge & Kegan Paul.

Bird, A. (1999), Philosophy of Science, University College London Press.

Block, N. (1993), ‘Holism, Hyper-Analyticity and Hyper-Compositionality’, Mind and Language 8 (1), 1–27.

Block, N. (1998), 'Holism: Mental and Semantic', Routledge Encyclopedia of Philosophy, vol. IV, 488–93.

Bloor, D. (1983), Wittgenstein: A Social Theory of Knowledge, London: Macmillan.

Bloor, D. (1991), Knowledge and Social Imagery (2nd edn. with afterword), Chicago: University of Chicago Press.

Boghossian, P. A. (1989), ‘The Rule-Following Considerations’, Mind 98, 507–49.

Boyd, R. (1984), 'The Current Status of Scientific Realism', in J. Leplin (ed.), Scientific Realism, Berkeley: University of California Press.

Braithwaite, R. B. (1953), Scientific Explanation, Cambridge: Cambridge University Press.

Brock, W. H. and D. M. Knight (1967), ‘The Atomic Debates’, in W. H. Brock (ed.).

Brock, W. H. (ed.) (1967), The Atomic Debates. Brodie and the Rejection of the Atomic Theory, Leicester University Press.

Brown, J. R. (1989), The Rational and the Social, London and New York: Routledge.

Brown, H. R. and R. Harré (eds.) (1988), Philosophical Foundations of Quantum Field Theory, Oxford: Clarendon Press.

Bruner, J. S., L. Postman, and J. Rodrigues (1951), 'Expectation and the Perception of Color', American Journal of Psychology LXIV, 216–27.

Butler, J. (1990), Gender Trouble: Feminism and the Subversion of Identity, Rout- ledge.

Campbell, K. (1994), ‘Selective Realism in the ’, The Monist, vol. 77, 1, 27–46.

Carnap, R. (1936–7), 'Testability and Meaning', Philosophy of Science 3, 419–71; 4, 1–40.

Carnap, R. (1956), ‘The Methodological Character of Theoretical Concepts’, in H. Feigl and M. Scriven (eds.), The Foundations of Science and the Concepts of Psychology and Psychoanalysis, Minneapolis: University of Minnesota Press.

Carnap, R. (1968), ‘Inductive Intuition and Inductive Logic’, in I. Lakatos (ed.), The Problem of Inductive Logic, Amsterdam: North-Holland Publishing Company.

Cartwright, N. (1983), How the Laws of Physics Lie, Oxford: Clarendon Press.

Chew, G. F. (1961), S-Matrix Theory of Strong Interactions, New York: W. A. Ben- jamin.

Christensen, D. (1991), ‘Clever Bookies and Dutch Strategies’, Philosophical Review 100, 229–47.

Churchland, P. (1979), Scientific Realism and the Plasticity of Mind, Cambridge Uni- versity Press.

Churchland, P. (1988), 'Perceptual Plasticity and Theoretical Neutrality: A Reply to Jerry Fodor', Philosophy of Science 55, 167–87.

Churchland, P. and C. A. Hooker (1985) (eds.), Images of Science, Chicago: The University of Chicago Press.

Collins, H. M. (1985), Changing Order: Replication and Induction in Scientific Practice, London: Sage.

Collins, H. M. and S. Yearley (1992), 'Epistemological Chicken', in A. Pickering (ed.), 301–26.

Craig, W. (1956), ‘On Axiomatizability within a System’, Philosophical Review 65, 38–55.

Curd, M. and J. A. Cover (eds.) (1998), Philosophy of Science: The Central Issues, New York: W. W. Norton and Company.

Cushing, J. T. (1982), 'Models and Methodologies in Current Theoretical High-Energy Physics', Synthese 50, 5–101.

Cushing, J. T. (1990), Theory Construction and Selection in Modern Physics. The S-Matrix, Cambridge University Press.

Cushing, J. T. (1994) Quantum Mechanics: Historical Contingency and the Copen- hagen Hegemony, Chicago, IL: University of Chicago Press.

Devitt, M. (1995), Coming to Our Senses: A Naturalistic Program for Semantic Localism, Cambridge: Cambridge University Press.

Devitt, M. (1997), Realism and Truth, 2nd edn. with a new afterword (1st edn. 1984, 2nd edn. 1991), Princeton: Princeton University Press.

Devitt, M. (1998), 'Naturalism and the a priori', Philosophical Studies 92, 45–65.

Devitt, M. (2001), ‘The Metaphysics of Truth’, in M. P. Lynch (ed.), 579–612.

Devitt, M. (2003), ‘Scientific Realism’, in F. Jackson and M. Smith (eds.), The Ox- ford Handbook of Contemporary , Oxford: Oxford University Press.

Devitt, M. and K. Sterelny (1999), Language and Reality. An Introduction to the , 2nd edn., Oxford: Blackwell Publishers.

Dowe, P. (2000), Physical Causation, Cambridge: Cambridge University Press.

Earman, J. (1992), Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory, Cambridge, Mass.: A Bradford Book, MIT Press.

Ellis, B. (1979), Rational Belief Systems, Oxford: Blackwell.

Ellis, B. (1985), 'What Science Aims to Do', in P. M. Churchland and C. A. Hooker (eds.), Images of Science, Chicago: University of Chicago Press. Reprinted in Papineau (ed.), 1996, 166–93.

Ellis, B. (1990), Truth and Objectivity, Oxford University Press.

Feigl, H. (1950), ‘Existential Hypotheses: Realistic versus Phenomenalistic Interpre- tations’, Philosophy of Science 17, 35–62.

Feigl, H., M. Scriven, and G. Maxwell (eds.) (1958), Minnesota Studies in the Philosophy of Science, vol. II: Concepts, Theories, and the Mind-Body Problem, Minneapolis: University of Minnesota Press.

Feigl, H. and G. Maxwell (eds.) (1962), Minnesota Studies in the Philosophy of Sci- ence, vol. III: Scientific Explanation, Space, and Time, Minneapolis: University of Minnesota Press.

Fine, A. (1984), ‘The Natural Ontological Attitude’, in J. Leplin (ed.), Scientific Realism, Berkeley: University of California Press.

Fine, A. (1986), The Shaky Game, Chicago: The University of Chicago Press.

Fine, A. (1986a), 'Unnatural Attitudes: Realist and Instrumentalist Attachments to Science', Mind 95, 149–79.

Fine, A. (1991), ‘Piecemeal Realism’, Philosophical Studies 61, 79–96.

Fine, A. (1996), ‘Science Made Up: Constructivist Sociology of Scientific Knowledge’, in P. Galison and D. Stump (eds.), 231–54.

Fodor, J. (1983), The Modularity of Mind, Cambridge, Mass.: The MIT Press.

Fodor, J. (1984), ‘Observation Reconsidered’, Philosophy of Science 51, 23–43.

Fodor, J. (1988), ‘A Reply to Churchland’s “Perceptual Plasticity and Theoretical Neutrality”’, Philosophy of Science 55, 188–98.

Fodor, J. and E. LePore (1992), Holism: A Shopper's Guide, Oxford: Blackwell.

Forman, P. (1971), 'Weimar Culture, Causality and Quantum Theory, 1918–27: Adaptation by German Physicists and Mathematicians to a Hostile Intellectual Environment', Historical Studies in the Physical Sciences 3, 1–115.

Foss, J. (1984), 'On Accepting van Fraassen's Image of Science', Philosophy of Science 51, 79–92.

Friedman, M. (1982), book review: 'Bas van Fraassen, The Scientific Image', Journal of Philosophy 79, 274–83.

Galison, P. (1987), How Experiments End, University of Chicago Press.

Galison, P. and D. Stump (eds.) (1996), The Disunity of Science: Boundaries, Contexts, and Power, Stanford: Stanford University Press.

George, A. (ed.) (1989), Reflections on Chomsky, Oxford: Blackwell.

Giere, R. (1988), Explaining Science: A Cognitive Approach, Chicago: University of Chicago Press.

Giere, R. (1999), Science Without Laws, University of Chicago Press.

Gillies, D. (1998), 'The Duhem Thesis and the Quine Thesis', in M. Curd and J. A. Cover (eds.), 302–17. Reprinted from D. Gillies, Philosophy of Science in the Twentieth Century, Oxford: Blackwell Publishers.

Gilman, D. (1992), ‘What’s a Theory to do with ... Seeing? or Some Empirical Considerations for Observation and Theory’, British Journal for the Philosophy of Science 43, 287–309.

Gilman, D. (1994), ‘Pictures in Cognition’, Erkenntnis 41, 87–102.

Goldman, A. (1986), Epistemology and Cognition, Cambridge: Harvard University Press.

Goldberger, M. (1955), 'Causality Conditions and Dispersion Relations. Boson Fields', Physical Review 99, 978–85.

Goodman, N. (1978), Ways of Worldmaking, Indianapolis: Hackett Publishing Company.

Goodman, N. (1996), 'Notes on the Well Made World', in P. J. McCormick (ed.).

Haack, S. (1996), ‘Reflections on Relativism: From Momentous Tautology to Seductive Contradiction’, in J. Tomberlin (ed.).

Hacking, I. (1983), Representing and Intervening. Introductory Topics in the Philosophy of Natural Science, Cambridge University Press.

Hacking, I. (1988), ‘The Participant Irrealist at Large in the Laboratory’, British Journal for the Philosophy of Science 39, 277–94.

Hacking, I. (1999), The Social Construction of What?, Cambridge, MA: Harvard University Press.

Hanson, N. R. (1961), Patterns of Discovery, Cambridge University Press.

Hardin, C. L. and A. Rosenberg (1981), ‘In Defense of Convergent Realism’, Philoso- phy of Science 49, 604–15.

Hargreaves Heap, S. P. and Y. Varoufakis (1995), Game Theory. A Critical Introduction, London and New York: Routledge.

Harré, R. and M. Krausz (1993), Varieties of Relativism, Oxford: Blackwell.

Heal, J. (1990), Fact and Meaning: Quine and Wittgenstein on the Philosophy of Language, Oxford: Basil Blackwell.

Hempel, C. G. (1958), ‘The Theoretician’s Dilemma: A Study in the Logic of Theory Construction’, in H. Feigl, M. Scriven, and G. Maxwell (eds.).

Hempel, C. G. (1996), 'Comments on Goodman's Ways of Worldmaking', in P. McCormick (ed.).

Hesse, M. (1967), 'Laws and Theories', in P. Edwards (ed.), The Encyclopedia of Philosophy, vol. 4, 404–10, New York: Macmillan.

Hobbs, J. (1994), ‘A Limited Defence of Pessimistic Induction’, British Journal for the Philosophy of Science 45, 171–91.

Hoefer, C. and A. Rosenberg (1994), ‘Empirical Equivalence, Underdetermination, and Systems of the World’, Philosophy of Science 61, 592–607.

Hooker, C. A. (1974), ‘Systematic Realism’, Synthese 26, 409–97.

Horwich, P. (1991), 'On the Nature and Norms of Theoretical Commitment', Philosophy of Science 58, 1–14.

Horwich, P. (2001), 'A Defense of Minimalism', in M. P. Lynch (ed.), 559–78.

Jackson, F. (1998), From Metaphysics to Ethics. A Defence of Conceptual Analysis, Oxford: Clarendon Press.

Kitcher, P. (1985), 'Two Approaches to Explanation', Journal of Philosophy 82, 632–9.

Kitcher, P. (1996), The Advancement of Science: Science Without Legend, Objectivity Without Illusions, Oxford: Oxford University Press.

Kripke, S. (1982), Wittgenstein on Rules and Private Language, Oxford: Basil Blackwell.

Kuhn, T. (1979), The Structure of Scientific Revolutions, Chicago: University of Chicago Press.

Kukla, A. (1994), 'Non-Empirical Theoretical Virtues and the Argument from Underdetermination', Erkenntnis 41, 157–70.

Kukla, A. (1996), 'Does Every Theory have Empirically Equivalent Rivals?', Erkenntnis 44, 137–66.

Kukla, A. (1998), Studies in Scientific Realism, Oxford University Press.

Kukla, A. (2000a), 'Theoreticity, Underdetermination, and the Disregard for Bizarre Scientific Hypotheses', Philosophy of Science 68, 21–35.

Kukla, A. (2000b), Social Constructivism and the Philosophy of Science, London and New York: Routledge.

Kvanvig, J. L. (1994), 'A Critique of van Fraassen's Voluntaristic Epistemology', Synthese 98, 325–48.

Latour, B. and S. Woolgar (1986), Laboratory Life: The Social Construction of Scientific Facts, 2nd edn., London: Sage.

Latour, B. (1987), Science in Action, Cambridge, MA: Harvard University Press.

Laudan, L. (1981), ‘A Confutation of Convergent Realism’, Philosophy of Science 48, 19–49. Reprinted in J. Leplin (ed.) (1984), Scientific Realism, University of California Press.

Laudan, L. (1984), Science and Values, Berkeley: University of California Press.

Laudan, L. (1996), Beyond Positivism and Relativism: Theory, Method and Evidence, Boulder, CO: Westview Press.

Laudan, L. and J. Leplin (1991), 'Empirical Equivalence and Underdetermination', The Journal of Philosophy 88, 448–72. Reprinted in L. Laudan (1996).

Leplin, J. (ed.) (1984), Scientific Realism, Berkeley: University of California Press.

Leplin, J. (1997), A Novel Defense of Scientific Realism, Oxford University Press.

Leplin, J. (2000), ‘Realism and Instrumentalism’, in W. H. Newton-Smith (ed.) (2000).

Lewis, D. (1986), Philosophical Papers, vol. II, Oxford University Press.

Lipton, P. (1993), ‘Is the Best Good Enough?’, Proceedings of the Aristotelian Society 93/2, 89–104.

Lynch, M. P. (ed.) (2001), The Nature of Truth, Cambridge, Mass.: A Bradford Book, MIT Press.

Margenau, H. (1950), The Nature of Physical Reality, New York: McGraw-Hill Book Co.

Maxwell, G. (1962), ‘The Ontological Status of Theoretical Entities’, in Feigl and Maxwell (eds.), vol. III.

Maxwell, G. (1970), 'Theories, Perception, and Structural Realism', in R. Colodny (ed.), The Nature and Function of Scientific Theories, Pittsburgh: University of Pittsburgh Press.

McAllister, J. W. (1993), ‘Scientific Realism and the Criteria for Theory-Choice’, Erkenntnis 38, 203–22.

McCormick, P. J. (ed.) (1996), Starmaking. Realism, Antirealism, Irrealism, Cambridge, MA: MIT Press.

McDowell, J. (1984), 'Wittgenstein on Following a Rule', Synthese 58, 325–63.

McMichael, A. (1985), 'Van Fraassen's Instrumentalism', British Journal for the Philosophy of Science 36, 257–72.

Miller, A. (1998), Philosophy of Language, London: University College London Press.

Miller, D. (1976), 'Verisimilitude Redeflated', British Journal for the Philosophy of Science 27, 363–80.

Misner, C. W. (1977), 'Cosmology and Theology', in W. Yourgrau and A. D. Breck (eds.).

Musgrave, A. (1985), 'Realism versus Constructive Empiricism', in Churchland and Hooker (eds.).

Nagel, E. (1961), The Structure of Science, New York: Harcourt, Brace & World.

Nelson, A. (1994), ‘How Could Facts be Causally Constructed?’, Studies in History and Philosophy of Science 25, 535–47.

Newton-Smith, W. H. (1978), 'The Underdetermination of Theory by Data', Proceedings of the Aristotelian Society 52, 71–91.

Newton-Smith, W. H. (2000), ‘Underdetermination of Theory by Data’, in W. H. Newton-Smith (ed.), A Companion to the Philosophy of Science, Blackwell Pub- lishers.

Niiniluoto, I. (1991), 'Realism, Relativism, and Constructivism', Synthese 89, 135–62.

Niiniluoto, I. (1999), Critical Scientific Realism, Oxford University Press.

Nye, M. J. (1976), ‘The Nineteenth-Century Atomic Debates and the Dilemma of an “Indifferent Hypothesis”’, Studies in History and Philosophy of Science 7, 245–68.

Oddie, G. (1986), Likeness to Truth, Dordrecht: Reidel Publishing Company.

Papineau, D. (1996) (ed.), The Philosophy of Science, Oxford University Press.

Perrin, J.-B. (1913), Les Atomes, Paris: Alcan.

Pickering, A. (1984), Constructing Quarks: A Sociological Study of Particle Physics, Edinburgh University Press.

Pickering, A. (ed.) (1992), Science as Practice and Culture, Chicago and London: The University of Chicago Press.

Pollock, J. L. and J. Cruz (1999), Contemporary Theories of Knowledge, 2nd edn, Rowman and Littlefield.

Popper, K. (1963), Conjectures and Refutations. The Growth of Scientific Knowledge, 3rd rev. edn., London: Hutchinson.

Popper, K. (1972), Objective Knowledge: An Evolutionary Approach, 2nd enlarged edn., Oxford: Oxford University Press.

Psillos, S. (1999), Scientific Realism. How Science Tracks Truth, London and New York: Routledge.

Psillos, S. (2002), Causation and Explanation, Acumen.

Putnam, H. (1975), Philosophical Papers, vol. 1, Mathematics, Matter and Method, Cambridge: Cambridge University Press.

Putnam, H. (1978), Meaning and the Moral Sciences, London: Routledge and Kegan Paul.

Quine, W. V. O. (1961), ‘Two Dogmas of Empiricism’, From a Logical Point of View, New York: Harper and Row.

Quine, W. V. O. (1975), 'The Nature of Natural Knowledge', in S. Guttenplan (ed.), Mind and Language, Oxford: Clarendon Press, 67–81.

Quine, W. V. O. (1981), Theories and Things, Cambridge, MA: Harvard University Press.

Reichenbach, H. (1928), Philosophie der Raum-Zeit-Lehre, Berlin: W. de Gruyter & Co.

Rescher, N. (1987), Scientific Realism. A Critical Reappraisal, D. Reidel Publishing Company.

Rosenberg, A. (1988), Philosophy of Social Science, Oxford: Clarendon Press.

Salmon, W. (1984), Scientific Explanation and the Causal Structure of the World, Princeton University Press.

Salmon, W. (1997a), Causality and Explanation, New York: Oxford University Press.

Salmon, W. (1997b), ‘Causality and Explanation: A Reply to Two Critics’, Philosophy of Science 64, 461–77.

Searle, J. (1995), The Construction of Social Reality, Penguin Press.

Siegel, H. (1987), Relativism Refuted: A Critique of Contemporary Epistemological Relativism, Dordrecht: Reidel.

Smart, J. J. C. (1963), Philosophy and Scientific Realism, London: Routledge and Kegan Paul.

Smith, B. C. (1998), 'Meaning and Rule-Following', in E. Craig (ed.), Routledge Encyclopedia of Philosophy, vol. 6, 214–9.

Spohn, W. (1981), 'Analogy and Inductive Knowledge: A Note on Niiniluoto', Erkenntnis 16, 35–52.

Stich, S. (1998), 'Reflective Equilibrium, Analytic Epistemology, and the Problem of Cognitive Diversity', in M. DePaul and W. Ramsey (eds.), Rethinking Intuition, Rowman and Littlefield, 95–112.

Suppe, F. (1989), The Semantic Conception of Theories and Scientific Realism, Urbana, IL: University of Illinois Press.

Tomberlin, J. (ed.) (1996), Philosophical Perspectives, X: Metaphysics, Oxford: Black- well.

Tversky, A. (1977), ‘Features of Similarity’, Psychological Review 84, 327–52.

Urbach, P. (1983), 'Intimations of Similarity: The Shaky Basis of Verisimilitude', British Journal for the Philosophy of Science 34, 266–75.

van Fraassen, B. (1980), The Scientific Image, Oxford: Clarendon Press.

van Fraassen, B. (1984), 'Belief and the Will', The Journal of Philosophy LXXXI (5), 135–56.

van Fraassen, B. (1985), 'Empiricism and the Philosophy of Science', in P. Churchland and C. A. Hooker (eds.).

van Fraassen, B. (1989), Laws and Symmetry, Oxford: Clarendon Press.

van Fraassen, B. (1995), 'Belief and the Problem of Ulysses and the Sirens', Philosophical Studies 77, 7–37.

Wittig, M. (1992), The Straight Mind and Other Essays, Boston: Beacon Press.

Worrall, J. (1989), ‘Structural Realism: The Best of Both Worlds’, Dialectica 43, 99–124.

Wright, C. (1984), ‘Kripke’s Account of the Argument Against the Private Language’, Journal of Philosophy 81, 759–77.

Wright, C. (1989), ‘Wittgenstein’s Rule-following Considerations and the Central Project of Theoretical Linguistics’, in A. George (ed.).

Wright, C. (1993), Realism, Meaning and Truth, 2nd edn., Oxford: Blackwell.

Yourgrau, W. and A. D. Breck (eds.) (1977), Cosmology, History, and Theology, New York: Plenum Press.

Zahar, E. (1973), ‘Why Did Einstein’s Programme Supersede Lorentz’s?’, British Journal for the Philosophy of Science 24, 95–123.

Zalabardo, J. (1995), 'A Problem for Information Theoretic Semantics', Synthese 105, 1–29.
