NOTES NOTES NOTES NOTES NOTES NOTES NOTES NOTES

Constructive emergence and the reuniting of the sciences: a computer scientist looks at philosophy

Russ Abbott

Department of Computer Science, California State University, Los Angeles, California [email protected]

1. Introduction

For a number of years I have written papers about emergence. (See Abbott 2006, 2007, …) The perspective I have taken is that of a computer scientist. I believe that these papers resolve the issue of emergence. Yet in philosophy emergence is still taken as a mystery. As recently as April 2008, for example, Bedau and Humphreys (2008) published a volume of collected articles about emergence. The introduction includes the following.

Emergence relates to phenomena that arise from and depend on some more basic phenomena yet are simultaneously autonomous from that base. The topic of emergence is fascinating and controversial in part because emergence seems to be widespread and yet the very idea of emergence seems opaque, and perhaps even incoherent.1

As suggested above, emergence is related to the notion that higher level phenomena can be “autonomous” from the underlying phenomena on which they are based. This certainly is not a new idea in philosophy. For more than three decades, functionalist philosophers—such as Putnam (1975) and Fodor (1974)—have argued for the autonomy of both what are called the special sciences (all science other than physics) and the regularities they explain. Here’s Fodor (1998) in a widely quoted passage.

The very existence of the special sciences testifies to reliable macro-level regularities … Damn near everything we know about the world suggests that unimaginably complicated to-ings and fro-ings of bits and pieces at the extreme micro-level manage somehow to converge on stable macro-level properties. …

1 The “Emergence microsite” website (http://mitpress.mit.edu/emergence/) associated with the book, which at one point was promoted as having the latest developments, does not indicate that this sense of mystery has dissolved.

So higher level regularities exist. But claiming to speak for Kim (1992)—and apparently also for himself—Fodor continues (somewhat paraphrased) as follows.

The “somehow” really is entirely mysterious. Why should there be (how could there be) macro-level regularities at all in a world where, by common consent, macro-level stabilities have to supervene on a buzzing, blooming confusion of micro-level interactions?

I’m puzzled by Fodor’s bafflement. A fundamental principle of functionalism is that higher level regularities are realized by lower level phenomena—perhaps even by buzzing, blooming micro-level phenomena—and some even in multiple ways. So why is it mysterious that some higher level phenomena have actually been realized? But Fodor goes on to insist that he doesn’t know why higher level regularities exist.

So, then, why is there anything except physics? … Well, I admit that I don’t know why. I don’t even know how to think about why. I expect to figure out why there is anything except physics the day before I figure out why there is anything at all.

In our role as software developers, computer scientists produce emergence as part of our job. Every software application is the implementation of emergent phenomena, which we refer to by terms such as level of abstraction, specification, API, etc. Microsoft Word, for example, implements abstractions such as paragraphs, words, fonts, pages, documents, etc. These concepts and the rules that constrain them are, it would seem, autonomous from the rules that constrain the underlying computer. Depending on the level on which one wants to focus, the computer rules have to do with logic gates, or machine instructions, or programming language constructs. None of those levels has anything to do with documents divided up into paragraphs. Software development may be difficult; it may be challenging to make the results come out right; but there is no mystery about it.
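The layered implementation story above can be sketched concretely. The following toy is mine, with hypothetical names throughout; it is nothing like Word's actual implementation. It builds a paragraph/word level on top of a flat character buffer, and the higher level rules never mention how the buffer stores anything.

```python
# A toy sketch of two levels of abstraction (all names are hypothetical;
# this is not how Word is actually implemented): a flat character buffer
# below, a paragraph/word view above. The higher level rules never
# mention how the buffer stores characters.

class CharBuffer:
    """Lower level: knows nothing about words or paragraphs."""
    def __init__(self, text=""):
        self.chars = list(text)        # just a sequence of characters

    def insert(self, pos, s):
        self.chars[pos:pos] = list(s)

    def text(self):
        return "".join(self.chars)

class Document:
    """Higher level: paragraphs and words, implemented on the buffer."""
    def __init__(self, buffer):
        self.buffer = buffer

    def paragraphs(self):
        return self.buffer.text().split("\n\n")

    def word_count(self):
        return sum(len(p.split()) for p in self.paragraphs())

doc = Document(CharBuffer("Hello world.\n\nSecond paragraph here."))
assert len(doc.paragraphs()) == 2      # a fact at the document level
assert doc.word_count() == 5
```

Nothing about `paragraphs` or `word_count` appears at the buffer level, yet everything they do is carried out by buffer operations.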
We know how to turn logic gate operations into Microsoft Word abstractions.

The example of emergence I like is a Turing machine implemented within the Game of Life. Turing machines compute functions and are constrained by computability theory. Neither of these intellectual domains is derived from the rules that describe how Game of Life transitions occur. They are, it would seem, autonomous. Yet Game of Life Turing machines operate only as a direct consequence of Game of Life rules. Nothing happens on a Game of Life grid other than that cells go on and off as a consequence of the Game of Life rules. So a Game of Life Turing machine would seem to be an easily understandable and paradigmatic example of emergence.

It is not news in philosophical circles that Turing machines can be implemented within a Game of Life universe. Dennett (1991) pointed this out in a widely cited paper published more than a decade and a half ago. So why are so many philosophers still so baffled by emergence?

In attempting to understand this phenomenon, I have examined some of the philosophical literature on emergence and related topics. (I looked at what I was able to determine to be the relevant papers. See the bibliography for some of them.) In doing so I have run across a number of concepts and terms that appear to be used differently by computer scientists and philosophers. These include “emergence” itself as well as “reduction,” “autonomous,” “natural kind,” “causality,” and “supervenience.” In this paper, I will examine some of these differences. Here are a few brief observations.

Reduction and autonomy. Computability theory is autonomous from the Game of Life rules, but it isn’t in conflict with them. Computability theory has nothing to do with the Game of Life rules. It is a theory that can be developed de novo. Assuming the Game of Life rules does not preclude the possibility of computability theory. So if one is looking for higher level theories to be dependent on lower level theories, computability theory is not dependent on the Game of Life. The real issue is whether the Game of Life rules are powerful enough to implement Turing machines, not whether the constraints that apply to Turing machines as Turing machines depend on the Game of Life rules. The way a number of papers put it, emergence reflects the impossibility of deriving high level rules from low level rules. Computability theory can, trivially, be derived from the Game of Life rules, since it can be derived independently of the Game of Life. But that doesn’t seem to be what the philosophical perspective demands.

Some recent work on reduction recognizes the difference between reduction as theory derivation and reduction as mechanism. This is moving in a positive direction. My surprise is that this seems like a new insight to philosophers. The impression I get is that philosophers seem to think in terms of predicates that are true or false rather than in terms of mechanisms and operations.

Supervenience.
Supervenience is the idea that higher level properties can’t vary unless the lower level properties on which they are built also vary. For example, if two objects are identical with respect to their lower level properties, i.e., if they are molecule-for-molecule identical, then they can’t differ with respect to any higher level property, e.g., one can’t be heavier than the other. That seems like a solid idea. The computer science notion of functional dependency is the same idea. One can think of higher level properties as being functionally dependent on lower level properties.

But philosophers seem to ignore aspects of supervenience arbitrarily. For example, basic supervenience ignores time and space. It is impossible for two distinct objects to be identical with respect to all their properties. After all, if they are distinct they are not at the same physical location. Also, unless two distinct but otherwise identical (in some sense) objects were created at exactly the same instant, one will be older than the other. A favorite example along these lines has to do with how people treat objects. An exact copy of the Mona Lisa will be worth less than the original—because it isn’t the original. (How could one tell?) To get around this sort of problem, philosophers invent what they call global supervenience: if two worlds are identical with respect to their lower level properties, then they are identical with respect to all their properties. This just seems to confuse the issue.
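The functional-dependency reading of supervenience can be put directly in code. This is a minimal sketch with made-up illustrative values: a higher level property (mass) is a pure function of the lower level state, so two objects identical at the lower level cannot differ in it.

```python
# A sketch of supervenience as functional dependency (molecular masses
# are approximate and purely illustrative): a higher level property is a
# pure function of the lower level state, so two objects identical at
# the lower level cannot differ at the higher level.

MOLECULAR_MASS = {"H2O": 18.015, "CO2": 44.009}    # grams per mole

def mass(molecules):
    """Higher level property, functionally dependent on the parts."""
    return sum(MOLECULAR_MASS[m] for m in molecules)

a = ["H2O", "H2O", "CO2"]
b = ["H2O", "H2O", "CO2"]     # molecule-for-molecule identical to a

assert mass(a) == mass(b)     # identical bases force identical properties
```

Because `mass` is a pure function of its argument, there is no way for the higher level property to vary without a lower level variation, which is exactly the supervenience claim.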


Also, supervenience misses an important aspect of the actual world. Most things don’t supervene on any fixed set of lower level things. A person, for example, is always varying with respect to the molecules that make him up. So the notion of supervenience, which seems at first blush so useful in relating higher and lower level properties, becomes relatively useless. People persist in time even though their supervenience bases don’t remain the same over time. That’s one of the most important aspects of certain classes of entities. Yet counting on supervenience to organize one’s world makes it very difficult to see that.
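The point about changing supervenience bases can also be sketched. In this toy (class and attribute names are mine, for illustration only), the entity keeps its higher level identity while the set of parts it supervenes on turns over completely.

```python
# A sketch of an entity persisting while the set of lower level parts it
# supervenes on turns over completely (all names are hypothetical).

class Person:
    def __init__(self, name, molecules):
        self.name = name
        self.molecules = set(molecules)

    def metabolize(self, lost, gained):
        """Swap out part of the supervenience base; the person persists."""
        self.molecules = (self.molecules - set(lost)) | set(gained)

p = Person("Alice", {"m1", "m2", "m3"})
original = set(p.molecules)
p.metabolize(lost={"m1", "m2", "m3"}, gained={"m4", "m5", "m6"})

assert p.name == "Alice"                   # same higher level entity
assert p.molecules.isdisjoint(original)    # entirely new lower level base
```

At no moment does the person fail to supervene on some set of molecules; there is just no fixed set on which he supervenes over time.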

When Microsoft Word executes—as it is doing as I write this—the transformation operations that my keystrokes and mouse movements cause to be performed on my document are built from statements in the programming language used in the implementation, which use instructions at the computer hardware level, which use logic operations performed by gates, which use still lower level phenomena down to the quantum level. Looked at bottom-up, each level of abstraction is built upon—has been made to emerge from—lower levels of abstraction by engineers and programmers. (For an extended discussion of the relationship between levels of abstraction and reductionism, see Abbott, 2008b.) All of this is standard practice in computer science.

Causality. Philosophers worry about causality in ways that no one else does. Causality is important in this area because if both higher and lower level phenomena are causes, then the result is over-determined, which is unacceptable. But neither scientists nor computer scientists seem to worry about causality in the way philosophers do. A standard philosophical puzzle about causality postulates a situation in which two people shoot a third at the same time. Which shot was the cause if either (or both) were sufficient to cause the victim’s death? Situations like that can come up in software—there may be two faults in the software, each of which could cause the software to fail. There is no need to point to one of them as “the cause” of a software failure.
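The two-faults situation can be made concrete. In this sketch (the names and the boolean model are mine), either fault alone is sufficient for failure, so the counterfactual test, that had this fault been absent the failure would not have occurred, singles out neither. Nothing about the software is thereby mysterious.

```python
# A sketch of two faults, each individually sufficient to make a run
# fail. The counterfactual test for causation picks out neither fault
# as "the" cause, yet the situation is perfectly well understood.

def run(fault_a_present, fault_b_present):
    # The run succeeds only if neither fault is present.
    return not (fault_a_present or fault_b_present)

actual = run(True, True)        # both faults present: failure
without_a = run(False, True)    # remove fault A alone: still fails
without_b = run(True, False)    # remove fault B alone: still fails

assert actual is False
assert without_a is False and without_b is False   # neither counterfactual flips the outcome
```

A programmer would simply say the run is over-determined and fix both faults; no metaphysical puzzle arises.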

Terms that aren’t well defined: emergence, reduction, autonomous, natural kind, supervenience. Interactions at higher levels are definitely dependent on lower levels, e.g., GoL pattern interactions, gecko adhesion, evolutionary arms races. Two people shoot a third person at exactly the same time. Is the death of the third person over-determined? Which of the two shots was the cause? If both, this is inconsistent with the notion of cause, which includes the counterfactual requirement that if the cause hadn’t occurred the effect would not have occurred. But we have such a poor understanding of cause anyway that this seems like something not worth spending time on.


Isn’t it as over-determined as a single bullet piercing two vital organs or two vital brain centers? Philosophers seem to assume that everything that happens in the world fits into one of our current ways of describing things. If we can describe things, so much the better, but if we can’t, we shouldn’t spend so much time running around in circles. I find myself in a swamp of meanings and shadings. It’s ok to have various meanings, but let’s just settle on a useful one and move on.

Is Fodor asking something like why are there human beings rather than just bacteria, or chemicals, or elementary physical particles? Other than creationists or intelligent design believers, no one considers this a mystery. We understand much of how it happened. Put a bunch of elementary particles together at low enough temperatures, and one gets atoms and then molecules. The step from there to biological cells and organisms is still cloudy, but we are close to understanding it: we will probably be able to create a biological cell from “inert” chemicals within the next quarter century. Evolution and economics—which is really a form of evolution—explain the rest. So why does Fodor call it molto mysterioso?

Or perhaps he is asking how higher level entities can persist in a world of buzzing, blooming micro-level phenomena. There are two cases. (I discuss this in more detail in the Section on Entities.)

1. Static entities, those that are in physical equilibrium (examples include atoms, molecules, and solar systems), persist because they are in lower energy states than their components are separately. A hydrogen atom—consisting of a proton and an electron—is in a lower energy state, i.e., an energy well, than a proton and an electron not bound together. Natural processes tend toward minimal energy states—hence the creation (i.e., emergence) of static entities.

2. Dynamic entities, those that require the continual importation of energy to persist (examples include biological organisms and social systems), persist when (a) they include processes that make them self-maintaining when supplied with the appropriate energy and materials and (b) their way of functioning in the world provides them with the energy and materials they need. The emergence of these entities is typically the result of evolutionary (and economic) processes.

So we know why and how higher level entities emerge from lower level entities and how they persist. Where is the mystery?

How things look from the perspective of a computer scientist

Does the preceding sound naïve? Am I minimizing the intellectual difficulties? Computer science studies the conceptualization and implementation of abstractions. Day after day we build higher level regularities—we call them abstractions—from lower level regularities. (See “Abstraction Abstracted” (2008).) All you need is a programmer—or nature as a blind programmer—and abstractions will emerge.

Not only do functionalist philosophers understand this computer science-based approach to abstraction, they used it as the foundation of functionalism. Putnam (1960) was one of the early philosophers to introduce computational thinking to philosophy. It has since served as one of the primary foundations of functionalism.

[Two] systems can have quite different constitutions [e.g. they might be made of copper, cheese, or soul] and be functionally isomorphic.

Are Bedau and Humphreys really saying that the fundamental idea of functionalism and the standard practice of software developers—that higher level abstractions/regularities may be implemented by lower level elements and processes—is “opaque and perhaps even incoherent?” Does anyone really doubt that higher level abstractions can be implemented by lower level elements and processes?

Perhaps one reason that this sense of mystery persists is that Fodor (1997, endnote 5) argues that Dennett was right to have dismissed the sort of emergence that software developers produce.

When macrokinds are metaphysically identical to microkinds, laws about the latter imply laws about the former; likewise when macroregularities are logical or mathematical constructions as in microregularities, as in the “Game of Life,” described by Dennett (1991). Pace Dennett, such cases do not illuminate (what functionalists take to be) the metaphysical situation in the special sciences. To repeat: autonomy implies ‘real’ (viz projectible) patterns without reduction.

[I was unable to find where Dennett comes to this conclusion. But Fodor apparently does.] One of the primary examples I use below involves macro-regularities built within the Game of Life. I don’t agree with Fodor’s claim that laws about the Game of Life imply laws about constructs built within the Game of Life. Later in this paper I explain what I think is the significance of the fact that one can implement a Turing machine by organizing Game of Life patterns.
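For concreteness, here is the cell-level machinery in a few lines of Python. This is of course not a Turing machine construction, just the standard Game of Life rules themselves; the point is that a pattern-level regularity (the blinker's period-2 oscillation) is a direct consequence of nothing but those rules.

```python
# A minimal Game of Life step function (the standard birth/survival
# rules). The pattern-level regularity below, the blinker's period-2
# oscillation, follows from nothing but the rules that turn individual
# cells on and off.

from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}     # three live cells in a row
vertical = step(blinker)               # becomes a vertical row of three
assert vertical == {(1, -1), (1, 0), (1, 1)}
assert step(vertical) == blinker       # a pattern-level law: period 2
```

The assertion `step(step(blinker)) == blinker` is a law about a pattern, stated in a vocabulary (blinkers, periods) that appears nowhere in the cell-level rules.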

The real question: reductionism

I know. There’s more to this issue than the building of abstractions. In my somewhat feigned puzzlement over Fodor, Bedau, and Humphreys’ claimed uncertainty about emergence, I deliberately avoided what is probably the central issue: reductionism. Fodor’s concern is not with just any higher level properties but with autonomous predicates, where “autonomy implies ‘real’ (viz projectible) patterns without reduction.” The problem, of course, is what Fodor means by without reduction. One of Fodor’s (1974) examples of a putatively autonomous property is Gresham’s Law.

The reason it is unlikely that every natural kind corresponds to a physical natural kind is just that (a) interesting generalizations (e.g., counter-factual supporting generalizations) can often be made about events whose physical descriptions have nothing in common, (b) it is often the case that whether the physical descriptions of the events subsumed by these generalizations have anything in common is, in an obvious sense, entirely irrelevant to the truth of the generalizations, or to their interestingness, or to their degree of confirmation or, indeed, to any of their epistemologically important properties, and (c) the special sciences are very much in the business of making generalizations of this kind.

I take it that these remarks are obvious to the point of self-certification; they leap to the eye as soon as one makes the (apparently radical) move of taking the special sciences at all seriously. Suppose, for example, that Gresham's 'law' really is true. (If one doesn't like Gresham's law, then any true generalization of any conceivable future economics will probably do as well.) Gresham's law says something about what will happen in monetary exchanges under certain conditions. I am willing to believe that physics is general in the sense that it implies that any event which consists of a monetary exchange (hence any event which falls under Gresham's law) has a true description in the vocabulary of physics and in virtue of which it falls under the laws of physics. But banal considerations suggest that a description which covers all such events must be wildly disjunctive. Some monetary exchanges involve strings of wampum. Some involve dollar bills. And some involve signing one's name to a check. What are the chances that a disjunction of physical predicates which covers all these events (i.e., a disjunctive predicate which can form the right hand side of a bridge law of the form 'x is a monetary exchange ↔ … ') expresses a physical natural kind? In particular, what are the chances that such a predicate forms the antecedent or consequent of some proper law of physics?
The point is that monetary exchanges have interesting things in common; Gresham's law, if true, says what one of these interesting things is. But what is interesting about monetary exchanges is surely not their commonalities under physical description. A natural kind like a monetary exchange could turn out to be co-extensive with a physical natural kind; but if it did, that would be an accident on a cosmic scale. In fact, the situation for reductivism is still worse than the discussion thus far suggests. For, reductivism claims not only that all natural kinds are co-extensive with physical natural kinds, but that the co-extensions are nomologically necessary: bridge laws are laws. So, if Gresham's law is true, it follows that there is a (bridge) law of nature such that 'x is a monetary exchange ↔ x is P', where P is a term for a physical natural kind. But, surely, there is no such law. If there were, then P would have to cover not only all the systems of monetary exchange that there are, but also all the systems of monetary exchange that there could be; a law must succeed with the counterfactuals. What physical predicate is a candidate for 'P' in 'x is a nomologically possible monetary exchange iff Px'? To summarize: an immortal econophysicist might, when the whole show is over, find a predicate in physics that was, in brute fact, coextensive with 'is a monetary exchange'. If physics is general - if the ontological biases of reductivism are true - then there must be such a predicate.
But (a) to paraphrase a remark Donald Davidson made in a slightly different context, nothing but brute enumeration could convince us of this brute co-extensivity, and (b) there would seem to be no chance at all that the physical predicate employed in stating the coextensivity is a natural kind term, and (c) there is still less chance that the coextension would be lawful (i.e., that it would hold not only for the nomologically possible world that turned out to be real, but for any nomologically possible world at all). I take it that the preceding discussion strongly suggests that economics is not reducible to physics in the proprietary sense of reduction involved in claims for the unity of science.


There is, I suspect, nothing special about economics in this respect; the reasons why economics is unlikely to reduce to physics are paralleled by those which suggest that psychology is unlikely to reduce to neurology.

If there are higher level emergent entities, and if one can talk about “laws” to which these higher level entities conform, doesn’t that produce a redundancy between that level of description and the level of description that occurs when we use the language of fundamental physics to describe the lowest level phenomena on which the higher level entities supervene? If, for example, I say that a word disappeared from my Microsoft Word document because (a) I selected it (by double clicking on it) and then (b) I pressed the delete key, how does that description relate to the description given at the level of bits and electrons? Schouten and Looren de Jong (2007) summarize the problem as follows.

If a higher level explanation can be related to physical processes, it becomes redundant since the explanatory work can be done by physics.

So is my higher level description just a manner of speaking—a shortcut or convenience for the real action that is taking place at a far lower level? If that’s Fodor’s question, it comes to this. Unless one thinks of nature as causally redundant, how can there be both higher and lower level explanations of high-level phenomena?

Kim (1998), the primary emergence spoilsport, sharpened this issue when he argued that the question is not whether some higher level property is implementable (or realizable) by lower level mechanisms but whether it is what he calls theoretically predictable and reductively explainable.

What is being denied by emergentists is the theoretical predictability of [emergent property] E on the basis of [the microstructural properties of a system] M: we may know all that can be known about M – in particular, laws that govern the entities, properties and relations constitutive of M – but this knowledge does not suffice to yield a prediction of E. This unpredictability may be the result of our not even having the concept of E, this concept lying entirely outside the concepts in which our theory of M is couched.
… [A closely related issue] is the doctrine that the emergence of emergent properties cannot be explained on the basis of the underlying processes, and that emergent properties are not reducible to the basal conditions from which they emerge.

Recently Boogerd et al. (2005) echoed this position.

The central question then is … whether there are properties of systems which cannot be “deduced” from the behavior of their parts, together with a “complete knowledge” of the arrangement of the system’s parts and the properties they have in isolation or in other simpler systems. Properties that are not deducible in this way we call strongly emergent properties.

As a computer scientist, I don’t see why predictability matters—or perhaps I don’t understand what philosophers mean by it. Would one say that Microsoft Word is predictable from quantum mechanics or even from logic gates? Is it reductively explainable from those bases?


What is an autonomous law or theory?

Fodor (1997):

I will say that a law or theory that figures in bona fide empirical explanations, but that is not reducible to a law or theory of physics, is ipso facto autonomous; and that the states whose behavior such laws or theories specify are functional states.

How can one predict a face from the clay from which it is molded?

In order to predict higher level properties from lower level properties, there must be a way of mapping one to the other. But I don’t see how that works. I like the following analogy. (I’ll provide a more formal example later, but this analogy serves, as Dennett likes to say, as an intuition pump.) When a sculptor molds a face from clay, is she really mapping properties of the clay onto properties of the face? I don’t think so. She is shaping the clay to embody features of the face, but she is not mapping properties of the clay, i.e., its molecular structure, onto facial attributes and characteristics. (It is even harder for me to see what this might mean at a functional level.) Clay is fairly homogeneous, its properties the same throughout. Which property would one say maps to wrinkles and which to eyes?

The distinction between mapping properties and molding a shape may seem subtle, but I think it’s important. It’s certainly true that to use clay to model a face, the clay must have certain properties, including the ability to be molded and to hold a shape. And these properties certainly depend on its molecular structure. But in using clay to model a face one is not mapping clay’s molecular structural properties to structural properties of the face. One is exploiting those structural properties to create a shape that resembles the face. There is no useful mapping from properties of clay molecules to facial features.

To use Kim’s terms, I doubt that anyone would describe as theoretically predictable the process whereby a sculptor causes a face to appear by working a block of unformed clay. Clay can be shaped into all sorts of things. Why would one predict a face, much less a particular face? Or more to the point, how would one predict (or, as Boogerd et al. say, deduce) the features and characteristics of a particular face from the properties of clay? To be sure, there are limits to what one can do with clay.
One might describe some of the limitations that clay imposes on the features and characteristics that one could model. For example, it may not be possible to model Pinocchio’s face if his nose is so long that the clay cannot support it. But that’s not the same thing as saying that one can deduce the actual features and characteristics of a face by examining the molecular structure of clay. Of course, it’s possible that I don’t understand what is meant by being able to predict or deduce higher level properties from lower level properties.

What about reductive explainability? If a face has been molded from clay, are its features reducible to the properties of the clay? Certainly, once the face exists, one can explain how the clay has been used to create those features. This clay molecule is at this position in the cheek; that one at that; the whole thing holds together because of the shape-retaining properties of clay; etc. Is that sufficient? Is that reduction? The “basal conditions” of the clay don’t seem to me to “explain” the features of the face—at least not in any useful way. But then what would one be looking for as an explanation of the face? In what terms would one want a face to be explained by properties of clay? What might an explanation consist of? I discuss this issue further below.

The problem of new concepts

Kim also raises the question of

what emergent properties, after having emerged, can do – that is, how they are able to make their special contributions to the ongoing processes of the world. It is obviously very important to the emergentists that emergent properties can be active participants in causal processes involving the systems they characterize. … We may, therefore, set forth the following as the fifth doctrine of emergentism: The causal efficacy of the emergents: Emergent properties have causal powers of their own – novel causal powers irreducible to the causal powers of their basal constituents.

As Kim then argues, granting emergent properties causal efficacy leads to all sorts of scientifically unacceptable conclusions such as downward causation.

Yet one could imagine face recognition software (I’m making it software to finesse the problem of talking about how humans recognize faces) that would identify a face in the clay as being the likeness of one person rather than another. Does one want to grant the face any sort of causal power in that case? To frame the question in causal terms, imagine (the image of) the face as operating on the software, causing it to produce a result. This is similar to a person operating on software by clicking a button. Both the image and the button-click trigger the software to act in certain ways.

To explore this issue further, consider Chalmers’ (2006) definition of (strong) emergence. Chalmers picks up on a point raised in the first extract from Kim. What happens if an emergent property—like being the image of a face—simply doesn’t exist at the lower (e.g., clay) level? Although not making that point explicitly, Chalmers put it this way.

[A] high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low level domain.
If an emergent concept does not exist at a lower level, then truths about it can- not be deduced from lower level truths—at least not in the way “deduced from” is being used here. Putnam (1975) explored this question by asking how one would explain why a square peg can’t fit into a round hole (of incompatible size). Should the explanation be based on quantum mechanics, or should it be based on geome-

4/3/2018 NOTES NOTES NOTES NOTES NOTES NOTES NOTES 11 try? Putnam argued for geometry, pointing out that any explanation based on quantum mechanics can deal with only one specific peg, one specific hole, and one specific orientation of the peg and the hole. A geometrical explanation is much more general and hence more useful. Yet such a geometric explanation is not deducible from—and in fact has nothing to do with—quantum mechanics. It is geometry, not physics. Kim attempts to dispose of the problem of concepts that exist at a higher level but not at the lower level by arguing that such a situation can’t arise. He says that “many philosophers” want to argue (a) that the existence of such emergent con- cepts is established by multiple realization and (b) that such multiply realizable emergent properties can play important roles in the special (higher-level) sciences. He disputes that claim as follows. [If] the “multiplicity” or “diversity” of realizers means anything, it must mean that these realizers are causally and nomologically diverse. Unless two realizers of [an emergent property] E show significant causal/nomological diversity, there is no clear reason why we should count them as two, not one. It follows then that multiply realizable properties are ipso facto causally and nomologically heterogeneous. This is especially obvious when one reflects on the causal inheritance principle. All this points to the inescapable conclusion that E, because of its causal/nomic heterogeneity, is unfit to figure in laws, and is thereby disqualified as a useful scientific property. On this approach, then, one could protect E but not as a property with a role in scientific laws and explanations. You could insist on the genuine propertyhood of E as much as you like, but the victory would be empty. The conclusion, therefore, has to be this: as a significant scientific property, E has been reduced – eliminatively. I’m afraid that I don’t buy this argument. 
One problem is that it focuses too much on rebutting the significance of multiple realizability and not enough on the higher level abstractions/regularities themselves. A face made of clay is presumably multiply realizable. But why does that matter? What matters is that the face resembles someone's face, not that it could have been sculpted in many different ways from many different materials. I have more to say about multiple realization below.

Also it seems to me that multiply realized capabilities reflect either (a) the shape of the environment that if well managed will produce an advantage—but it doesn't matter how one realizes the management—or (b) an algorithm that works well even though it doesn't matter in what way that algorithm is realized. Birds, bats, bees, and airplanes fly. They do so in significantly different ways. So is flying multiply realized? So what? Let's assume that aerodynamics is reducible. (It's probably about as reducible as thermodynamics, about which there is no longer consensus. But let's assume it is completely reducible.) Does that make flying autonomous? If so, so what? What's important is that flying is a useful function. If different organisms figured out different ways to do it, that's probably to be expected. Yet flying reflects a theory about the environment that's completely reducible. So it's the "other side" of the function that's reducible, not the flying side. The same is true of virtually any other interaction with the environment. What about breathing, getting oxygen into the body? What about having a structure that isn't crushed by gravity or atmospheric pressure?

Similarly, if being able to count is useful, then if there are multiple realizations of that capability, so what? Why is it surprising? It isn't. Is it surprising that the abstraction of counting is real? I guess that's the point. Isn't it obvious that there are higher level laws and properties? If that's what the fuss is all about, it seems like a lot of work for very little. I don't see why the bonding properties of atoms and molecules don't count. After all, Fodor (1997) thinks that's how science operates, multiple realizability or not.

The cost of the former generalization is reifying not just the properties of being-swan-a and being-swan-b and being-swan-c … etc. but also the more abstract, higher level property of being-a-swan. The cost of the latter generalization is reifying not just the properties of being-in-neural-state-a and being-in-neural-state-b … and being-in-silicon-state-f … and being-in-Martian-state-g etc. but also the more abstract higher level property of being-in-pain. Pretty clearly, standard inductive practice is prepared to hypostatize in both cases. And the success of standard inductive practice suggests pretty clearly that it is right to do so. … But at least it's apparent that we have general methodological grounds for preferring a closed law to a corresponding open one, all else equal. Induction is a kind of market prudence: Evidence is expensive, so we should use what we've got to buy all the worlds that we can. … My story is that this policy complies with an injunction that all of our inductive practice illustrates: Prefer the strongest claim compatible with the evidence, all else being equal.
Quantification over instances is one rational compliance with this injunction; reification over higher level kinds is another. [But that’s Leibniz again, information efficiency.]

A more basic point about multiple realizability is that even to claim multiple realization is to presume/acknowledge the existence of entities that are doing the realizing. But that gives away the game before one begins.

Use information efficiency (AIT) as a criterion for the existence of entities. If one does it one at a time, one can use it to determine for each purported pattern whether it saves bits in the representation of the GoL state. Of course this will vary from run to run. A pattern may not appear on some runs.

Visualize GoL with patterns: background plain white without lines. Each pattern: on: black; off: white; border grey. Pattern borders may overlap but not pattern interiors (by definition).

Fodor (1974).

I take it that these remarks are obvious to the point of self-certification; they leap to the eye as soon as one makes the (apparently radical) move of taking the special sciences at all seriously. Suppose, for example, that Gresham's 'law' really is true. (If one doesn't like Gresham's law, then any true generalization of any conceivable future economics will
probably do as well.) Gresham's law says something about what will happen in monetary exchanges under certain conditions. I am willing to believe that physics is general in the sense that it implies that any event which consists of a monetary exchange (hence any event which falls under Gresham's law) has a true description in the vocabulary of physics and in virtue of which it falls under the laws of physics. But banal considerations suggest that a description which covers all such events must be wildly disjunctive. Some monetary exchanges involve strings of wampum. Some involve dollar bills. And some involve signing one's name to a check. What are the chances that a disjunction of physical predicates which covers all these events (i.e., a disjunctive predicate which can form the right hand side of a bridge law of the form 'x is a monetary exchange ↔ …') expresses a physical natural kind? In particular, what are the chances that such a predicate forms the antecedent or consequent of some proper law of physics? The point is that monetary exchanges have interesting things in common; Gresham's law, if true, says what one of these interesting things is. But what is interesting about monetary exchanges is surely not their commonalities under physical description. A natural kind like a monetary exchange could turn out to be co-extensive with a physical natural kind; but if it did, that would be an accident on a cosmic scale. In fact, the situation for reductivism is still worse than the discussion thus far suggests. For, reductivism claims not only that all natural kinds are co-extensive with physical natural kinds, but that the co-extensions are nomologically necessary: bridge laws are laws. So, if Gresham's law is true, it follows that there is a (bridge) law of nature such that 'x is a monetary exchange ↔ x is P', where P is a term for a physical natural kind. But, surely, there is no such law.
If there were, then P would have to cover not only all the systems of monetary exchange that there are, but also all the systems of monetary exchange that there could be; a law must succeed with the counterfactuals. What physical predicate is a candidate for 'P' in 'x is a nomologically possible monetary exchange iff Px'? To summarize: an immortal econophysicist might, when the whole show is over, find a predicate in physics that was, in brute fact, coextensive with 'is a monetary exchange'. If physics is general – if the ontological biases of reductivism are true – then there must be such a predicate. But (a) to paraphrase a remark Donald Davidson made in a slightly different context, nothing but brute enumeration could convince us of this brute co-extensivity, and (b) there would seem to be no chance at all that the physical predicate employed in stating the coextensivity is a natural kind term, and (c) there is still less chance that the coextension would be lawful (i.e., that it would hold not only for the nomologically possible world that turned out to be real, but for any nomologically possible world at all). I take it that the preceding discussion strongly suggests that economics is not reducible to physics in the proprietary sense of reduction involved in claims for the unity of science. There is, I suspect, nothing special about economics in this respect; the reasons why economics is unlikely to reduce to physics are paralleled by those which suggest that psychology is unlikely to reduce to neurology.

So if the fundamental issue is: are there higher level kinds and properties, my answer is: obviously yes. The GoL patterns and Turing machine demonstrate as much; the theory of entities describes how it happens. So does the fact that bats, birds, bees, and airplanes fly. "Flying things" isn't a higher level kind, but the ability to fly is a higher level property.
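The information-efficiency criterion for entities suggested earlier can be made concrete. In the sketch below (a hypothetical Game of Life state holding five gliders; the layout and cost counting are invented for illustration), a pattern earns entity status when describing the state as a named template plus its origins takes fewer numbers than enumerating the live cells:

```python
# Sketch of the information-efficiency (AIT-style) criterion for entityhood.
# Hypothetical Game of Life state: five gliders on an otherwise empty grid.

GLIDER = [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]   # one 5-cell glider phase
origins = [(3, 5), (10, 40), (22, 17), (40, 8), (50, 50)]

# Description 1: enumerate every live cell individually.
live_cells = [(r + dr, c + dc) for (r, c) in origins for (dr, dc) in GLIDER]
cost_enumeration = 2 * len(live_cells)                  # 50 coordinates

# Description 2: state the 5-cell template once, then list the five origins.
cost_with_pattern = 2 * len(GLIDER) + 2 * len(origins)  # 20 coordinates

# The pattern "pays for itself": naming it saves bits in the representation.
saves_bits = cost_with_pattern < cost_enumeration
print(cost_enumeration, cost_with_pattern, saves_bits)  # 50 20 True
```

As the notes above say, whether a given pattern saves bits varies from run to run; the point here is only the shape of the comparison.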


Not the same thing as Dennett's (Real beliefs) intentional stance, which is just about how we think about the world, not how the world actually is. In talking about gliders:

Note that there has been a distinct ontological shift as we move between levels; whereas at the physical level there is no motion, and the only individuals, cells, are defined by their fixed spatial location, at this design level we have the motion of persisting objects; it is one and the same glider that has moved southeast in figure 5.2, changing shape as it moves, and there is one less glider in the world after the eater has eaten it in figure 5.3. (Here is a warming-up exercise for what is to follow: should we really say that there is real motion in the Life world, or only apparent motion? The flashing pixels on the computer screen are a paradigm case, after all, of what a psychologist would call apparent motion. Are there really gliders that move, or are there just patterns of cell state that move? And if we opt for the latter, should we say at least that these moving patterns are real?) Notice, too, that at this level one proposes generalizations that require 'usually' or 'provided nothing encroaches' clauses. Stray bits of debris from earlier events can "break" or "kill" one of the objects in the ontology at this level; their salience as real things is considerable, but not guaranteed. To say that their salience is considerable is to say that one can, with some small risk, ascend to the design level, adopt its ontology, and proceed to predict—sketchily and riskily—the behavior of larger configurations or systems of configurations, without bothering to compute the physical level. … a working model of a universal Turing machine can in principle be constructed in the Life plane! … [Such a Turing machine] can play chess—simply by mimicking the program of any chess-playing computer.
… Looking at the configuration of dots that accomplishes this marvel would almost certainly be unilluminating … But from the perspective of one who had the hypothesis that this huge array of black dots was a chess-playing computer, enormously efficient ways of predicting the future of that configuration were made available. As a first step one can shift from an ontology of gliders and eaters to an ontology of symbols and machine states, and, adopting this higher design stance toward the configuration, predict its future as a Turing machine. As a second and still more efficient step, one can shift to an ontology of chess-board positions, possible chess moves, and the grounds for evaluating them; then, adopting the intentional stance toward the configuration, one can predict its future as a chess player performing intentional actions … In other words, real but (potentially) noisy patterns abound in such a configuration of the Life world, there for the picking up if only we are lucky or clever enough to hit on the right perspective. They are not visual patterns but, one might say, intellectual patterns. … The opportunity confronting the observer of such a Life world is analogous to the opportunity confronting the cryptographer staring at a new patch of cipher text, or the opportunity confronting the Martian, peering through a telescope at the Superbowl Game. If the Martian hits on the intentional stance—or folk psychology—as the right level to look for pattern, shapes will readily emerge through the noise.

Is computability theory projectable? Yes. Is it reducible? Yes, and no. Is it autonomous? Does the answer to that matter? What's missing from the discussion is the fact that higher level properties and kinds imply higher level entities. So what we really need is a theory of entities.


An example that demonstrates that multiple realization is not relevant but that higher level concepts matter is the example in the paper's abstract: why does a steel-hulled boat float? (This is similar to Putnam's peg-and-hole example.) My answer is that a steel-hulled boat floats for two reasons.

1. The theory of buoyancy tells us that any object will float if the weight of the water it displaces is greater than its own weight, i.e., if its overall average density is less than that of water.

2. It is possible to make virtually anything seem to have a density less than that of water by using a trick. Add some enclosed empty space to the object one wants to float. If the empty space is large enough, the combined volume of the empty space and the object to be floated will have an overall average density less than that of water. A steel-hulled boat consists of both the materials that make up the boat along with enough empty space to displace more than their weight in water.

Neither of these reasons has anything to do with the theory of steel microstructures—or the microstructure of any of the other materials of which a boat might be made. It's not that these ideas cannot be derived from the theory of steel. It's simply that these ideas are independent of steel. To use a term often found in discussions of emergence, they are autonomous. They don't exist at the level of the microstructure theory of steel. Yet added to the microstructure theory of steel, these two ideas explain why the boat floats. In Chalmers's terms, the truths these ideas express about floating boats are not deducible, even in principle, from truths about the microstructure of boat components. It isn't that the theory of steel micro-structures is not relevant. As I said about clay, what is important about the micro-structural properties of steel is that they enable one to use steel to construct a waterproof skin that encloses some empty space.
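The two reasons above amount to a one-line density calculation. A minimal sketch; the density figures are stock textbook values rather than anything from the paper:

```python
# Why a steel-hulled boat floats: average density versus water.
# Assumed stock figures: steel ~7850 kg/m^3, water ~1000 kg/m^3.

STEEL_DENSITY = 7850.0   # kg/m^3
WATER_DENSITY = 1000.0   # kg/m^3

def floats(steel_volume_m3, enclosed_empty_volume_m3):
    """True if hull material plus enclosed empty space has an overall
    average density below that of water (air mass neglected)."""
    mass = steel_volume_m3 * STEEL_DENSITY
    total_volume = steel_volume_m3 + enclosed_empty_volume_m3
    return mass / total_volume < WATER_DENSITY

# A solid block of steel sinks; wrap the same steel around enough empty
# space and the combination floats. 1 m^3 of steel needs more than
# 6.85 m^3 of enclosed space, since 7850 / (1 + 6.85) = 1000.
print(floats(1.0, 0.0))   # False
print(floats(1.0, 8.0))   # True  (7850 / 9 ≈ 872 kg/m^3)
```

Nothing in `floats` mentions steel's microstructure; only densities and volumes appear, which is the point.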
By analyzing steel's microstructure one can determine (a) whether steel's micro-structure blocks water and (b) whether steel can be produced in a shape that encloses space. But once it is established that steel can be used to construct a water-tight boat, steel's micro-structural properties have as little to do with whether the boat floats as the micro-structural properties of clay have to do with the features of a face. The key is that there are theories that come into play at a higher level that are independent of the properties of the lower level.

Is it surprising that there are laws that express regularities relating boats to water? Perhaps it is. But it seems to me that it's the same sort of surprise that we feel when we discover that there are laws (theorems) that express regularities about the natural numbers. One of my favorites is Lagrange's theorem: every natural number can be expressed as the sum of four or fewer squares. Why should that be? The natural numbers are simply zero and its successors. Why should they obey a constraint like that? It seems that virtually any collection of related entities embodies regularities that one might not at first expect. Why are things more constrained than they seem? I don't know. It's this sort of thing that strikes me as molto mysterioso.

Laws of this sort are not a matter of derivability. Formally, any theory T that is independent of some theory S is derivable from S. One simply ignores S and derives T. Adding S to the derivation presumably does no harm as long as S is not in conflict with any of the assumptions required by T. The more important point is that T is independent of S and autonomous with respect to S.

How does Kim deal with this sort of higher level law? In what to me is a confusing passage Kim (2006, pp. 556-557) seems either to say that irreducibility is not meaningful or to acknowledge that emergent properties are irreducible, but it's ok. (As I said, this is another passage that I don't understand.)

If we know that X is reducible to Y, we know something interesting and important about the relationship between X and Y. And if we also know that U is reducible to W, we know something common that the pairs ⟨X, Y⟩ and ⟨U, W⟩ share [namely the reducibility relationship]. I believe we can take reducibility as a genuine relation characterizing two domains of properties, or two theories. But this does not mean that irreducibility, namely the absence of reducibility, is also a genuine and informative relation. As has often been observed, being red is a property but that does not mean that being nonred is also a genuine property. There are too many diverse things that are nonred: green things, yellow things, transparent things, numbers, atoms and molecules, thoughts and ideas, propositions, and countless other sorts of things. The same applies to relations and their negations. Number theory is irreducible to hydrodynamics and vice versa. Chemical properties are irreducible to biological properties; geological properties are irreducible to economic properties and vice versa.
If emergent properties are irreducible to their base properties, does this instance of irreducibility have anything in common with those other cases of irreducibility? The answer, I believe, has to be "none".

As I understand this passage, Kim is saying that irreducibility is not a well-defined relationship. So to say that number theory and hydrodynamics are mutually irreducible is not saying very much because irreducibility is not a well-defined relationship. Yet since reducibility is presumably a well-defined relationship it's not clear to me why the statement "X is irreducible to Y" cannot be understood as saying that "X is reducible to Y" does not hold. Why isn't this similar to saying that "X is nonred" means that the property red, which we presumably have a way of defining, doesn't apply to X?

Kim's larger point is that no positive characterization of emergence has been produced. Irreducibility, he says, is a negative condition. He also says that supervenience is a negative condition. But supervenience and irreducibility are, he says, the defining characteristics of emergence.

[Supervenience and irreducibility] tell us what emergence is not; they do not tell us anything—at least, not much—about what it is. I believe one pressing item on the emergentist agenda is to provide an illuminating positive characterization of emergence.

I'm confused about where this leaves Kim. He has written a lot about emergence and reduction. Does he believe that he has established that supervenience and irreducibility don't hold? If so then why does it matter that there are (in his opinion) no positive conditions? Whatever they might be, they presumably won't hold either since if supervenience and irreducibility don't hold there can't be an instance of emergence.

On the other hand, if Kim doesn't believe that he has established that the negative conditions don't hold—if he agrees, as he apparently does, that for example, biology and chemistry are mutually irreducible—then isn't that all anyone has asked? What's left to discuss? Biology and chemistry are mutually irreducible, and biology supervenes over chemistry. Isn't that an example of what most people mean by emergence? What remains of the mystery? As I said, I'm confused.

Natural kinds and other issues about entities

When does a collection of entities define a natural kind? When it instantiates a useful abstraction? At one point I said that an entity is an instance of an abstraction. But I also want to build entities constructively. A kind is a human-imposed categorization. So it's up to us. And when there is a useful abstraction seems good enough. That's consistent with the common philosophical understanding of a natural kind anyway. What about the problem of the Many (Weatherson 2005)? It's the pattern that is the entity, not the pieces.

Natural kinds as anything that delivers information efficiency in the AIT sense. Also, I would require that a kind be defined constructively in terms of the pattern that makes it up. Actually, I'm talking about entities and not kinds. Kinds are generalized from entities—and entities must actually exist physically. So, do I get to kinds, or is there just a world full of individual entities?

Brigandt (2008)

While species had originally been considered as classes or natural kinds, the view that species are individuals (SAI) was proposed in response to the serious problems facing a construal of species as kinds (Ghiselin 1974; Hull 1978). Most importantly, species are historical entities: a species originates, it persists across time at specific spatial locations, it can undergo substantial evolutionary change, and it can go extinct. The traditional notion of a natural kind is inadequate when applied to species as this notion was tied to kinds as found in physics and chemistry. The traditional account (used especially by metaphysicians and philosophers of language) construed a natural kind as a special type of class characterized by two features. (1) All members of a natural kind have the same characteristic properties, permitting universal generalizations, such as laws of nature (e.g., all oxygen atoms share physical properties and can undergo the same chemical reactions).
(2) The identity and boundary of a natural kind is metaphysically determined by an essence; an object belongs to the kind in virtue of having this essential property. The essence is epistemologically fundamental in that it explains the characteristic properties of the kind (e.g., the essence of oxygen is its atomic structure, which explains all physical and chemical properties of oxygen). The first condition does not apply to species as there is substantial variation across the members of a species, and even a feature shared by all conspecifics at a time may be modified in evolution. In the case of the second condition,
though it has never been part of the definition of an essence, an essence has typically been taken to be an intrinsic property of a kind member, as in the case of chemical structure. But no intrinsic property (= internal feature) of an organism—be it genotypic or phenotypic—can serve as the definition of its species (in contrast to merely diagnostic features), as other species members have or may evolve different features.

Griffiths (2008)

I use the traditional term 'natural kind' to denote categories which admit of reliable extrapolation from samples of the category to the whole category. In other words, natural kinds are categories about which we can make scientific discoveries. In my book I built on the work of several other philosophers and scientists to construct an account of natural kinds in psychology and biology, an account further elaborated in (Griffiths, 1999, 2001a) and briefly sketched here. The fundamental scientific practices of induction and explanation presume that some of the observable correlations between properties are 'projectable' (Goodman, 1954). That is, correlations observed in a set of samples can be reliably 'projected' to other instances of the category. Scientific classifications of particulars into categories embody our current understanding of where such projectable clusters of properties are to be found. The species category, for instance, classifies particular organisms into sets that represent reliable clusters of morphological, physiological and behavioral properties. Hence, these properties of the species as a whole can be discovered by studying a few members of the species. The traditional requirement that natural kinds be the subjects of universal, exceptionless 'laws of nature' is too strong and would leave few natural kinds in the biological and social sciences where generalizations are often exception-ridden or only locally valid.
Fortunately, it is easy to generalize the idea of a law of nature to the broader idea that statements are to varying degrees ‘lawlike’ (have counterfactual force). This broader conception of a lawlike generalization allows a broader definition of a natural kind. A category is (minimally) natural if it is possible to make better than chance predictions about the properties of its instances. This, of course, is a very weak condition. Very many ways of classifying the world are minimally natural. The aim is to find categories that are a great deal more than minimally natural. Ideally, a natural kind should allow very reliable predictions in a large domain of properties. The classic examples of natural kinds, such as chemical elements and biological species, have these desirable features. It is important to note that categories are natural only relative to specific domain(s) of properties to which they are connected by background theories. The category of domestic pets is not a natural category for investigating morphology, physiology or behavior, but might be a natural category in some social psychological theory or, of course, in a theory about domestication. Emotion, I argue, is not a natural kind relative to the domains of properties that are the focus of investigation in psychology and the neurosciences. It is not the case that the psychological states and processes encompassed by the vernacular category of emotion form a category which allows extrapolation of psychological and neuroscientific findings about a sample of emotions to other emotions in a large enough domain of properties and with enough reliability to make emotion comparable to categories in other mature areas of the life sciences, such as biological systematics or the more robust parts of nosology.


The Game of Life Turing machine

To return to something that I understand better, here's the more formal example I promised earlier. In "Emergence Explained" (2006) I discussed the example of a Turing machine implemented on a Game of Life grid. Such Turing machines are subject to—in fact are the subject of—the theory of computability. It may be that there are multiple ways to implement a Turing machine using a Game of Life framework. But it doesn't matter whether there are or not. What matters is that computability theory applies to Turing machines no matter where or how they are implemented, and computability theory is independent of the Game of Life rules. Turing and others developed computability theory before Conway invented the Game of Life. Its truths have nothing to do with the truths of the Game of Life.

As noted above, one could argue that since computability theory was derived from scratch, it is derivable from the rules of the Game of Life. It doesn't depend on the Game of Life rules as a basis. But that doesn't seem to be what Kim, Chalmers, or anyone else in this debate has in mind when they speak of derivability. Computability theory is autonomous in a Game of Life world. As a theory characterizing a set of entities (Turing machines) it neither depends on nor is in conflict with the rules of the Game of Life. Because of that autonomy, if one uses Game of Life patterns to implement Turing machines there are things one can say about those Turing machines—that they compute particular functions, that their halting problem may be undecidable, etc.—that have nothing to do with the Game of Life rules.

Yet every Turing machine that runs on a Game of Life grid is completely determined by (and reducible to) the Game of Life rules. But like the reducibility of facial features to molecules of clay, reducibility of this sort is useless. One wants to talk about Game of Life Turing machines at the Turing machine level and not at the level of Game of Life grid cells.
This is similar to my claim earlier that the ability to describe how a particular face was modeled by a particular block of clay is not useful. When speaking of faces one wants to be able to talk at the level of faces, e.g., that the nose is not symmetric, that the eyes are especially large, etc. The same is true of Turing machines. It's a matter of being able to say which functions are being computed and whether the halting problem is decidable. There is simply no vocabulary available at the Game of Life level to express those ideas. I don't see how Kim's argument claiming that new concepts are not eligible to serve in scientific theories holds in these cases.

A Turing machine in a Game of Life world is not all that different from Fodor's (1974) example of Gresham's law (that bad money drives out good). Fodor denied that Gresham's law can be derived from quantum mechanics. As far as I know, he's right. Gresham's law cannot be derived from quantum mechanics because it has nothing to do with quantum mechanics. There is nothing in Gresham's law that depends on properties that one finds at the level of quantum mechanics.
Like computability theory it is an abstraction that stands on its own. It is autonomous. In "The reductive blind spot" (2008) I discuss how the computer science notion of level of abstraction clarifies these issues—including how the evolutionary process itself defines an autonomous level of abstraction.

It's important to stress, though, that higher level entities don't spring into being de novo. They are implemented by lower level entities. Game of Life patterns are implemented by Game of Life rules. What happens when Game of Life patterns interact depends on how the Game of Life rules play out. The "laws" describing the interactions among Game of Life patterns depend entirely on Game of Life rules. As I noted in "Emergence Explained" (2006) one can build a catalog of such interactions. That catalog does not precede the rules. It results from the rules. Yet, once that catalog is in place, one can use the interactions to build constellations of patterns and interactions, leading eventually to a Game of Life Turing machine.

The same is true of Gresham's law. People exist only because we are material beings who are implemented ultimately by fundamental particles. But as dynamic entities (see the section on entities), we are subject to evolutionary pressures—which reflect laws that express how the fate of dynamic entities is determined. This leads ultimately to the impulse to hoard good money if bad money appears in the marketplace. It's a fairly long story, but it works in the material world only because entities of all levels (a) are implemented by lower level entities and (b) are subject to laws (as Chalmers says, truths) that constrain their interactions at their own level.
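The substrate-independence point can be illustrated with a toy Turing machine runner. Nothing in the sketch below says how states and symbols are realized; a Game of Life implementation would satisfy the same description. The example machine (unary successor) and its encoding are invented for illustration:

```python
# A minimal Turing machine runner. The description is substrate-neutral:
# the machine and its tape could be realized as Game of Life patterns
# without changing anything computability theory says about it.

def run_tm(transitions, tape, state="start", head=0, max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, new_symbol, move)."""
    cells = dict(enumerate(tape))          # sparse tape; '_' means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, cells[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Unary successor: scan right over a block of 1s, then append one more 1.
succ = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run_tm(succ, "111"))   # 1111
```

Facts such as "this machine computes the successor function in unary" are stated, and proved, at this level of description; the Game of Life rules never appear.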

Is a Game of Life glider reducible to the Game of Life rules?

Supervenience doesn’t hold. What about inter-theoretic mappings? Kim dismisses Nagel’s (1961) bridge laws, which, generalized to inter-theoretic reduction have also been dismissed by Howard (2007). I quote Howard on this point below. The philosophical emptiness of Nagel reduction (at least in contexts like mind-body reduction), if it isn't already evident, can be plainly seen from the following fact: a Nagel reduction of the mental to the physical is consistent with, and sometimes even entailed by, many dualist mind-body theories, such as the double-aspect theory, the theory of preestablished harmony, occasionalism, and epiphenomenalism. It is not even excluded by the dualism of mental and physical substances (although Descartes' own interactionist version probably excludes it). This amply shows that the antireductionist argument based on the unavailability of mind-body bridge laws -- most importantly, the multiple realization argument of Putnam and Fodor -- is irrelevant to the real issue of mind-body reduction or the possibility of giving a reductive explanation of mentality. Much of the debate over the past two decades about reductionism has been carried on in terms of an inappropriate model of reduction, and now appears largely beside the point for issues of real philosophical significance. To build bridge laws between the GoL rules and glider laws one must define a glider in GoL terms. That can be done. I defined patterns. A glider is a repeating sequence of patterns.

4/3/2018 NOTES NOTES NOTES NOTES NOTES NOTES NOTES 21

So given a glider initial condition along with the GoL rules, one could presumably show that the pattern will repeat and will do so with a particular frequency.
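That check can be carried out mechanically. A minimal sketch (my own illustrative code, not from the paper): simulate the GoL rules on a glider’s initial configuration and verify that after four steps the same pattern reappears, translated one cell diagonally.

```python
from collections import Counter

def step(live):
    """One application of the Game of Life rules to a set of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# The glider reappears with period 4, displaced one cell down and right.
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

The glider is thus defined entirely in GoL terms (a repeating sequence of patterns), yet the regularity—period 4, diagonal displacement—is a fact about the pattern, not about any individual cell.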

Lessons of Game of Life Turing machines

I wish to propose the following position.

• In a Game of Life world, the Game of Life rules are analogous to the fundamental laws of physics. (This is the position taken by Dennett (1991).)

• Turing machines in a Game of Life world are emergent entities with emergent properties.

• Not only are Game of Life Turing machines emergent, they are objectively identifiable through entropy considerations. They form distinguishable patterns of activities on the Game of Life grid. Hence they are objectively real—to the extent that anything in a Game of Life grid is real.

• Like everything else in a Game of Life world, Game of Life Turing machines are subject to and controlled by the Game of Life rules. Nothing happens in a Game of Life world other than that the Game of Life rules are applied and the consequences ensue.

• Computability theory is the “special science” of Game of Life Turing machines. It provides information about Game of Life Turing machines that is not available at the level of the Game of Life rules. It would be perversely unscientific not to study it.

• One could make a further case for studying computability theory in a Game of Life world on the basis of information efficiency. In reviewing the significance of algorithmic information theory, Chaitin (2003) discusses how it can be used to distinguish science from data. Chaitin credits Leibniz with first expressing this perspective in his Discourse on Metaphysics.

  Leibniz observes that for any finite set of points there is a mathematical formula that produces a curve that goes through them all, and it can be parameterized so that it passes through the points in the order that they were given and with a constant speed. So this cannot give us a definition of what it means for a set of points to obey a law. But if the formula is very simple, and the data is very complex, then that's a real law! Algorithmic Information theory (AIT) puts more meat on Leibniz's proposal, it makes his ideas more precise by giving a precise definition of complexity.

  If one wants to do science, one wants the most powerful, i.e., the most compact, expression of nature’s regularities that one can find. Algorithmic information theory demonstrates that there is a way to make such a measure of compactness precise.

I would take a similar position with respect to the boat example.
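Chaitin’s point can be illustrated crudely in code. As an assumption of mine (not from the paper), compressed length stands in for algorithmic complexity here; zlib is only a rough proxy for a minimal program, but the contrast is stark.

```python
import random
import zlib

# Data generated by a simple rule: a short program describes all of it.
lawful = bytes(i % 7 for i in range(10_000))

# Data with no short description (pseudo-random, fixed seed for repeatability).
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10_000))

# The "lawful" data compresses to a tiny fraction of the "noisy" data:
# a simple formula fitting complex data is, in Leibniz's sense, a real law.
assert len(zlib.compress(lawful)) < len(zlib.compress(noisy)) // 10
```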


• Buoyancy is the upward force on an object produced by a liquid or gas in which the object is (fully or partially) immersed. The upward force is equal to the weight of liquid or gas displaced.

• The theory of buoyancy summarized in the preceding bullet abstracts the notion of buoyancy in a gas or liquid within a gravitational field. It is parameterized—i.e., abstracted—to capture regularities at an abstract level and to be independent of any particular gravitational field, any particular liquid or gas, and any particular object whose buoyancy is at issue.

• The theory of buoyancy is not reducible to any lower level theory. Since I’m not a physicist, I may be on shaky ground here. I base this conclusion on the following from Howard (2007), in which he disputes Nagel-style reductionism. (Howard is a Professor of the Philosophy of Science at Notre Dame and a Fellow of the American Physical Society.)

  [O]ne is hard pressed to find a genuine example of inter-theoretic reduction outside of mathematics. … That inter-theoretic reduction might not be a helpful way to think about inter-level relationships is perhaps best shown by pointing out that everyone’s favorite example of a putatively successful reduction–that of macroscopic thermodynamics to classical statistical mechanics–simply does not work. … Are macroscopic thermodynamic phenomena, therefore, emergent with respect to the mechanical behavior of the individual molecular and atomic constituents of the systems of interest? Yes, if emergence means the failure of inter-theoretic reduction. Is that an important fact? Yes, if our aim is to undermine dogmatic reductionist prejudices or to unsettle the presupposition that physics, generally, is a paradigmatically reductionist science. Otherwise, the significance of there not being a reduction of thermodynamics to statistical mechanics is not so clear.

  Since macro-level buoyancy is similar to thermodynamics in that were either to be reduced it would be reduced to the statistical behavior of large numbers of atoms or molecules, I’m supposing that its reduction doesn’t work either. This is not to say that buoyancy is a new force of nature, only that it is meaningful only at a macro level.

• The conclusion is that buoyancy applies to macro objects at a macro level. Like computability theory it gives us information about macro objects that cannot be expressed in the language available at the micro level. Buoyancy theory lets us conclude that a certain steel-hulled boat will float.

• This lets us conclude that the molecules in the steel hull will not sink. This isn’t downward causation. It is downward entailment from the consequences of macro phenomena to the micro entities that implement the macro objects that are subject to the macro phenomena.
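The parameterized character of buoyancy theory can be shown with a small calculation (illustrative densities and function names of my own, not from the paper):

```python
RHO_WATER = 1000.0   # kg/m^3
RHO_STEEL = 7850.0   # kg/m^3  (steel is denser than water)

def floats(steel_volume_m3, enclosed_volume_m3):
    """Archimedes: a body floats iff the fluid it can displace outweighs it.

    The rule is stated entirely at the macro level; the particular fluid,
    object, and gravitational field enter only as parameters (g cancels).
    """
    weight_kg = RHO_STEEL * steel_volume_m3
    max_displaced_kg = RHO_WATER * enclosed_volume_m3
    return max_displaced_kg > weight_kg

assert not floats(1.0, 1.0)   # a solid steel block sinks
assert floats(1.0, 10.0)      # the same steel formed into a hollow hull floats
```

Nothing in the calculation mentions molecules; the downward entailment is that once the hull floats, the molecules composing it do not sink.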


Mystery solved?

So is there any mystery left to emergence? I don’t think so. The basic picture is that one can construct new things (like Turing machines, faces, or steel-hulled boats) by putting together existing things (like Game of Life patterns, clay, or steel plates). Those new things will conform to laws or have properties (computability theory, facial characterization properties, or buoyancy theory) that may have nothing (much) to do with the laws governing the components of which the new things are built. The parenthetical “much” reflects two qualifications.

1. The new higher level laws and properties must be consistent with the lower level laws. It is the lower level laws that enable the lower level elements to implement the higher level elements. If the higher level were not consistent with the lower level, the implementation would fail. But given such consistency, the higher level laws are autonomous with respect to the lower level and can be looked to when we want to understand how the higher level entities function in the world.

2. It matters whether it is possible to implement the higher level entities from those at the lower level. Is it possible to implement a Turing machine using Game of Life patterns, a face from a block of clay, or a boat from steel plates? There is no automatic answer for that. The answer depends on the lower level elements and the capabilities they offer. (It’s certainly not easy to build a Turing machine using Game of Life patterns!) But once the implementation barrier has been hurdled, the properties of the lower level components have little to do with the properties of the higher level constructs—except to limit the conditions under which the implementation succeeds. A steel-hulled boat won’t float if the steel melts, and a Turing machine that requires more space than a Game of Life grid has available won’t run.

Non-reductive physicalism metaphysical

As I understand it, the position I’m advocating has been described by Loewer (2007a) as non-reductive physicalism metaphysical (NRPM). Loewer characterizes non-reductive physicalism (NRP) as taking either of two positions.

NRP is non-reductive in that it says that the special sciences involve laws, causal relations, explanations, and so on that are, “in a certain sense”, irreducible to those of physics and it is physicalist in “a certain sense” since it says that everything is ultimately constituted micro-physically and that the laws of microphysics are complete in the domain of micro-physics. Fodor remarks that NRP is now (and has been since the 1970s) “conventional wisdom” having replaced the reductionist conventional wisdom of previous generations. Unfortunately, like much conventional wisdom it is not so clear exactly what it comes to. Advocates of NRP differ on how to understand “irreducible” and “physicalism.”

The first way—which I label “NRPM” (for “non-reductive physicalism Metaphysical”)—understands the irreducibility of the special sciences as involving the existence of kinds and laws that are metaphysically over and above the kinds and laws of physics. NRPM endorses physicalism in so far as it claims that everything that exists is physically constituted and every special science nomological/causal transaction is physically implemented. [It] doesn’t say that the fundamental laws of physics can be overridden or are gappy. However NRPM [does say] that special science laws are autonomous from the laws of physics.

According to the other way of understanding NRP—which I label NRPL (nonreductive physicalism light)—the irreducibility of the special sciences is not metaphysical but merely conceptual and epistemological. According to NRPL the special sciences contain vocabulary/concepts that are conceptually independent of the concepts/vocabulary of physics. NRPL also allows that special sciences contain their own confirmation (and other epistemic) relations that are independent of physics. A biologist may have evidence that a biological generalization is lawful (think of the Mendelian laws) without having any idea how this regularity is rendered lawful or implemented by fundamental laws of physics even though the former is grounded in the latter. However, NRPL, in contrast with NRPM, holds that the nomological structure of the world is completely specifiable by fundamental physics. The special sciences don’t add to the nomological structure (as they do according to NRPM) but rather they characterize aspects of the structure generated by the fundamental physical laws that are especially salient to us and amenable to scientific investigation in languages other than the language of physics.

NRPM and NRPL agree that the special sciences are conceptually, epistemologically, and methodologically autonomous/irreducible to physics but disagree about what autonomy/irreducibility consists in and how it is to be explained. NRPM says that the autonomy/irreducibility is metaphysical and seeks to explain the conceptual and epistemological autonomy in terms of the existence of special science kinds and laws. If NRPL is true then the autonomy/irreducibility of the special sciences isn’t explained in terms of basic special science kinds and laws but must ultimately be due to facts and laws of micro-physics and to our epistemological situation in the world (which it says is also due to the facts and laws of micro physics).

The two views also disagree about physicalism. NRPL is compatible with strong versions of physicalism on which all truths, including those of the special sciences, hold in virtue of facts and laws of fundamental physics. NRPM rejects this strong claim since it says there are facts about kinds and laws of special sciences that are independent of physics but it is compatible with token physicalism … .

I believe that my position is NRPM since I claim that higher level entities “really” exist and that they are constrained by laws that are independent of physics. See the section on entities for a further discussion of why I claim that higher level entities really exist.

Reducibility in Computer Science

A (higher level) predicate Ph(x) is reducible to a (lower level) predicate Pl(x) if there is a computable function g such that Ph(x) if and only if Pl(g(x)).

In other words, if one can determine whether Ph(x) holds by asking whether Pl(g(x)) holds, then Ph is reducible to Pl. If there is an implementation of x as g(x), then that can serve as the reduction. This may seem backwards; one maps from the higher to the lower, but really g(x) is the implementation of x at the lower level.
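A toy instance of the definition (my own illustrative predicates, not from the paper): let Ph(n) say “n is even”, Pl(m) say “m is divisible by 4”, and g(n) = 2n. Then Ph(x) holds exactly when Pl(g(x)) does, so Ph reduces to Pl.

```python
def p_h(n):            # higher level predicate: n is even
    return n % 2 == 0

def p_l(m):            # lower level predicate: m is divisible by 4
    return m % 4 == 0

def g(n):              # the computable mapping (the "implementation" of n)
    return 2 * n

# P_h(x) iff P_l(g(x)) for every input, which is what reducibility requires.
assert all(p_h(n) == p_l(g(n)) for n in range(-100, 100))
```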


To apply this to a GoL Turing machine, one asks whether halts(TMa) by asking whether halts(gol(TMa)). If halts(gol(TMa)) is decidable, then so is halts(TMa). But we already know that halts(TMa) is not decidable. So halts(gol(TMa)) cannot be decidable.
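The contrapositive can be sketched as a composition. Everything here is hypothetical scaffolding of my own: gol_encode and gol_halts cannot exist as total computable functions for the real halting problem; the point is only the shape of the reduction.

```python
def halts_via_gol(tm, gol_encode, gol_halts):
    """If gol_halts decided halting for GoL-encoded machines, then composing
    it with the encoding would decide halting for Turing machines outright."""
    return gol_halts(gol_encode(tm))

# Toy stand-ins just to exercise the composition: a "machine" is a record
# that carries its own (oracular) halting status.
tm = {"name": "TM_a", "halts": True}
encode = lambda m: ("gol-pattern", m)          # wrap, as gol(TM_a) would
decide = lambda pattern: pattern[1]["halts"]   # pretend decider

assert halts_via_gol(tm, encode, decide) is True
```

Since halts(TMa) is undecidable and the composition above would decide it, no such gol_halts can exist.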

A few more words about multiple realization

Multiple realizability seems to be central to much of the discussion of emergence. But I don’t see why. As the example of the Game of Life Turing machine showed (at least I think it showed it), multiple realizability is not relevant to whether higher level laws are autonomous. So I don’t understand why multiple realizability plays such a central role in discussions of emergence and reducibility.

Furthermore, it seems to me that most of the claimed examples of multiple realization are misleading at best. My basic point will be that multiple realizability does not apply in a useful way to naturally occurring entities. Since multiple realizability comes up in the discussion of the non-reducibility of mind, and since mind (as far as we know) is a property only of naturally occurring entities, my argument will be that multiple realizability is misused in those discussions.

Putnam (1975) defines multiple realizability in terms of functional isomorphism.

[T]wo systems are functionally isomorphic if there is an isomorphism that makes both of them models for the same … theory. … [Two] systems can have quite different constitutions [e.g. they might be made of copper, cheese, or soul] and be functionally isomorphic.

Since Putnam is speaking at the level of abstract systems and models, I have no complaint about this statement. Putnam was applying concepts from computing. I agree that multiple realization occurs in computing. Two computing devices may implement the same function in radically different ways.

Problems begin to arise when one looks more carefully at what is being claimed as being multiply realized—the theory for which two systems both serve as models. The easiest “theories” to realize in multiple ways are the input/output behaviors of systems.
If two systems have well-defined interfaces, and if the (presumably symbolic) inputs and outputs that traverse those interfaces can be mapped to each other, then the claim that the two systems multiply implement the same input/output behavior seems to be fairly straightforward. (Or at least it does on a symbolic level. In real life it is often very important how long it takes a system to convert an input into an output. If we want to take time into consideration when we say that two systems implement the same behaviors in different ways, the problem becomes significantly harder. But let’s ignore that issue.)

An alternative claim may be that two systems implement the same computational process in isomorphic but different ways. That is, it may be claimed that the two systems perform the same internal computations, i.e. that they each have a set

of states that can be mapped to each other and that the state transitions that they perform are identical under that mapping. If two such systems are composed of different materials, e.g., one is made of silicon chips and the other is made of cogs and wheels (or, as Putnam says, copper, cheese, or soul), then it makes sense to say that the two systems implement the same computation in different ways. Clearly this second version of the multiple realizability requirement is significantly more stringent than the first.

Multiple realizability is often used in arguing that psychological theories are multiply realized. Bickle (2008) explains this use of multiple realizability as follows.

In the philosophy of mind, the multiple realizability thesis contends that a single mental kind (property, state, event) can be realized by many distinct physical kinds. A common example is pain. Many philosophers have asserted that a wide variety of physical properties, states, or events, sharing no features in common at that level of description, can all realize the same pain.

Since Bickle uses the term pain, a term referring to subjective experience, he is presumably requiring that multiple realization occur at the internal state-transition level and not at the behavioral level. After all, pain isn’t defined at the behavioral level; it is by definition a subjective experience. But it seems presumptuous to me for anyone to claim that we know what sorts of psychological states an organism that is capable of feeling pain has and what state transitions occur within such an organism when it is experiencing pain. The very term “the same pain” seems to me to be so ill-defined that I don’t understand how a discussion can proceed without defining it further. Subjective experience is still one of the great areas of ignorance.
Given that we know so little about subjective experience and how it is implemented, how could anyone argue that two organisms have isomorphic states that they traverse in the same way when they are feeling pain? I understand that if one could establish (a) that two organisms had isomorphic pain states and state transitions but (b) were implementing them differently, one would have a case for multiple realization. But I can’t get past the step of establishing what the internal subjective states of an organism in pain (especially the same pain) are—much less establishing that two organisms have isomorphic internal subjective pain states and state transitions.

Besides, doesn’t this become an empirical question? It will take empirical investigation to establish what states and state transitions are associated with (the same) pain. Once we know that, if we ever do, it will be much easier to compare the states and state transitions of different organisms and see whether they are isomorphic. But ultimately this seems like an empirical question. I don’t see how it has come to be accepted as a major part of the foundation for so much of current-day philosophy.

Even if one retreated to the behavioral, i.e., input/output definition I would still have a problem. To say that two entities realize the same input/output “theory” requires that there be a theory to be realized, i.e., that the theory precede the realization. When we as human designers build an artifact, the theory does come first. This, of course, is approximate. All design, natural and man-made, is iterative. But in rough terms, human designers visualize a result and then build a system to realize that vision. That’s certainly not what happens in nature. Nature doesn’t visualize a theory and then implement/realize it. Nature simply strikes out in random directions. If something succeeds, it persists; otherwise it disappears.

I understand that even though nature doesn’t realize pre-existing theories, the point is that we as observers may come up with a theory that multiple organisms all realize. Even so, it’s not clear to me that looking at a biological organism as an entity that computes a function would be very useful. For one thing, biological organisms tend not to be usefully seen as manipulating symbols. So it’s not clear what sort of function we want to explore.

It might be more useful to examine how different biological organisms perform similar functions in their environments. For example, bats, birds, and bees all fly. What can we make of that? Since the common ancestor of birds, bats, and bees did not fly, the ability to fly must have developed independently in each group. The fact that birds, bees, and bats all fly is a consequence of the fact that they all exist in an ocean of air and that flying is a useful way to get around in that environment. Because flying can provide an evolutionary advantage, it developed a number of different times. Does this establish that flying is multiply realized? And if so, so what? What I think is more interesting is that looking at flying as an example of how an organism interacts with its environment forces us to express ourselves at the level of the organism.
In effect, we are already operating at the macro level and describing macro properties. When talking about the fact that bats, birds, and bees fly, it makes no sense to speak at the level of bat, bird, and bee components. We could create a predicate flies(x), which we claim applies to bats, birds, and bees as groups of organisms. We may then find a number of things to which that predicate, as we define it, applies: flies(birds), flies(bees), flies(bats), …. But that doesn’t mean that nature started with the predicate and implemented it in multiple ways. Nature doesn’t do that.

In both of these cases (flying and swimming) the specification, to the extent that there is one, is given by the environment (air and water) and the usefulness of being able to propel oneself through it. We may call what bats, birds, and bees do by the same name, and we may call what dolphins and flounder do by the same name, but these are only our names, not a formal abstraction that nature has contracted to implement. The point is that these activities exist at a higher level of abstraction than the level of their implementation, but that’s not the same thing as saying that they’re the same from one implementation to another.


I think that the “so what” is more important. Presumably, the “so what” is that if flying has been multiply realized then flying is autonomous from lower level considerations. But is that so? One might argue that even though birds, bats, and bees all fly, they fly in such different ways that it’s not clear that it makes sense to group them together as a single property (the ability to fly) that has been multiply realized. Perhaps even more to the point, the simple fact of asserting that any one of them flies declares flying an autonomous property. It isn’t the case that the components of any of them fly. Of course we could arbitrarily define a model that they all satisfy. But is that particularly useful? In some sense it is, but how useful is it to say that bats, birds, and bees multiply realize flying? Does that make flying any more real than it would be if only bats (or birds or bees) flew? I don’t see why it does. Flying is a relationship between the flier and its environment.

On the other hand, when human beings create things, we generally start with a specification—or at least an idea—which we then implement. Once we have a specification, it’s often clear that there are many ways to implement it. Putnam’s original example of multiple realization was of a computing device, a human creation that is described by a specification.

But naturally occurring entities don’t have specifications. There is no specification of a bee or a bat. As human beings we can find similarities among groups of entities and we can find ways of grouping entities together, typically by reference to their genomes. But that isn’t a specification created by nature. It’s just the way it turned out. That’s why biology is so messy. We tend to think in terms of clean divisions: a person is either a male or a female, for example. But we now know that it is nowhere near that clear.
A person may have two X chromosomes but may also have gene switches that failed to switch on some of the male-property producing genes. Such a person looks externally like a female. According to Fausto-Sterling (2000), The concept of intersexuality is rooted in the very ideas of male and female. In the idealized, Platonic, biological world, human beings are divided into two kinds: a perfectly dimorphic species. Males have an X and a Y chromosome, testes, a penis and all of the appropriate internal plumbing for delivering urine and semen to the outside world. They also have well-known secondary sexual characteristics, including a muscular build and facial hair. Women have two X chromosomes, ovaries, all of the internal plumbing to transport urine and ova to the outside world, a system to support pregnancy and fetal development, as well as a variety of recognizable secondary sexual characteristics. That idealized story papers over many obvious caveats: some women have facial hair, some men have none; some women speak with deep voices, some men veritably squeak. Less well known is the fact that, on close inspection, absolute dimorphism disintegrates even at the level of basic biology. Chromosomes, hormones, the internal sex structures,


the gonads and the external genitalia all vary more than most people realize. Those born outside of the Platonic dimorphic mold are called intersexuals. … Consider, for instance, the gene for congenital adrenal hyperplasia (CAH). When the CAH gene is inherited from both parents, it leads to a baby with masculinized external genitalia who possesses two X chromosomes and the internal reproductive organs of a potentially fertile woman.

If different organisms have evolved different mechanisms to experience pain (whatever it really means to say that), that doesn’t make pain an abstraction that they have all implemented. This is similar to the argument that even though birds, bees, and bats have evolved ways to fly, it doesn’t make sense to say that “flying,” as an abstraction implemented by nature, is multiply implemented. Each of the three evolved ways to fly because they all live in an environment that includes flyable air and they (evolutionarily) figured out how to propel themselves through it. The same can be said for dolphin fins and flounder fins. They are both used for swimming, but they are not multiple realizations of an abstract ability to swim.

A significant difference between specifications that can be implemented in multiple ways and the philosophical notion of multiple realizability is that specifications are intended to describe systems from the outside. Multiple realizability is not clear about what it is trying to do. If what is being multiply realized is a specification, then the claim is the same sort of claim as multiple implementation in computing. But sometimes that isn’t so clear. Pain, for example, is not a specification of anything. So it isn’t clear what it means for it to be multiply realized. Even being a mousetrap isn’t very clearly specified, since the interface is not well spelled out. In CS one describes multiple implementations of an interface.
In multiple realizability the interface that is claimed to be multiply realized—if it is even an interface that is being claimed to be multiply realized—is generally not clearly defined.

Multiple realization is a flawed concept with respect to naturally occurring entities. It occurs only when there is a specification to multiply realize, but there are no specifications in naturally occurring entities. Nonetheless, properties converge to similar results because they must solve the same environmental problems (flying, for example), because they often derive from a common starting point, and because they exist in the same environment.

What is this saying? Do we really know enough about how minds work to talk about “a single mental kind (property, state, event)”? How can one construct an argument about something when we know so little about that subject matter? Worse, how can one build a philosophical position on the basis of such an argument? Do we have any confidence that we can all agree on what is a mental kind and what isn’t? I can’t imagine that we do. So how can one then talk about whether they are multiply realizable? Besides, even if we eventually do agree about what we are talking about, isn’t the question of whether it is multiply realizable an empirical question and not a theoretical one?

What reason do we have for believing that a wide variety of physical properties, etc. can all realize the same pain? First of all, what do we mean by “the same pain”? Who is to say that one pain is the same as another—even if we allow that two pains may not necessarily be two pains but may possibly be “the same pain”? Besides, might that not differ from person to person? One person may be more self-aware than another. The latter may not be able to distinguish a pin in the finger from a knife cut on the toe. Another may.

What about visual illusions? Is a visual illusion an example of multiply realized vision? Or what if I prick you with a pin and then prick you in exactly the same place with a different pin. Is that multiply realized pain? If so, so what? If it’s not, then what about a pin and a needle? What about a pin and a needle but the pricks are a micro-millimeter apart from each other? What if one person can actually tell the difference and another can’t? What if I modify the example so that two internal pains feel exactly the same, i.e., stimulate in exactly the same way the portion of the brain that is responsible for our subjective experience of pain? Are they an example of multiply realized pain? If not, if two different portions of the brain have to feel exactly the same for one to say that pain is multiply realized, what if one of two people can distinguish one from the other and the second can’t? So what? What if our brains were such that stimulating two different portions produced the same subjective experience? So what?

That seems to me to be the sort of mistake that appeals to multiple realizability continually make: they take two different things, generalize them in a way that claims to make them the same, and then argue that since they are the same, some other conclusion can be drawn. But that’s not valid.
How can one build a philosophical position on the basis of something so loosely defined? Yet people have been writing about this sort of thing for decades—and apparently no one has said that it’s all based on words whose meaning we barely understand.

This isn’t to say that multiple realization isn’t possible. Given any non-trivial computer program, I can guarantee I can produce a different computer program with the same input/output characteristics. So the program is multiply realized. So what? On the other hand, the two programs probably won’t run in exactly the same amount of time. Certainly the two programs won’t produce the same instruction trace.
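That guarantee is easy to make good on. A minimal illustration of my own (not from the paper): two programs with identical input/output behavior whose internal state transitions, and hence instruction traces, differ completely.

```python
def sum_iterative(n):
    """n additions over n loop iterations."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    """One multiplication and one division: a different computation entirely."""
    return n * (n + 1) // 2

# Same input/output "theory", multiply realized; the traces differ.
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(500))
```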

[Computer scientists have a definition of types, which are an operational version of kinds.]

Kind: a collection of entities that cannot be distinguished one from another. Then, more broadly, a collection of entities with labels, like the same genome -- but not using arbitrary attributes as labels, whatever that means. Even hydrogen has the label one proton and one electron but any number of neutrons. All protons are indistinguishable. Nature does not make kinds "on order." The kinds are resultant, not made to order.
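The bracketed note about types can be made concrete. A hedged sketch (my own illustrative class, not from the paper): membership in the kind is fixed by a label (one proton, one electron), while unlabeled attributes (neutron count) vary freely across members.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hydrogen:
    neutrons: int        # varies across isotopes; not part of the kind's label
    protons: int = 1     # the labels that fix membership in the kind
    electrons: int = 1

protium = Hydrogen(neutrons=0)
deuterium = Hydrogen(neutrons=1)

# Both belong to the kind; the kind is resultant from the labels,
# not a list drawn up in advance.
assert isinstance(protium, Hydrogen) and isinstance(deuterium, Hydrogen)
assert protium != deuterium   # same kind, distinguishable members
```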

Fodor, “Special Sciences: Still Autonomous After All These Years,” Noûs, Vol. 31, Supplement: Philosophical Perspectives, 11, Mind, Causation, and World (1997), pp. 149–163.

p. 159. Kim is wrong about what’s wrong with jade. … Kim is also wrong about the analogy between jade and pain. … Kim’s picture seems to be of the philosopher impartially weighing the rival claims of empirical generality and ontological transparency, and serenely finding in favor of the latter. But that picture won’t do. Here, for once, metaphysics actually matters.

p. 154. Kim almost sees this [whether it matters whether a property is disjunctive or multiply realizable] in the closing sections of his paper. But then he gets it wrong: fatally in my view.

What about dirigibles, airplanes, missiles, and hang gliders? Are these multiple realizations of flying? One could argue that they are: we often use that term when talking about what they do. But what value do we get out of doing so? What too many of these papers seem to do is to attribute an abstract reality to what is common among them. Now I want to attribute a reality to the capability of propelling oneself through air. But that doesn’t mean that two ways of doing that are part of a common abstraction. That’s an oversimplification, and the distinction seems to have been lost.

The bigger problem seems to me to be the looseness of thought—which is strange because philosophy is specifically supposed to be about thinking things out carefully. For example, at the start of “Still Autonomous” Fodor says that Kim is prepared to agree (at least for the sake of argument) that (1) psychological states are multiply realized (MR) and (2) MR states are ipso facto unsuitable for reduction. How can one be so carefree in talking about mental constructs? Do we really know enough about psychological states that we feel comfortable using intuition about them as the basis for a careful philosophical analysis of what the world is like?
Kim suggests that the following are the central claims about emergence.

1. Emergence of complex higher-level entities: Systems with a higher level of complexity emerge from the coming together of lower-level entities in new structural configurations (the new “relatedness” of these entities).

2. Emergence of higher-level properties: All properties of higher-level entities arise out of the properties and relations that characterize their constituent parts. Some properties of these higher, complex systems are “emergent”, and the rest merely “resultant”.

3. The unpredictability of emergent properties: Emergent properties are not predictable from exhaustive information concerning their “basal conditions”. In contrast, resultant properties are predictable from lower-level information.


4. The unexplainability/irreducibility of emergent properties: Emergent properties, unlike those that are merely resultant, are neither explainable nor reducible in terms of their basal conditions.

5. The causal efficacy of the emergents: Emergent properties have causal powers of their own – novel causal powers irreducible to the causal powers of their basal constituents. [We saw this one earlier.]

But what about causation?

The discussion so far has taken a very naïve and intuitive view of causation: something can be said to cause something else. We took Kim’s requirement of causal efficacy at face value. The real question concerns the sorts of things that can be built. Shalizi (1998) put it nicely.

Instead of asking how to carve Nature at its joints, we ask why Nature has those particular joints—or even has joints at all—and is not (to continue with the metaphor) a single undifferentiated hunk of inharmoniously quivering meat. Somewhere, as quantum field theory meets general relativity and atoms and void merge into one another, the fundamental rules of the game are defined. But the rest of the observable, exploitable order in the universe—benzene molecules, PV = nRT, snowflakes, cyclonic storms, kittens, cats, young love, middle-aged remorse, financial euphoria accompanied with acute gullibility, prevaricating candidates for public office, tapeworms, jet-lag, and unfolding cherry blossoms—where do all these regularities come from? Call this "emergence" if you like. It's a fine-sounding word, and brings to mind southwestern creation myths in an oddly apt way. But it’s just a label; it marks the mystery without explaining anything.

[The preceding includes some copy editing that does not change the sense.]

2. The four + 1 categories of emergent entities

This section is about entities.

Entities, laws, and mechanisms

Higher level entities are the subject of higher level laws and mechanisms. The interactions among higher level entities and between higher level entities and lower level entities are all determined by the lowest level of operation. But that fact doesn’t preclude the description of mechanisms in terms of higher level entities. In CS, mechanisms consist of objects and operations. At a level of abstraction, operations are atomic with respect to that level of abstraction. That doesn’t preclude the possibility of race conditions at a level of abstraction. Multiple threads can interfere with each other, and their operations can interact below the level of the operation. From Bechtel (2007):


Although philosophers have generally construed reduction as theory reduction, this notion fits poorly with what scientists typically call ‘reduction’. As Wimsatt (1976b) put it: “At least in biology, most scientists see their work as explaining types of phenomena by discovering mechanisms, rather than explaining theories by deriving them or reducing them to other theories, and this is seen as reduction, or as integrally tied to it.” … A central feature of mechanistic explanations, and the one that makes them reductive, is that they involve decomposing the system responsible for a phenomenon into component parts and component operations. Given that parts and their operations are at a lower level of organization than the mechanism as a whole, mechanistic explanations appeal to a lower level than the phenomenon being explained. For most scientists and non-philosophers, such appeals to lower levels are the hallmark of reduction. As we will see, though, lower-level components of a mechanism do not work in isolation and do not individually account for the phenomenon. Rather, they must be properly organized in order to generate the phenomenon. The most important feature of mechanistic explanation to bear in mind is that it seeks to explain why a mechanism as a whole behaves in a particular fashion under specific conditions. This strategy in no way undermines the reality of the phenomenon being explained; rather, it begins by treating the phenomenon as something that really occurs when the mechanism operates in a particular set of environments.

This explanation is perfectly consistent with the Game of Life Turing machine. One understands how it works by looking at the components that make it up. These components are Game of Life patterns. One understands how these work by looking at the lowest level mechanisms, the rules that turn cells on and off.
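The cell-level rules just mentioned fit in a few lines of code. The sketch below is mine, not the paper's; it implements the standard Conway rules (a dead cell with exactly three live neighbors is born; a live cell with two or three live neighbors survives).

```python
# The lowest-level mechanism of the Game of Life: the rule that turns
# cells on and off. Every higher-level pattern, up to the Game of Life
# Turing machine, is implemented by nothing but this rule.

from collections import Counter

def step(live):
    """One generation. `live` is a set of (x, y) coordinates of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Born with exactly 3 live neighbors; survive with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A blinker: a period-2 oscillator. The oscillation is a pattern-level
# description, implemented entirely by the cell-level rule above.
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(blinker))                   # the vertical phase of the blinker
print(step(step(blinker)) == blinker)  # True
```

Nothing in `step` mentions blinkers, gliders, or Turing machines; those are descriptions at higher levels of abstraction.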
An important feature of different levels of description, especially in dynamic systems, is that the entities at one level have different life spans than those at another. (This is true even of man-made static entities if they are embedded within a social context that maintains them.)

             Naturally occurring                Human designed
Static       Atom, molecule, solar              Table, boat, house, car, ship,
             system, …                          geo-stationary satellite, …
Dynamic      Hurricane, biological              Designed social group such as a country
             organism, biological               government, a corporation, a poker club,
             group, …                           the ship of Theseus, geo-stationary
                                                satellite, …

Entity: persistent material pattern. But exclude flames, explosions, etc. by some means.


Besides the previous, computers offer an experimental entity laboratory, an environment within which entities can be created without having to worry about energy or resources. In all cases the entities are emergent through the implementation of persistent patterns of existing entities.

In naturally occurring emergence the abstraction that is implemented is typically messy (quote Zimmer’s description of DNA). It is not designed, but usually it survives if it is sufficiently stable or powerful. In human designed emergence the abstraction typically comes first. Then we attempt to implement it—with varying degrees of success. Philosophical functionalism focuses on the second of these. It is always looking at abstract specifications and then noting that those abstract specifications can be implemented in any of a number of ways. That’s fine, but in focusing exclusively on the abstraction-to-implementation side, it ignores the implementation-to-abstraction side—and in so doing misses the source from which abstraction springs.

In Section xx I discuss the various types of entities and point out that naturally occurring entities are not designed by nature in anything like the way man-made entities are designed. When we design artifacts, we generally have some idea what we want the artifact to do. To a greater or lesser extent, we conceptualize the abstraction—and often write it down as a specification—before we create a design that we hope will realize it. Obviously this is an exaggeration. Most artifacts have their specifications changed in the course of their development as we learn more about what we are building. But the more important point is that for the most part when we design something we have in mind what we want the designed entity to do. Since nature doesn’t have a mind, it can’t have anything in it. Nature’s designs realize abstractions by chance. The ones that realize useful abstractions persist. The others don’t.
So there is a fundamental difference between how emergence works for naturally occurring and man-made entities. Higher level laws are about higher level entities. So to talk about them, we must know what we mean by higher level entities. But since higher level entities don’t supervene conveniently over lower level entities, we have problems with straightforward reduction. So what sort of reduction do we have? Perhaps here’s where synchronic and diachronic reduction come in.

There is also the issue of how lower level things are put together (binding forces). It’s the result of putting things together that has the higher level properties. Putting things together imposes a constraint on them.

Static entities

Static entities are those that exist by virtue of being in an energy well.
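A concrete picture of an energy well may help here. The Lennard-Jones potential below is my choice of illustration (a standard model of the interaction energy between two neutral atoms); the text names no particular potential.

```python
# A sketch of an "energy well": the Lennard-Jones interaction energy
# between two atoms, as a function of their separation r.

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Interaction energy of two atoms at separation r."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Far apart, the pair's energy approaches 0. At the bottom of the well
# the energy is negative: the bound pair has less energy (and hence, by
# E = mc^2, less mass) than its components considered separately.
rs = [0.9 + i * 1e-5 for i in range(110001)]    # scan separations 0.9..2.0
well_depth = min(lennard_jones(r) for r in rs)
print(round(well_depth, 6))              # -1.0, reached near r = 2**(1/6)
print(lennard_jones(10.0) > well_depth)  # True: the separated pair sits higher
```

To pull the pair apart one must supply the well depth in energy, which is why the structure persists.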


Entities are nature’s way of having and remembering ideas

In this section we step back from thought externalization to discuss what the thoughts that are being externalized might be about. In particular, we discuss entities and the relationship between entities and ideas. We define what we mean by entity in the following section. Our conclusions will be as follows.
• Entity formation, i.e., the creation of naturally occurring entities, is an objectively real phenomenon by means of which nature creates entities with new properties and functionalities.
• To a great extent, idea formation is a parallel process by means of which we (i.e., human beings) create concepts that correspond to real or imagined entities and their properties.

We as human beings create ideas as a way both to understand nature and to build upon it.
• When we create ideas and attempt to match them to reality we are doing science.
• When we create ideas and attempt to match reality to them we are doing engineering.

Entities

As discussed elsewhere [8] and [9] there are two kinds of entities: static and dynamic.
• Static entities—for example, atoms, molecules, and solar systems—maintain their structure (and hence their reduced entropy) because they exist in energy wells—and hence have less mass as an aggregate than their components considered separately.2
• Dynamic entities—for example, biological organisms, social and political organizations, and (strikingly) hurricanes—maintain their structure (and hence their reduced entropy) by using energy they import from outside themselves—which makes them (famously) far from equilibrium. Because of the flow of imported energy, dynamic entities have more mass as an aggregate than the combined mass of their components considered separately.3

2 Paul Humphreys [10] suggested a similar notion, which he called fusion. The following is Timothy O’Connor’s summary [11] of Humphreys’ position. “[Emergent properties] result from an essential interaction [i.e. fusion] between their constituent properties, an interaction that is nomologically necessary for the existence of the emergent property.” Fused entities lose certain of their causal powers and cease to exist as separate entities, and the emergents generated by fusion are characterized by novel causal powers. Humphreys emphasizes that fusion is a “real physical operation, not a mathematical or logical operation on predicative representations of properties.”

Entities and specifications

Entities have what are often called emergent properties, which are defined at the level of the entity itself. That a government (a dynamic entity) is democratic or that a diamond (a static entity) is hard are properties defined at the level of the government or the diamond. They are not properties of the components of a government or diamond.

Describing something in terms of its externally observable properties is common in both software and systems engineering. In computer science, describing something independently of its implementation is called a specification. The specifications of abstract data types and early attempts to axiomatize software are early examples. It is now commonplace to write specification documents to describe desired software systems prior to building them. Software specifications may be formal (i.e., expressed in a formal language—which is very difficult to carry out in detail) or informal (i.e., expressed in a natural language—which is common practice) as in a natural language specification of a software system’s API.4 Software specifications describe the behavior of software without prejudicing its implementation. In systems engineering, the description of a system in terms of its observable properties is called a requirements specification—again a description of a system in terms that do not constrain the implementation of those properties.

Familiar as we—as software and system developers—may be with using specifications to describe software or engineered systems, it may nevertheless seem strange to talk about naturally occurring entities such as diamonds or biological organisms in such an abstract way. One wonders how it is possible to discuss the properties of an entity independently of its components. Doesn’t its internal organization matter? Do such entities spring into existence fully formed?
How can something that seems altogether new—like a bird—and that has new properties—like the ability to fly, a property that seems to be defined in terms of the entity itself—appear, apparently from nowhere?
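The specification idea discussed above can be sketched in code. The names `OxygenCarrier`, `Hemoglobin`, and `deliver` are invented here purely for illustration; they do not come from the text.

```python
# A specification describes behavior independently of implementation.

from abc import ABC, abstractmethod

class OxygenCarrier(ABC):
    """The specification: bind oxygen, then release what was bound."""

    @abstractmethod
    def bind(self, o2: int) -> None: ...

    @abstractmethod
    def release(self) -> int: ...

class Hemoglobin(OxygenCarrier):
    """One implementation; clients of OxygenCarrier never see how."""

    def __init__(self):
        self._bound = 0

    def bind(self, o2: int) -> None:
        self._bound += o2

    def release(self) -> int:
        o2, self._bound = self._bound, 0
        return o2

# Client code written against the specification, not the implementation.
def deliver(carrier: OxygenCarrier, o2: int) -> int:
    carrier.bind(o2)
    return carrier.release()

print(deliver(Hemoglobin(), 4))  # 4
```

Any other class satisfying the `OxygenCarrier` specification could be substituted without changing `deliver`, which is the sense in which a specification does not prejudice its implementation.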

3 Speaking poetically one might refer to the energy flowing through a dynamic entity as its soul or spirit. When the energy stops flowing, the entity dies. From this perspective a soul or spirit has mass.

4 An Application Programming Interface (API) is the collection of operations that may be performed on a software system via calls to the system by other software. For each API element one specifies how that element may be called, what parameters it accepts, the effects of the call on both the system (i.e., how the system’s conceptual model will be affected by the call) and the parameters, and the results returned if any.


Because this seems so mysterious, one may be tempted to look for hitherto unknown mechanisms for self-organization—or in desperation even intelligent design. This is a distraction. There is nothing mysterious about how entities form. Naturally occurring static entities form as a result of well understood physical laws: atoms are created from elementary particles; molecules form from atoms; etc. Naturally occurring dynamic entities also form as a result of natural processes. Governments form when people create them—either explicitly or implicitly. Hurricanes form when the atmospheric conditions are right. Self-organization is not the point. The marvel of entities is not in some seemingly magical process of self-organization; the marvel is that entities exist at all and that they have properties and behaviors that may be described independently of their implementations.

Entities, their properties, and naturally occurring designs

If the question is where the new properties that we attribute to entities come from, the answer is that these “new properties” are really nothing more than ideas in our minds. Properties as ideas don’t exist in nature. Entities are what they are no matter what properties we attribute to them. The idea of a property doesn’t exist in the mind of nature. Nature doesn’t have a mind. This is not to say that an entity’s new properties are fictitious. Hemoglobin, for example, can bind to, transport, and release oxygen. This property, while true of hemoglobin, is not a label one finds attached to hemoglobin molecules. There is no little FTC-approved5 tag attached to each hemoglobin molecule that says: certified capable of carrying oxygen. Yes, hemoglobin carries oxygen. But the conceptualization of hemoglobin as having the property of being able to carry oxygen is an idea in our minds, not in some universal mind that tracks the properties of all entities. Nonetheless, hemoglobin does carry oxygen.
And because hemoglobin carries oxygen, a certain form of life—creatures like us—was able to establish and maintain itself on earth.

Designs and levels of abstraction

Suppose a government hired a contractor to tell it how the country actually worked—from the government on down. The contractor has been asked to produce a complete engineering design description of the country as it currently exists. Presumably, the description would have to include the equivalent of engineering drawings of the government, the components of the government, the components of those components, etc. Since human beings are components of the government, we would eventually find ourselves having to describe the role of hemoglobin molecules in human biochemistry.

5 The Federal Trade Commission (FTC) regulates commerce in the United States.


Each level of such a description would correspond to what in computer science is called a level of abstraction. For the sake of this example, let’s suppose that hemoglobin molecules are black-box components—i.e., biological piece parts—which can be included in our design without our having to build them ourselves. All we care about is their functionality, i.e., their ability to carry oxygen. Thus all one cares about with respect to a hemoglobin molecule is its specification, not how it implements the functionality described by its specification.

Similarly, when describing how the government functions, one would treat the description of the kinds of things that people can do as opaque. As far as the government’s functioning is concerned one doesn’t care that people keep themselves alive through the use of hemoglobin molecules but whether legislators are able to cast votes, for example. Even if in describing a government one were responsible for describing the design of the people who participated in it—and hence had to understand the role of hemoglobin in keeping people alive—when thinking about the functioning of the government itself, one would not be concerned with that aspect of how people are designed.6

The important point here is that the design of one level of abstraction, e.g., a government, is expressed in terms of other levels of abstraction, e.g., people, whose designs are expressed in terms of still other levels of abstraction, e.g., hemoglobin. But each level of abstraction can (and should) be documented separately, in terms of other abstractions. This is in contrast to the alternative whereby one would describe the top level design in terms of the lowest level elements. In this case, it would mean that the design of a government would be expressed in terms of biological piece parts such as hemoglobin. Clearly that makes no sense.
The structure of a government is defined in terms of roles that are filled by human beings, not by collections of biological piece parts. One can’t explain the role of legislators in voting on prospective laws by talking about biological piece parts.

Designs of all naturally occurring entities7 when given in terms of such increasing levels of abstraction are bottom-up designs. The entities at the level below any given level already exist. One is interested only in how they come together to accomplish what they do at the next higher level. Although the properties and abstractions of naturally occurring designs are not made explicit by nature—nature doesn’t document her designs—as they would be were they documented by well-trained engineers, it’s clear that nature’s designs are best understood in terms of such levels of abstraction.

6 This, of course, is a gross simplification. Governments are concerned, for example, with issues of air quality, which cannot be understood without knowing that human beings rely on oxygen to survive. This is one reason that modeling is so problematic.

7 We count governments as naturally occurring entities. Governments may be understood as more sophisticated versions of long-standing naturally occurring animal groupings such as flocks, herds, tribes and (bacterial) colonies.


• A wolf pack is a pack of wolves, not an aggregation of wolf organs and other biological piece parts.
• A wolf is a system of organs and other elements, not an aggregation of molecules and atoms.
• Hemoglobin is a structured assembly of protein subunits and other components, not an aggregation of elementary particles such as electrons and quarks.
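The bullet points above can be sketched as code in which each level's interface mentions only the level directly below it. All class names here are illustrative stand-ins, not a serious biological model.

```python
# Levels of abstraction: each level is defined in terms of the level
# just below it, never in terms of the bottom-most parts.

class Organ:
    """Defined, one level down, in terms of molecules (elided here)."""

class Wolf:
    """A system of organs, not an aggregation of molecules and atoms."""
    def __init__(self, organs):
        self.organs = organs

class WolfPack:
    """A pack of wolves; its interface never mentions organs at all."""
    def __init__(self, wolves):
        self.wolves = wolves

    def size(self):
        return len(self.wolves)

pack = WolfPack([Wolf([Organ(), Organ()]) for _ in range(5)])
print(pack.size())  # 5
```

Nothing at the `WolfPack` level refers to organs; that information is hidden inside the `Wolf` level, just as the text argues.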

Nature is an engineer whose designs make sense only when understood in terms of multiple levels of abstraction. Nature accomplishes this feat through the use of entities. It is entities that carry identity, properties, and functionality. But entities are not labeled as such, and entities do not have tags attached to them describing their properties and functions. By embodying what we term properties and functions, entities serve as a means for nature to fix and record levels of abstraction—which can be used to build higher levels of abstraction. In other words, entities are how nature remembers its constructions.

The role of entities in naturally occurring designs

Another way of putting this is that entities are characterized by reduced entropy. Reduced entropy marks a pattern that persists in time. Patterns often have properties and functions.8 Some properties and functions can be used to build additional patterns. Thus entities are a means by which patterns persist in time, i.e., are “remembered.” Patterns that persist in time (and are mutually accessible) are available to be combined to create additional patterns.

The notion of “an entity” is what one might call a design meta-construct. Like a class or object in an object-oriented programming language, the notion of “an entity” refers to a kind of design construct, not to any particular element in any particular design. As a design meta-construct, the notion of “an entity” plays multiple important roles in naturally occurring designs. As the following paragraphs elaborate, entities (a) allow nature to build levels of abstraction; (b) provide nature a way to preserve patterns over time; and (c) serve as nature’s memory.

Entities allow nature to build levels of abstraction. Once a level of abstraction has been constructed (as an entity), nature can then build new levels of abstraction by combining existing levels of abstraction and exploiting their properties and functionality.
As indicated above, it simply makes no sense to speak, for example, of a colony of bacteria as if it were a colony of cell organelles and other cellular elements. In order for nature to build the level of abstraction colony-of-bacteria, nature first had to build the bacterium level of abstraction.

8 In section 3.3 we said that properties as ideas don’t exist in nature. Patterns certainly have properties (and can perform functions) that we can describe. It is these descriptions (and the names we apply to these descriptions) that exist only in our minds.


So even though nature does not label her levels of abstraction the way we as human designers do—there are no tags saying “bacterium” attached to bacteria—the levels of abstraction and the properties and functionalities that they implement are real nevertheless.

As argued above, a level of abstraction is a specification, a description of something from a behavioral and external perspective. Another way of putting it is that a level of abstraction is a specification (or conceptualization) of a set of properties and functionalities. Informally we might refer to such a conceptualization as an idea. In this sense, entities are nature’s way of having an idea. Put another way, if nature had a mind, entities are what it would use to externalize its ideas.

Entities preserve useful patterns of relationships over time. Organizing two or more entities into a structure of some sort often creates new functionality. Hemoglobin, for example, consists of four protein subunits. They must be combined into a larger organization before they can transport oxygen. An entity is such a persistent stable structure of components.

Entities serve as nature’s memory. If we think of memory as the ability to retain structure, i.e., reduced entropy, over time, entities provide that function for nature. Both static and dynamic entities have less entropy (are more structured) than their components would have on their own. The creation of an entity is the creation of a means whereby reduced entropy persists over a period of time.

Consider the difference between the face one may see in a cloud and a similar face on a human being. The face in a cloud is fleeting; no mechanism exists to retain it. The face of a living human being persists. It changes as the person changes, but it persists as a face over time. Entities with their built-in mechanisms for persistence provide a way for nature to retain structures that are imposed over the elements that make up the entity.
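The claim that entities have reduced entropy (are more structured) can be made concrete with a small sketch. The measure below, the Shannon entropy of length-2 blocks, is my choice of illustration; the text does not specify a measure.

```python
# A persistent pattern has low block entropy; a random arrangement of
# the same symbols has high block entropy.

import math
import random
from collections import Counter

def block_entropy(s, k=2):
    """Shannon entropy (bits) of the distribution of length-k blocks of s."""
    blocks = [s[i:i + k] for i in range(0, len(s) - k + 1, k)]
    n = len(blocks)
    counts = Counter(blocks)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

structured = "ab" * 500                          # a pattern that persists
random.seed(0)
rand = "".join(random.choice("ab") for _ in range(1000))

print(block_entropy(structured) == 0)  # True: perfectly ordered
print(block_entropy(rand) > 1.5)       # True: near the 2-bit maximum
```

The two strings use the same symbols in the same proportions; the difference the measure detects is exactly the imposed structure.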
Static entities impose structures over fixed collections of components. Dynamic entities impose structures over changing collections of components. The atoms in the body of a biological organism change over time. The members of a social organization change over time. The citizens of a country change over time. With the development of dynamic entities nature created a way to remember structures which are separate from the components that the structures organize—quite a trick.

The reductionist blind spot

Isn’t it obvious that higher level entities are composed of lower level entities? Why even bother to say that a flock of birds consists of birds? Extreme reductionism claims that all explanations can be formulated in terms of the fundamental laws of physics as they pertain to elementary particles. Steven Weinberg [12] puts it this way.


[Reductionism is] the view that all of nature is the way it is (with certain qualifications about initial conditions and historical accidents) because of simple universal laws, to which all other scientific laws may in some sense be reduced. … Every field of science operates by formulating and testing generalizations that are sometimes dignified by being called principles or laws. … But there are no principles of chemistry that simply stand on their own, without needing to be explained reductively from the properties of electrons and atomic nuclei [emphasis added], and in the same way there are no principles of psychology that are free-standing, in the sense that they do not need ultimately to be understood through the study of the human brain, which in turn must ultimately be understood on the basis of physics and chemistry.

It is this view that the notion of entities developed in this paper disputes. Consider two examples: a solar system and a living biological organism. We claim that neither can be understood strictly in terms of the principles of physics.

For one thing, neither can even be defined in terms of the principles of physics. How would one define solar system in a definition (or a cascade of definitions) that contained references to nothing but elementary particles and forces? A solar system is not just a collection of elementary particles under mutual gravitational attraction. A solar system consists of one or more stars along with one or more planets orbiting around that star (or those stars). But what is a star, and what is a planet? Neither can be defined without implicitly or explicitly building in the notion of an entity, i.e., patterns that persist in time. Furthermore, if one talks about properties of a solar system, such as the number of its planets, or the length of the year of one of its planets, or whether the orbit of a planet is chaotic, etc., those ideas also rely on the notion of a planet as an entity.
Certainly, stars and planets are made up of elementary particles, and certainly it is the force of gravity, an elementary force, that holds it all together. But it is wrong to say that notions such as a solar system are reducible to terms defined at the level of elementary physics. This is not playing with words. The very notion of a solar system is built on the notion of a star and some bodies orbiting it. If one can’t talk about those bodies as entities, the notion of a solar system has no meaning.

The case for biological organisms is even more striking. How would one define the term alive using concepts from elementary physics? In our view it makes sense to define alive as a property of dynamic entities. A dynamic entity is alive as long as it persists. But of course, unless one includes the notion of entities, and especially dynamic entities, within the realm of elementary physics, that sort of definition is not accessible to the pure reductionist.

At a more concrete level, how would one discuss the mechanism through which oxygen-breathing organisms keep themselves alive? To do so, one must talk about hemoglobin and oxygen molecules, i.e., about entities. The requirement that oxygen be carried from the lungs (how are lungs defined in terms of elementary physics?) to the rest of the body and the story of how that is accomplished

can’t be told in terms of elementary physical particles. It’s not a matter of quarks, electrons, etc.

Entities are real, but forces and causes are epiphenomenal

All the entities involved in describing the role of hemoglobin in keeping biological organisms alive are made up of elementary physical particles. And all the forces involved are elementary physical forces. But the description of the design of biological organisms as dependent on the property of hemoglobin to transport oxygen simply is not a description that can be told in the language of elementary physics. Isn’t this a contradiction?

We are not claiming that the particles and forces of elementary physics are not relevant to either solar systems or biological organisms. They are essential. As we discussed in [9], forces and causality (such as the motion of a hemoglobin molecule along the blood stream) that one might like to attribute to entities (such as the heart) found on levels higher than that of elementary physics are epiphenomenal—i.e., they appear to be separate from their underlying causes but are really just a reflection of those causes seen in a different way. There are no higher level forces: there is no vital force. But if there are no higher level causes—if all apparently higher level causes are epiphenomenal—how can we make the claim that higher level entities are both real themselves and real elements of even higher level designs?

The answer is that higher level entities are real but that interaction among them is epiphenomenal. There are only primitive physical forces. But because entities exist, the patterns that they represent organize those forces to create functionalities that are best described at the level of the entities. Consider hemoglobin again. Hemoglobin transports oxygen. One can’t just say that an aggregation of elementary particles that make up oxygen-bound-to-hemoglobin is carried along.
Furthermore, one must talk about the heart as the source of power that pumps the hemoglobin carried in a stream of fluid through the body along a network of arteries and veins. None of that can be described in the language of elementary physics without implicitly or explicitly importing the notion of entities and their derived functionalities.

Contrary to Weinberg we claim that it is reasonable to describe the functioning of oxygen-breathing organisms in terms of principles of biology because it is at the level of biological entities that biological functionalities come into being. The mechanisms used at the biological level are separate from and cannot be reduced to those of elementary physics. These are mechanisms that are described on the level of blood vessels, oxygenation, pumps, lungs, hemoglobin, etc. The structures that have been built to support oxygen-breathing organisms are new creations in much the same way that the algorithms built into computer software are new creations. Certainly the principles of biology must be implemented by mechanisms that operate on the level of elementary physics. But one could never derive the fact

that biological organisms depend on oxygen-carrying hemoglobin being pumped through the body from the principles of elementary physics. Implementation of the laws of the higher level sciences by those of elementary physics is not the same as reduction of those laws to the laws of elementary physics. (We discuss the difference between implemented by and reducible to in the following section.)

It is entities that serve as the ontological components in terms of which the laws of the higher level sciences are expressed. Entities are physically real, and entities obey laws that must be implemented by—but cannot be reduced to—those of elementary physics. The reductionist blind spot is the failure to see and understand the reality and significance of entities. The reductionist blind spot derives from the confusion caused by the fact that although entities are objectively real, interactions—i.e., forces and causal relationships—among higher level entities are epiphenomenal.

Patterns are implemented by but are not reducible to the elements they organize

The notion of implemented by but not reducible to deserves some attention. A computer program is implemented by the operations defined by the programming language in which it is written. But the functionality of the computer program is not reducible to those operations. Although an algorithm is composed from a set of basic operations, it is neither derivable from nor a logical consequence of those operations. Similarly a musical composition is implemented by the notes of the scale. But the melodies and harmonies of a musical composition are neither derivable from nor reducible to the notes themselves.9

In these cases, as in nature, raw materials and fundamental operations are organized into specific patterns to create something new. The patterns built into such designs are separate from and not reducible to the components and forces that those patterns arrange.
As noted above, one of the roles that entities play is that they are nature's way of preserving patterns over time. As the examples have illustrated, elements arranged in a pattern often have properties that are separate from the properties of the underlying elements. These pattern-level properties are often not even describable in the language used to describe the underlying elements. (One can't describe what it means for a person to breathe if one restricts oneself to terms from elementary particle physics.) This may seem profoundly obvious, but it seems to be a point that people tend to forget.

9 Although algorithms and musical compositions are important (or useful or enjoyable), neither is an entity according to our definition. Algorithms and musical compositions don't persist on their own. Even though all entities embody patterns, not all patterns are entities.
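The "implemented by but not reducible to" relation can be made concrete with a small sketch (my own illustration, not code from the paper). Here Euclid's algorithm is implemented entirely by two primitive operations, yet the property "computes the greatest common divisor" belongs to the arrangement of the primitives, not to either primitive on its own:

```python
# Sketch of "implemented by but not reducible to": Euclid's algorithm
# is built entirely from two primitive operations, yet "computes the
# gcd" is a property of the pattern (the algorithm), not of either
# primitive.

def subtract(a, b):      # primitive operation 1
    return a - b

def greater(a, b):       # primitive operation 2
    return a > b

def gcd(a, b):
    """Euclid's algorithm: a pattern imposed on the primitives.
    Assumes positive integers."""
    while a != b:
        if greater(a, b):
            a = subtract(a, b)
        else:
            b = subtract(b, a)
    return a

print(gcd(48, 18))  # 6
```

Nothing about subtraction or comparison suggests "greatest common divisor"; that property exists only at the level of the algorithm.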


3. Laws from a Computer Science perspective

Loewer summarizes what he says is Fodor's view of the place of the special sciences.

[Each] special science taxonomizes nature into natural kinds in terms of its own proprietary vocabulary. What makes a special science a science is that it contains lawful regularities stated in its proprietary vocabulary that ground explanations and counterfactuals. [What] makes a special science regularity lawful is a fact that is irreducible to the laws and facts of fundamental physics (and other special sciences). That is, the lawfulness of special science regularities is a fact about the world as basic as and independent of the lawfulness of the laws of fundamental physics. Fodor's view can be illustrated with the help of a souped-up version of Laplace's demon. The demon knows all the physical facts obtaining at all times and all the fundamental dynamical laws of physics, has perfect computational powers and also a "translation" manual connecting special science and physical vocabularies. The demon is thus able to tell which micro physical situations correspond to, for example, a philosophy conference and is able to determine which generalizations about philosophy conferences are true and which are false. It can do the same for all the special sciences. It will also be able to tell which special science regularities hold under counterfactual initial conditions and so which hold in all physically possible worlds (i.e. all the worlds at which the fundamental laws of physics obtain). But on Fodor's view the demon will not be able to discern which regularities are laws. Because of this "blindness" the demon will be missing those counterfactuals and explanations that are underwritten by special science laws and so will not have an understanding of special science phenomena.
Although the demon will be able to predict and explain the motions of elementary particles (or whatever entities are physically fundamental) from the state of the universe at any time, and so could have predicted the stock market crash of 1929, it will not understand why it crashed. To do that it would need to know economics.

As the extract from Loewer indicates, philosophers are quite concerned, e.g., (Carroll 2008), about when regularities are the result of laws and when they are accidental. The issue is often posed in terms of the criteria that statements must satisfy to be considered laws. In the context of emergence and reductionism, Howard suggests that the appropriate focus should be not on statements but on models.

[A] chief disadvantage of thinking about the relationship between different levels of description in terms of intertheoretic reduction … is the restriction to theories represented syntactically as sets of statements or propositions, central among which are statements of laws, for there is reason to think that many important scientific theories–evolution is an often cited example–are not best understood in this way. [An alternative is] a semantic view of theories, whereupon a theory is conceived as a set of models.

A computer science perspective doesn't face that problem. A computer is understood to have operations that transform one state of the computer into another state. The operations are the "laws" that hold in a computer universe. An easy-to-understand example is the Game of Life. The rules that characterize which cells will be born, which will live, and which will die constitute the laws of a Game of Life universe. There is no issue about discovering laws or of wondering whether there

are laws that bring about particular regularities. The laws are known, and that's all there is to it. The only thing that happens on a Game of Life grid is that cells go on and off according to the Game of Life rules or laws.

The more important question is whether the laws can be used to accomplish certain results. For example, when Conway first created the Game of Life he didn't know whether there was a way of arranging initial conditions so that the number of live cells would increase indefinitely. Of course that question was soon answered with the invention of what's come to be called a glider gun, a configuration of cells that traverses a cycle in the course of which a glider is generated. Each time around the cycle a new glider is created, thereby showing that the number of live cells on a grid can be made to increase indefinitely.

A related question is to determine just what a particular sequence of transitions will do. There was once some hope that the field of program verification would develop to the point that one could formally prove that a sequence of transitions would produce a given result. That goal has been found to be beyond our reach—at least for now. In general one cannot always guarantee the result to be produced by a sequence of transitions. There are both theoretical and practical problems. Some questions about sequences of transitions are simply undecidable. Others are so complex that it is infeasible to attempt to answer them.

Furthermore, computer programs no longer consist of single sequences of transitions. Multiple asynchronous sequences of transitions occur. The results produced by their interaction are even more difficult to formalize than the results produced by a single sequence of transitions. Nonetheless, one has confidence that there is always a mechanism that explains in terms of fundamental computer operations how a particular result was produced.
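The claim that a Game of Life universe's laws are simply known can be made concrete. The sketch below is my own minimal implementation of the standard rules, not code from the paper; the step function is the complete set of laws, and running a glider under those laws exhibits a higher level regularity (period-4 diagonal motion) that the laws themselves never mention:

```python
from collections import Counter

def step(live):
    """The complete laws of a Game of Life universe, acting on a set
    of live (x, y) cells: a dead cell with exactly 3 live neighbors is
    born; a live cell with 2 or 3 live neighbors survives."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider. After 4 applications of the laws the identical pattern
# recurs, translated one cell diagonally -- a regularity the laws
# themselves say nothing about.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g4 = glider
for _ in range(4):
    g4 = step(g4)
print(g4 == {(x + 1, y + 1) for (x, y) in glider})  # True
```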
Depending on the model, the laws can be higher or lower level. If one writes a program using the basic instructions of the computer (which no one does), it is those operations that constitute the laws. If one writes a program in a higher level programming language (such as Java or C++), it is the primitive operations available in those languages that constitute the laws.

Most computer programs make use of libraries. When that's the case, the operations provided by the programs in the library become part of the laws.

How does this relate to the philosophical issue of laws of nature? The philosophical issue tends to focus on statements—which of them represent laws. In a computer world, the laws are known; statements that describe them are for the convenience of the reader, not a way to pin them down. What pins down the laws in a computer world is the computer model itself.

So what if we used that approach to characterize laws of nature? What if we developed computer models that are intended to reflect our understanding of how nature works? The laws that we write into the model are the intended laws. But those laws are not expressed as statements in predicate calculus; they are expressions in whichever programming language we used to build our model.
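The way library operations join the laws can be sketched with a hypothetical two-level toy (all names here are illustrative, not from the paper):

```python
# A two-level "universe." At the lower level the only laws are inc and
# dec. A "library" defines add in terms of them. A program written
# against the library treats add as a law and never mentions inc or dec.

def inc(n):          # lower-level law
    return n + 1

def dec(n):          # lower-level law
    return n - 1

def add(a, b):       # library operation: a higher level law,
    while b > 0:     # implemented by -- but not stated in terms
        a, b = inc(a), dec(b)  # visible to -- the program below
    return a

def double(n):       # program written at the library level
    return add(n, n)

print(double(21))  # 42
```

From the standpoint of `double`, `add` simply is one of the laws of its universe; that it is itself implemented by `inc` and `dec` is invisible at that level.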

4/3/2018 46 NOTES NOTES NOTES NOTES NOTES NOTES NOTES NOTES

Laws obtain on multiple levels. It doesn't matter that the laws on one level explain those that hold on another. The higher level laws still hold. So why not just recognize them? It is simply a matter of recognizing that higher level entities exist and that relationships hold among them. Those relationships are explainable by lower level considerations, but the relationships hold nevertheless.

4. Examples of emergence

Bedau and Humphreys:

One of the best ways to get a feel for emergence is to consider widely cited core examples of apparent emergent phenomena. The examples involve a surprising variety of cases. One group concerns certain properties of physical systems. For example, the liquidity and transparency of water sometimes are said to emerge from the properties of oxygen and hydrogen in structured collections of water molecules. As another example, if a magnet (specifically a ferromagnet) is heated gradually, it abruptly loses its magnetism at a specific temperature—the Curie point. This is an example of physical phase transitions, which often are viewed as key examples of emergence. A third example involves the shape of a sand pile. As grains of sand are added successively to the top of the pile, the pile forms a conical shape with a characteristic slope, and successive small and large avalanches of sand play an important role in preserving that shape. The characteristic sand pile slope is said to emerge from the interactions among the grains of sand and gravity. Life itself is one of the most common sources of examples of apparent emergence. One simple case is the relationship between a living organism and the molecules that constitute it at a given moment. In some sense the organism is just those molecules, but those same molecules would not constitute an organism if they were rearranged in any of a wide variety of ways, so the living organism seems to emerge from the molecules. Furthermore, developmental processes of individual organisms are said to involve the emergence of more mature morphology. A multicellular frog embryo emerges from a single-celled zygote, a tadpole emerges from this embryo, and eventually a frog emerges from the tadpole. In addition, evolutionary processes shaping biological lineages also are said to involve emergence.
A complex, highly differentiated biosphere has emerged over billions of years from what was originally a vastly simpler and much more uniform array of early life forms. The mind is a rich source of potential examples of emergence. Our mental lives consist of an autonomous, coherent flow of mental states (beliefs, desires, memories, fears, hopes, etc.). These, we presume, somehow emerge out of the swarm of biochemical and electrical activity involving our neurons and central nervous system. A final group of examples concerns the collective behavior of human agents. The origin and spread of a teenage fad, such as the sudden popularity of a particular hairstyle, can be represented formally in ways similar to a physical phase transition, and so seem to involve emergence. Such phenomena often informally are said to exhibit ‘‘tipping points.’’ Another kind of case is demonstrated in a massive traffic jam spontaneously emerging from the motions of individual cars controlled by individual human agents as the density of cars on the highway passes a critical threshold. It is interesting to speculate about whether the mechanisms behind such phenomena are essentially the same as those behind certain purely physical phenomena, such as the jamming of granular media in constricted channels. …


5. One version of emergence

Do the glider gun as an example of the interaction of higher level entities.

The version of emergence that I want to formulate differs from most others in that it includes an explicit intermediate construct. In many formulations of emergence one imagines that lower-level functionality is somehow directly (or mysteriously) transformed into higher-level functionality. This approach leads to the sort of multi-determinant dilemma that Kim has repeatedly pointed out.

My alternative is to suggest that lower level functionality (and entities) may create compound entities and that those compound entities may often have properties and capabilities that are autonomous from those at the lower level. As an example consider an object that floats in water. An object floats when the water it displaces weighs at least as much as the object itself. Some objects are naturally buoyant because they are made of materials that are less dense than water. But let's consider only objects that are made of materials that are more dense than water but that still float—objects with a concave shape that exclude water from an empty interior space and use that interior space as part of their displacement volume.

How does one relate the properties of the (lower level) materials of which such an object is made to the object's ability to float? Since the construction materials are denser than water, one can't map any sort of lower level buoyancy to the buoyancy of the floating object. So the ability to float is not in any traditional sense directly reducible to lower level properties. The object floats (a) because of its shape and (b) because of the ability of the materials of which it is made to exclude water from its interior.

The ability of such an object to float is, I would claim, emergent. It is a property of the object (as a higher level construct), and that ability is not directly attributable to properties of the materials of which it is composed.
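The buoyancy argument can be checked numerically. In this sketch the specific figures are illustrative (steel at roughly 7850 kg/m^3, water at 1000 kg/m^3), but the arithmetic shows how a shape made of sinking material floats:

```python
# A numeric sketch of the floating-object example. A hollow steel
# shell floats because its enclosed empty space counts toward its
# displacement, even though steel itself is far denser than water.

WATER = 1000.0      # density of water, kg/m^3

def floats(mass_kg, displacement_m3):
    """Archimedes: an object floats if the water it can displace
    weighs at least as much as the object itself."""
    return displacement_m3 * WATER >= mass_kg

steel_density = 7850.0       # kg/m^3: denser than water
shell_steel_volume = 0.02    # m^3 of actual steel in the hull
enclosed_volume = 0.50       # m^3 of excluded (empty) interior

mass = steel_density * shell_steel_volume            # 157 kg
displacement = shell_steel_volume + enclosed_volume  # 0.52 m^3

print(floats(mass, displacement))        # True: the shaped hull floats
print(floats(mass, shell_steel_volume))  # False: the same steel, solid, sinks
```

The difference between the two calls is the shape, not the material: the property "floats" appears only at the level of the constructed object.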
That is, there is nothing about the component materials that would suggest that a construct made of those materials will float.

Some definitions of emergence require that emergent properties not be deducible, "even in principle," from lower-level properties. Chalmers put it this way.

We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain.

It's not clear to me what it means to say that some truths are not deducible (even in principle) from other truths. Chalmers explains in a footnote that he means "that strong emergence requires that high-level truths are not conceptually or metaphysically necessitated by low-level truths." I'm afraid I still don't understand. Is the ability of our example object to float deducible from truths about the

materials of which it is made along with truths about water, buoyancy, etc.? Naively I would think so. After all, the object does float—and we can explain why. So it must be deducible from truths about its components, etc.

On the other hand, in order for our object to float it had to have been constructed in such a way that it enclosed space that was used to displace water. Is the concept of such a construction among the truths of the lower level, or is it available for use in a derivation of the higher level truth that the object does float? If not, then the ability of the object to float is presumably not deducible (even in principle) from the lower level truths.

Another way to approach this issue is to note that the physics of buoyancy is independent of the truths about the lower level domain. So in that sense also one can't derive the ability of the object to float from truths about the lower level domain alone. One must add the physics of buoyancy, which has nothing to do with the lower level domain.

For either or both of the two reasons just examined (that the construction of the object and the physics of buoyancy are not part of the lower level) I suspect—but obviously don't know—that Chalmers would say that the ability of the object to float isn't deducible from lower level domain truths. Consequently it satisfies Chalmers's definition of (strongly) emergent.

Does the ability of the object to float satisfy Kim's requirements for emergence: supervenience and functional irreducibility? Certainly the object supervenes on its components. Change the components and the object changes. So it seems to me that supervenience is not an issue. What about functional irreducibility? The question of functional irreducibility seems to me to raise the same issues as those raised by Chalmers's requirement of non-deducibility.
What does it mean for something not to be functionally reducible to something else? Kim doesn't provide a definition, so it's hard for me to say. I would guess that it means that there is no composition of lower level functions that is equivalent to the target higher level function. Kim (2006) gives as an example that "Number theory is irreducible to hydrodynamics and vice versa." Since number theory and hydrodynamics are independent of each other, it's not clear to me why they are not mutually reducible. After all, each can be constructed ab initio. So it is no harder to construct each theory if one starts by assuming the other. Perhaps what is intended by reducible in this context is that one theory is dependent on the other. Just as number theory doesn't depend on hydrodynamics, the physics of buoyancy does not depend on the properties of materials. So in that sense the ability of the object to float satisfies Kim's requirements for emergence.

Like Kim and Chalmers, Howard (2007) also suggests that irreducibility and supervenience are central to emergence. Howard is more explicit with respect to what he means by irreducibility. He adopts Nagel's (1961) formulation as follows.

Intertheoretic reduction is a logical relationship between theories. In the classic formulation owing to Ernest Nagel, theory TB, assumed correctly to describe or explain


phenomena at level B, reduces to theory TA, assumed correctly to describe or explain phenomena at level A, if and only if the primitive terms in the vocabulary of TB are definable via the primitive terms of TA and the postulates of TB are deductive consequences of the postulates of TA. As normally formulated, this definition of reduction assumes a syntactic view of theories as sets of statements or propositions.

Howard goes on to say:

Thinking about the relationship between different levels of description in terms of intertheoretic reduction has the advantage of clarity, for while it might prove difficult actually to determine whether a postulate at level B is derivable from the postulates of level A…, we at least know what we mean by derivability and definability as relationships between syntactic objects like terms and statements, since we know by what rules we are to judge. The chief disadvantage of this way of thinking about inter-level relationships is that one is hard pressed to find a genuine example of intertheoretic reduction outside of mathematics, so to assert emergence as a denial of reduction is to assert something trivial and uninteresting. That inter-theoretic reduction might not be a helpful way to think about inter-level relationships is perhaps best shown by pointing out that everyone's favorite example of a putatively successful reduction–that of macroscopic thermodynamics to classical statistical mechanics–simply does not work. Recall what is required for reduction: the definability of terms and the derivability of laws. Concede the former in this instance–as with the definition of temperature via mean kinetic energy–and focus on the latter. Foremost among the thermodynamic laws that must be derivable from statistical mechanical postulates is the second law, which asserts the exceptionless evolution of closed non-equilibrium systems from states of lower to states of higher entropy.
Providing a statistical mechanical grounding of the second law was Boltzmann's paramount aim in the latter part of the nineteenth century. Did he succeed? The answer is no. For one thing, what Boltzmann derived was not the deterministic second law of thermodynamics but a statistical simulacrum of that law, according to which closed nonequilibrium systems are at best highly likely to evolve from states of lower to states of higher entropy. More importantly, even this statistical simulacrum of the second law is derived not from mechanical first principles alone but from those conjoined to what was early termed the ergodic hypothesis, which asserts that, regardless of its initial state, an isolated system will eventually visit every one of its microstates compatible with relevant macroscopic constraints, like confinement to a surface of constant energy in its phase space. The ergodic hypothesis can be given comparably opaque equivalent formulations, such as the assertion of the equality of time and ensemble averages, but the work that it does in the foundations of statistical mechanics is clear: The theory being a statistical one, it must work with averages. The ergodic hypothesis makes the averages come out right. The crucial fact is, however, that for all but a few special cases or highly idealized circumstances, the ergodic hypothesis and its kin cannot be derived from mechanical first principles. On the contrary, we can demonstrate non-ergodic behavior for a large class of more realistic models.

It was of course my intent that the ability of the object to float be considered emergent. I selected this example exactly because I wanted a case of emergence that was easy to talk about. I hope that the preceding discussion has accomplished that objective.

Even though there are many ways to build an object that has the ability to float, this isn't a matter of multi-realization. Whether or not there are multiple ways to

realize the ability to float is not relevant. What is relevant is that lower-level elements combined to form a higher-level entity that had that new property.

Supervenience doesn’t work unless there is a mapping between the two levels. This is called multi-domain supervenience in (McLaughlin, 2006).

Consider a Game of Life glider. Over what does it supervene? All the cells that will ever be part of it? Assume the cells are colored, and that each cell selects randomly from red, orange, and yellow when on, and randomly from black and purple when off. Does the property colors-included-in-glider supervene over anything but the current set of cells that make up the glider? Or assume the GoL grid is numbered from some origin. (It's infinite in all directions so it can't be numbered from a corner.) Then what does position-of-NW-corner supervene over? It depends on the mapping from glider to cells. If the mapping is to the cells that currently make it up, then the position is the same as those cells and the NW corner is the NW corner square. If the mapping is to all cells that could ever possibly make it up, then the predicate must be time dependent. There are lots of gliders that are made up of the same set of cells. But since they don't all suffer the same fate, the entire collection of cells can't be their supervenience base. Since the supervenience base varies with time and the glider doesn't, the glider is useful for mechanisms and laws that can't be expressed in terms of cells.

Mechanisms can be expressed at multiple levels without running into the problem of redundant causation. The lower level explains why the higher level does what it does. But that's science, not redundant causation.
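That a glider's supervenience base changes from step to step can be exhibited directly. The sketch below uses the standard Game of Life rules (my own code, not from the paper) to track the cell sets of a single glider over four steps:

```python
from collections import Counter

def step(live):
    """Standard Game of Life rules on a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

history = [{(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}]  # one glider
for _ in range(4):
    history.append(step(history[-1]))

# The glider persists (its pattern recurs, shifted diagonally), yet no
# two of the five successive cell sets are identical: no fixed set of
# cells serves as the glider's base.
print(history[4] == {(x + 1, y + 1) for (x, y) in history[0]})  # True
print(len({frozenset(s) for s in history}))                     # 5
```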

Assume there are two molecule-by-molecule identical versions of me. Then by supervenience, there can be no higher level differences between these two beings—even leaving out the fact that we are at two different physical locations. But the world is not static. The two of us breathe. If we are to stay molecule-by-molecule identical, the air we breathe must also be molecule-by-molecule identical—at least to the extent that it becomes incorporated into our bodies. If one of us breathes normal air and the other breathes CO-spiked air, we will become less and less molecule-by-molecule identical until one of us is dead—at which point we diverge even faster.

Causation in computer science. In CS, operations are atomic at any particular level. They may be broken down at lower levels—and even at the same level, when a subprogram is called. But one of the primary constructions in CS is that of the atomic operation.
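A minimal sketch of level-relative atomicity (all names here are illustrative, not from the paper):

```python
# "transfer" is atomic at the level of the program that calls it; at
# the level below, it decomposes into a sequence of smaller operations.

def transfer(accounts, src, dst, amount):
    """One indivisible operation at the caller's level."""
    balance = accounts[src]            # lower-level step 1: read
    accounts[src] = balance - amount   # lower-level step 2: debit
    accounts[dst] += amount            # lower-level step 3: credit

accounts = {"a": 100, "b": 0}
transfer(accounts, "a", "b", 30)
print(accounts)  # {'a': 70, 'b': 30}
```

The caller never sees the intermediate state between the debit and the credit; from its level, the operation is a single event.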


6. Why are philosophers so often wrong?

In reading many of the philosophical papers about emergence and abstraction I've been struck by how difficult it seems for philosophy as a discipline to reach conclusions. I can understand why some issues are timeless: what's a good life; how should people behave; etc. But as I understand it much of philosophy is an attempt to clarify terms and issues. I would have thought that such a process would resolve itself over a reasonable period. Also, it seems that philosophers continually find holes in each other's arguments. That seems especially distressing. Philosophers are smart people. Do they really make so many mistakes? Usually the holes are not minor problems; they are at least claimed to be significant enough to destroy entire arguments.

It strikes me that one possible reason for this is that many philosophical arguments are couched in terms so removed from empirical verification that it's quite easy to make mistakes. In software we don't have that problem. We make plenty of mistakes; they are called bugs. But because our arguments (our creations) must actually run and produce results, the problems generally show up. This section lists some examples.

References

Abbott, Russ (2006) "Emergence explained," Complexity, Sep/Oct 2006, (12, 1), 13-26. Preprint: http://cs.calstatela.edu/wiki/images/9/95/Emergence_Explained-_Abstractions.pdf

Abbott, Russ (2007) "Bits don't have error bars," Workshop on Engineering and Philosophy, October 2007. To be included in the selected papers from the conference. http://cs.calstatela.edu/wiki/images/1/1c/Bits_don't_have_error_bars.doc

Abbott, Russ (2008) "If a tree casts a shadow is it telling the time?" International Journal of Unconventional Computation, (4, 3), 195-222. Preprint: http://cs.calstatela.edu/wiki/images/6/66/If_a_tree_casts_a_shadow_is_it_telling_the_time.pdf

Andersen, Peter Bøgh, Claus Emmeche, Niels Ole Finnemann and Peder Voetmann Christiansen, eds. (2000) Downward Causation: Minds, Bodies and Matter. Århus: Aarhus University Press.

Anderson, Philip W. (1972) "More Is Different," Science, 4 Aug. 1972, (177, 4047), 393-396.

Armstrong, William W. (1974) "Dependency Structures of Data Base Relationships," in Information Processing 74, pp. 580-583, North Holland.

Bechtel, William and Andrew Hamilton (2007) "Reduction, Integration, and the Unity of Science: Natural, Behavioral, and Social Sciences and the Humanities," in T. Kuipers (ed.), Philosophy of Science: Focal Issues, Elsevier.

Bedau, Mark and Paul Humphreys (2008) Emergence, MIT Press. Introduction available: http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=11341


Bickle, John, "Multiple Realizability," The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), Edward N. Zalta (ed.), forthcoming. URL = http://plato.stanford.edu/archives/fall2008/entries/multiple-realizability

Boogerd, F. C., F. J. Bruggeman, R. C. Richardson, A. Stephan, and H. V. Westerhoff (2005) "Emergence and its place in nature: a case study of biochemical networks," Synthese (2005) 145: 131-164. http://www.cogsci.uni-osnabrueck.de/~acstepha/Emergence_Place_Nature_Synthese%20(2005).pdf

Brigant, Ingo (2008) "Natural Kinds in Evolution and Systematics: Metaphysical and Epistemological Considerations," Acta Biotheoretica, forthcoming. http://philsci-archive.pitt.edu/archive/00004154/01/Natural_kinds_in_evolution_and_systematics.pdf

Carroll, John W. (2008) "Laws of Nature," The Stanford Encyclopedia of Philosophy (Spring 2008 Edition), Edward N. Zalta (ed.).

Chaitin, Gregory (2003) "Leibniz, Information, Math and Physics," Wissen und Glauben / Knowledge and Belief. Akten des 26. Internationalen Wittgenstein-Symposiums 2003, Herausgegeben von Löffler, Winfried / Weingartner, Paul, ÖBV & HPT, Wien, 2004, pp. 277-286. http://www.umcs.maine.edu/~chaitin/kirchberg.html

Chalmers, David J. (2006) "Strong and Weak Emergence," in P. Clayton and P. Davies (eds.), The Reemergence of Emergence, 244-256.

Clayton, Philip and Paul Davies, eds. (2006) The Reemergence of Emergence, Oxford University Press.

Deacon, Terrence W. (2007) "Emergence: The Hole at the Wheel's Hub," in P. Clayton and P. Davies (eds.), The Reemergence of Emergence, Oxford University Press.

Dennett, Daniel C. (1987) The Intentional Stance, MIT Press.

Dennett, Daniel C. (1991) "Real Patterns," The Journal of Philosophy, (88, 1), pp. 27-51.

Einstein, Albert (1918) "Principles of Research," (address) Physical Society, Berlin; reprinted in Ideas and Opinions, Crown, 1954. http://www.cs.ucla.edu/~slu/on_research/einstein_essay2.html
Fausto-Sterling, Anne (2000) "The five sexes revisited," Sciences, New York Academy of Sciences, Jul/Aug 2000, Vol. 40, Issue 4, p. 18. http://www.neiu.edu/~lsfuller/5sexesrevisited.htm

Fodor, Jerry A. (1974) "Special sciences and the disunity of science as a working hypothesis," Synthese, 28, pp. 77-115.

Fodor, Jerry A. (1997) "Special Sciences: Still Autonomous after All These Years," Philosophical Perspectives, 11, Mind, Causation, and World, pp. 149-163.

Griffiths, Paul E. (2008) "Is Emotion a Natural Kind?" to appear in Robert Solomon (ed.), Philosophers on Emotion, Oxford University Press. http://philsci-archive.pitt.edu/archive/00000566/00/Is_Emotion_a_Natural_Kind.PDF

Guttag, John (1977) "Abstract data types and the development of data structures," Communications of the ACM, (20, 6), 396-404, June 1977. http://rockfish.cs.unc.edu/204/guttagADT77.pdf


Holland, John (1997) Emergence: From Chaos to Order.
Howard, Don (2007) "Reduction and Emergence in the Physical Sciences: Some Lessons from the Particle Physics and Condensed Matter Debate," in Nancey Murphy and William R. Stoeger, S.J. (eds.), Evolution and Emergence: Systems, Organisms, Persons, Oxford: Oxford University Press, pp. 141-157. http://www.nd.edu/%7Edhoward1/Reduction%20and%20Emergence.pdf
Kallestrup, J. (2006) "The Causal Exclusion Argument," Philosophical Studies 131(2): 459-485. http://www.philosophy.ed.ac.uk/staff/Kallestrup/CausalExclusion.pdf
Kellar Autumn, Metin Sitti, Yiching A. Liang, Anne M. Peattie, Wendy R. Hansen, Simon Sponberg, Thomas W. Kenny, Ronald Fearing, Jacob N. Israelachvili, and Robert J. Full (2002) "Evidence for van der Waals adhesion in gecko setae," Proceedings of the National Academy of Sciences of the USA, 99, pp. 12252-12256.
Kim, Jaegwon (1984) "Epiphenomenal and Supervenient Causation," Midwest Studies in Philosophy, Vol. 9, pp. 257-70.
Kim, Jaegwon (1992) "Multiple realization and the metaphysics of reduction," Philosophy and Phenomenological Research, 52: 1-16.
Kim, Jaegwon (1993) Supervenience and Mind, Cambridge University Press, Cambridge.
Kim, Jaegwon (1999) "Making Sense of Emergence," Philosophical Studies, 95, pp. 3-36.
Kim, Jaegwon (2006) "Emergence: Core ideas and issues," Synthese 151, pp. 547-559.
Laughlin, Robert B. (2005) A Different Universe: Reinventing Physics from the Bottom Down, Basic Books.
Leibniz, Gottfried Wilhelm (1686) Discourse on Metaphysics, Sections 5-6, as translated by Ariew and Garber [10, pp. 38-39].
Loewer, Barry (2008) "Why There Is Anything except Physics," in Hohwy, J. and Kallestrup, J. (eds.), Being Reduced: New Essays on Reduction, Explanation, and Causation, Oxford: Oxford University Press. Preprint: http://philosophy.rutgers.edu/FACSTAFF/BIOS/PAPERS/LOEWER/Why_There_is_Anything_Except_Physics.pdf.
McLaughlin, Brian and Karen Bennett (2006) "Supervenience", The Stanford Encyclopedia of Philosophy (Fall 2006 Edition), Edward N. Zalta (ed.). http://plato.stanford.edu/archives/fall2006/entries/supervenience/.
Nagel, Ernest (1961) The Structure of Science: Problems in the Logic of Scientific Explanation, New York: Harcourt, Brace & World.
O'Connor, Timothy and Hong Yu Wong (2006) "Emergent Properties", The Stanford Encyclopedia of Philosophy (Winter 2006 Edition), Edward N. Zalta (ed.), URL = .
Putnam, Hilary (1960) "Minds and Machines," in Sidney Hook (ed.), Dimensions of Mind, New York: New York University Press, pp. 148-180. Reprinted in Putnam, Mind, Language and Reality. Philosophical Papers, vol. 2, Cambridge: Cambridge University Press, 1975, pp. 362-385.
Putnam, Hilary (1975) "Philosophy and our Mental Life," in Mind, Language, and Reality, Cambridge University Press, pp. 291-303.


Schouten, Maurice and Huib Looren de Jong (2007) The Matter of the Mind, Wiley-Blackwell.
Schrödinger, Erwin (1944) What is Life?, Cambridge University Press. http://home.att.net/~p.caimi/Life.doc.
Searle, John (2004) Mind: A Brief Introduction, Oxford University Press.
Shalizi, Cosma (1998) "Review of Holland, Emergence from Chaos to Order," The Bactra Review (personal online review series). http://cscs.umich.edu/~crshalizi/reviews/holland-on-emergence/
Shapiro, Lawrence A. (2000) "Multiple Realizations," The Journal of Philosophy, Vol. 97, No. 12 (Dec. 2000), pp. 635-654.
Sperry, Roger (1970) "An Objective Approach to Subjective Experience," Psychological Review, Vol. 77.
Wayne, Andrew Z. (2008) "Emergence, Singular Limits and Basal Explanation," PhilSci Archive. http://philsci-archive.pitt.edu/archive/00003933/.
Weatherson, Brian (2005) "The Problem of the Many", The Stanford Encyclopedia of Philosophy (Winter 2005 Edition), Edward N. Zalta (ed.). http://plato.stanford.edu/archives/win2005/entries/problem-of-many/.
Weinberg, Steven (2001) "Reductionism Redux," The New York Review of Books, October 5, 1995. Reprinted in Weinberg, Steven, Facing Up, Harvard University Press. http://www.idt.mdh.se/kurser/ct3340/ht02/Reductionism_Redux.pdf.
Wimsatt, William C. (1976) "Reductive explanation: A functional account," in J. van Evra (ed.), Proceedings of the Biennial Meeting of the Philosophy of Science Association, Vol. 1974, pp. 671-710. Dordrecht: Reidel.

Internet accesses are as of August 3, 2008.

7. Kim’s reduction

To explore this issue Kim (1998) developed what he called a functional model of reduction: a higher level property E can be reduced to lower level properties B if one can find mappings from B to E such that everything known about E can be identified with what is known about B. Kim claimed that this model allows him to conclude that all higher level (i.e., emergent) properties can be explained by mapping lower level properties to them. In so doing he denied the possibility of emergent properties that are not theoretically predictable or reductively explainable.


In my view, just as Fodor, Bedau, and Humphreys go too far in their puzzlement about emergence, Kim goes too far in his dismissal of it. Were I to attempt to apply Kim's concept to the relationship between Microsoft Word and logic gates, I wouldn't know how to start. Certainly one can understand how Microsoft Word has been built on top of logic gates. But I doubt that one could find the sort of mapping between them that Kim requires. From a computer science perspective, if software development were nothing more than a mapping of lower level phenomena to higher level phenomena, we wouldn't need programmers. Software is much more complex than the kind of functional mapping that Kim describes.

Kim says that the reduction of an emergent property E to a basal domain B consists of three steps.

Step 1: E must be functionalized – that is, E must be construed, or reconstrued, as a property defined by its causal/nomic relations to other properties, specifically properties in the reduction base B. We can think of a functional definition of E over domain B as typically taking the following (simplified) form:

Having E =def having some property P in B such that (i) C1, …, Cn cause P to be instantiated, and (ii) P causes F1, …, Fm to be instantiated.

Step 2: Find realizers of E in B. If the reduction, or reductive explanation, of a particular instance of E in a given system is wanted, find the particular realizing property P in virtue of which E is instantiated on this occasion in this system; similarly, for classes of systems belonging to the same species or structure types.

Step 3: Find a theory (at the level of B) that explains how realizers of E perform the causal task that is constitutive of E (i.e., the causal role specified in Step 1). Such a theory may also explain other significant causal/nomic relations in which E plays a role.

And that's it. Does it sound plausible? I can't tell. It would certainly be more convincing had Kim demonstrated how it works on an instructive example, say Microsoft Word.

• Step 1. If E were the properties of Microsoft Word and B were logic gates, how would I apply Step 1? How does Kim even know that Step 1 can be applied? How does Kim know that the properties of Microsoft Word can be construed or reconstrued as properties defined by their causal/nomic relations to properties relating to logic gates? (I'm not sure I understand what he says is needed. That's the best I've been able to do.) How does Kim know that there are C1, …, Cn in B that cause a P in B that causes F1, …, Fm to be instantiated? (I don't see where the Fi are defined. Am I right in supposing that they are intended to represent E?) In the concrete case of Microsoft Word, what are the P and the C1, …, Cn? What even might they be? If the F1, …, Fm are intended to represent E, what are the F1, …, Fm in the case of Microsoft Word? I don't know where to begin.


• Step 2. What are the realizers of Microsoft Word in B? Other than the P and the C1, …, Cn I don't know what this means. Why is this a separate step?

• Step 3. Assuming we get past Steps 1 and 2, what is the theory at the gate level that explains how the realizers of Microsoft Word perform the causal tasks that constitute Microsoft Word? Isn't this also Step 1 again? If not, what is it?

I'll admit that I'm confused about how this reduction is supposed to work. Of course that doesn't mean that it doesn't work, but I surely don't understand it. Pardon the rant to follow, but when computer scientists work with a concept, not only do we explain what the concept means, we almost always demonstrate how it works by implementing it in software and running it. Concepts that can't be implemented and demonstrated are looked on with suspicion.

In contrast, many of the philosophy papers I've read express ideas at very general levels and leave it at that. There is rarely an attempt to show that the ideas work when applied even to simple examples. I don't mean to pick specifically on Kim, but this seems to be a typical case. The paper makes no effort to show that the proposed reduction process actually works. I would very much like to see how it is used to reduce a property often cited as emergent to some widely understood basal domain. Perhaps as a consequence, a great many philosophy papers have titles like "Why X was wrong about Y" and "Why Z was wrong about X being wrong about Y."
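Taking my own complaint seriously, here is a deliberately toy attempt to run something like Kim's Steps 1 and 2 on an example far smaller than Microsoft Word. Nothing in it is Kim's; the choice of example (E = "remembers the last bit stored in it", B = NAND gates) and every name in the code are mine. Functionalizing E becomes a causal-role test; finding realizers becomes exhibiting devices that pass it. A Python dictionary passes the same test, which is just the multiple-realizability point.

```python
# Toy sketch (my construction, not Kim's): functionalize the higher
# level property E = "remembers the last bit stored in it" as a
# causal role, then check that two very different low-level
# realizers both play that role.

def plays_storage_role(make_device):
    """Causal-role test for E: after the cause C (storing a bit),
    the effect F (reading it back) yields that same bit."""
    for bit in (0, 1):
        d = make_device()         # a fresh candidate realizer
        d.store(bit)              # C: bring about the realizing state P
        if d.read() != bit:       # F: the effect constitutive of E
            return False
    return True

class NandLatch:
    """Realizer 1: an SR latch built from NAND gates."""
    def __init__(self):
        self.q, self.qbar = 1, 0
    @staticmethod
    def nand(a, b):
        return 0 if (a and b) else 1
    def store(self, bit):
        s, r = (0, 1) if bit else (1, 0)   # active-low set/reset
        for _ in range(3):                 # let the feedback loop settle
            self.q = self.nand(s, self.qbar)
            self.qbar = self.nand(r, self.q)
    def read(self):
        return self.q

class DictCell:
    """Realizer 2: the 'same' property realized by a Python dict."""
    def __init__(self):
        self.state = {}
    def store(self, bit):
        self.state["v"] = bit
    def read(self):
        return self.state["v"]

print(plays_storage_role(NandLatch))  # True
print(plays_storage_role(DictCell))   # True
```

Even this trivial case hints at the difficulty: the causal-role test says nothing about how the NAND feedback loop settles, which is exactly the kind of detail a Step 3 theory would owe us, and nothing here scales in any obvious way to a property as rich as "being Microsoft Word."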

8. Supervenience and Functional Dependency

As it turns out, there is a concept that is common to both computer science and philosophy. Computer science has worked it through in some detail; in philosophy its meaning is still under discussion. The computer science use of the concept has become clear, crisp, and concrete. In philosophy the concept has perhaps a broader potential scope, but it also seems to have more of a sense of fuzziness about it. The concept is supervenience. The equivalent concept in computer science is functional dependency, which is used in designing relational databases.

Relational databases consist of tables. The columns of a table are called attributes. A functional dependency is a relationship between sets of attributes. In particular, a set of attributes Y is functionally dependent on a set of attributes X, written X → Y, if it will never be the case (because of what we know about the world and the information that is being stored in the database) that two rows will have the same values for the X attributes and different values for the Y attributes. Whether or not one set of attributes is functionally dependent on another is an empirical or logical issue. To be sure, deciding when a functional dependency holds may not be a trivial matter.

Given this notion of functional dependency one often asks which attributes are functionally dependent on which other attributes. If Y is functionally dependent on X one also says that X determines Y. For any table, a set of attributes on which the other attributes in the table are functionally dependent, i.e., a set of attributes that determines the other attributes, is called a key. A table may have multiple keys. It may also be the case that the only key is the set of all attributes.

Armstrong (1974) published three derivation rules, which have come to be known as Armstrong's axioms.

Reflexivity: If X ⊇ Y, then X → Y.
Augmentation: If X → Y, then X ∪ Z → Y ∪ Z.
Transitivity: If X → Y and Y → Z, then X → Z.

Amazingly, these simple rules have been shown to be sound and complete. Programs exist to compute the closure of a set of attributes under functional dependency. The notion of functional dependency now serves as the basis for the design of databases that are not subject to various anomalies, which might otherwise occur.

The point I wish to make is that not only did computer science develop the notion of functional dependency, but we formalized it to the extent of being able to write programs that compute results based on it, and we use it regularly in the practical matter of designing databases.

I wouldn't doubt that the notion of supervenience can be stretched to apply in more philosophical contexts than functional dependency. The notion of functional dependency may now be rigid, but at least one knows exactly where one stands. With supervenience in philosophy it is apparently not so clear. For example, McLaughlin's (2006) survey article on supervenience devotes a brief section to three arguments that rely on it. He concludes that section as follows.
Of course, it is controversial whether any of these arguments succeed, because it is controversial whether the alleged counterexamples to the supervenience claims are really possible. But in all three cases, the style of argument is the same—argument by appeal to a [false implied supervenience thesis].

In computer science, whether or not a functional dependency holds is determined either by fiat or by derivation. (The basic functional dependencies are determined by fiat. Does a social security number functionally determine a person's identity? That is, are there any cases in which two people have the same social security number? Although apparently there may be, it is generally presumed that there aren't.) One way or the other, we know where we stand. In some philosophical contexts it appears to be quite uncertain whether supervenience holds. McLaughlin refers to Putnam's example: is it possible for there to be a person on a "twin earth"—a planet that has "twin water," which has a different molecular structure from water but which, given the state of technology, is phenomenologically identical to water—who is neurologically identical to her earthly twin but who is referring to twin water and not water when using the term "water"? If so, intention doesn't supervene over neurology.

Isn't the answer to this question more a matter of definition than insight? It seems to me that what this comes down to is a counterfactual with no observable differences. Isn't it arbitrary how examples such as these are understood? If one assumes that water and twin water are phenomenologically indistinguishable, why does it matter—I was going to write "what difference does it make" and then realized that I had assumed it would make no difference—whether or not they are "really" the same? Why muddy the notion of supervenience with cases like this? Why not at least pin down terms that can be pinned down, like functional dependency? Again, pardon the rant.
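For contrast, here is how concrete functional dependency is. The sketch below (the table contents and attribute names are invented for illustration) checks by inspection of the data whether X → Y holds, and derives new dependencies by iterating Armstrong's axioms to compute an attribute closure, the kind of program referred to above.

```python
# A toy, invented table: whether an FD holds here is settled by the data.
def fd_holds(rows, X, Y):
    """True iff no two rows agree on the X attributes but differ on Y."""
    seen = {}
    for row in rows:
        x_val = tuple(row[a] for a in X)
        y_val = tuple(row[a] for a in Y)
        if x_val in seen and seen[x_val] != y_val:
            return False
        seen[x_val] = y_val
    return True

def closure(attrs, fds):
    """All attributes determined by `attrs`, derived by repeatedly
    applying the base dependencies in `fds` (transitivity in action)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

employees = [
    {"ssn": "111", "name": "Ann", "dept": "QA",  "mgr": "Bo"},
    {"ssn": "222", "name": "Raj", "dept": "Dev", "mgr": "Cy"},
    {"ssn": "333", "name": "Liu", "dept": "QA",  "mgr": "Bo"},
]
print(fd_holds(employees, ["ssn"], ["name"]))   # True: ssn determines name
print(fd_holds(employees, ["dept"], ["ssn"]))   # False: QA contains two ssns

# ssn -> {name, dept} by fiat and dept -> mgr by fiat yield
# ssn -> mgr by transitivity, so ssn is a key for this table.
base_fds = [(["ssn"], ["name", "dept"]), (["dept"], ["mgr"])]
print(sorted(closure(["ssn"], base_fds)))       # ['dept', 'mgr', 'name', 'ssn']
```

This is supervenience in miniature: if a table satisfies X → Y, then rows identical with respect to X are identical with respect to Y, and for any given table and any given pair of attribute sets that claim either holds or it doesn't.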

9. Downward entailment

This perspective clarifies the issue of downward causation. One might wonder why a Game of Life Turing machine isn't downwardly causing a particular grid cell to go on at a particular time, or why Gresham's law isn't downwardly causing an atom in a particular unit of good money to be put in storage somewhere. Neither of these is an example of downward causation. What is happening is that the Game of Life grid cell is part of the implementation of the Turing machine, and the atom is part of the implementation of the unit of good money. As long as these lower level elements continue in their roles of implementing the higher level entities, their fate depends on the fate of the higher level entity. In "Emergence Explained" (2006) I called this downward entailment.

I was surprised to see that Kim (1999) was not bothered by this even though he thought of it as a kind of downward causation. As what he called diachronic reflexive downward causation, he found it unremarkable.

I fall from the ladder and break my arm. I walk to the kitchen for a drink of water and ten seconds later, all my limbs and organs have been displaced from my study to the kitchen. Sperry's bird flies into the blue yonder, and all of the bird's cells and molecules, too, have gone yonder. It doesn't seem to me that these cases present us with any special mysteries rooted in self-reflexivity, or that they show emergent causation to be something special and unique. For consider Sperry's bird: for simplicity, think of the bird's five constituent parts, its head, torso, two wings, and the tail. For the bird to move from point p1 to point p2 is for its five parts (together, undetached) to move from p1 to p2. The whole bird is at p1 at t1 and moving in a certain direction, and this causes, let us suppose, its tail to be at p2 at t2. There is nothing mysterious or incoherent about this. The cause -- the bird's being at p1 at t1 and moving in a certain way -- includes its tail's being at p1 at t1 and


moving in a certain way. But that's all right: we expect an object's state at a given time to be an important causal factor for its state a short time later. We must conclude then that … diachronic [reflexive downward causation] poses no special problems but perhaps for that reason [is] rather unremarkable as a type of causation.

As I understand it, Kim is arguing that because time passes, this sort of (apparent?) downward causation is acceptable. I don't follow that reasoning. It seems to me that downward causation is suspect under any circumstances, whether time passes or not. To be fair, Kim was contrasting diachronic reflexive downward causation with what he called synchronic reflexive downward causation, which he found unacceptable because it was circular. In his view, diachronic reflexive downward causation is not. But whether or not Kim's argument works, I believe that my explication of it as downward entailment provides a better explanation.

Downward entailment also explains why it's not unreasonable to say that a glider in the Game of Life "turns on" a particular cell when it gets there. A glider in the Game of Life is nothing in itself: it is a time-stepped sequence of patterns of on and off grid cells. It is epiphenomenal over the application of the Game of Life rules. And as we know, epiphenomena, by definition, are causally powerless. Kim (1993) labels as epiphenomenal causation—and thereby dismissible—any apparent causation associated with epiphenomena. From the perspective of downward entailment I find epiphenomenal causation perfectly reasonable. I see no problem in saying that an epiphenomenon such as a glider can "cause" a grid cell to be turned on. At the level of gliders as entities, that's what happens. The cell is turned on when it becomes part of the implementation of the glider. That's the only time that downward entailment makes sense: when a lower level element participates in the implementation of a higher level element.
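The glider claim is itself easy to run. In the sketch below (the coordinates and the particular target cell are arbitrary choices of mine), nothing exists but the Game of Life rules applied to a set of live cells; yet at the glider level of description it is natural to say that the glider turns the target cell on when it arrives.

```python
from itertools import product

def life_step(live):
    """One application of the Game of Life rules to a set of live cells."""
    counts = {}
    for (r, c) in live:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) != (0, 0):
                n = (r + dr, c + dc)
                counts[n] = counts.get(n, 0) + 1
    # Birth with exactly 3 neighbors; survival with 2 or 3.
    return {cell for cell, k in counts.items()
            if k == 3 or (k == 2 and cell in live)}

def first_time_on(target, live, max_steps=50):
    """The step at which `target` first lights up. The glider-level
    'cause' is nothing over and above these rule applications."""
    for t in range(max_steps):
        if target in live:
            return t
        live = life_step(live)
    return None

# A standard glider, heading down and to the right.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
print(first_time_on((4, 3), glider))   # the glider reaches this cell within a few steps
```

The cell at (4, 3) goes on at exactly the step at which it becomes part of the glider's implementation, which is the downward-entailment point in executable form.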
But this raises another issue. Since a glider skitters across an unbounded number of grid cells, does it supervene over all of them during its lifetime as one after another they participate in its implementation? Yes it does—just as a biological organism supervenes over a great many atoms and molecules over its lifetime. So the fact that a bird's molecules move with the bird depends on those molecules being part of the bird's implementation at that time. Since all biological organisms apparently shed matter continually, only some of the bird's molecules move all the way with the bird. The ones that don't make it are left behind when they cease functioning as part of the bird's implementation. I discuss the implications of this perspective in the section on entities.

Kallestrup (2006):

One way to read the causal exclusion argument is that if we hold on to (Property Dualism) as our nonreductive physicalist does, then we can save (Mental Causation) only if either (Completeness) or (Exclusion) is given up. We have argued in Sec. III that there is no reason why we should deny (Completeness) and embrace the possibility of downward causation. Kim's additional argument for this claim makes assumptions about

the causal power of irreducible mental properties, which no nonreductive physicalist need accept. Instead we argued in Sec. IV that the nonreductive physicalist should reject (Exclusion), which is independently implausible, and hence accept (Overdetermination). But given the way (Supervenience) is cashed out, the counterintuitive consequences of some cases of an effect having two sufficient causes can be avoided by insisting that they be counterfactually dependent. To suggest the best response to the causal exclusion argument on behalf of nonreductive physicalism is however not to say anything positive about what mental causation might be on this view. The counterfactual analysis meets the homogeneity constraint on mental and physical causation, but is beset with severe difficulties. For instance, it owes an account of why mental causes are not epiphenomenal. I believe the nonreductive physicalist must deliver some account of mental causation in terms other than counterfactuals that not only respects this constraint, but also entails sufficient dependency between distinct mental and physical causes to sustain a respectable form of overdetermination. Until the nonreductive physicalist has satisfactorily advanced such an account, she may be able to respond to the causal exclusion argument in the way I have recommended, but Kim's worries about the causal efficacy of the mental will hang on.

A call for computationally grounded philosophy.
