Artificial Life and the Chinese Room Argument

David Anderson
Division of Science, Mercantile House, Hampshire PO1 2EG, UK

B. J. Copeland
Department of Philosophy, University of Canterbury, Private Bag 4800, Christchurch, New Zealand

Abstract: “Strong artificial life” refers to the thesis that a sufficiently sophisticated computer simulation of a life form is a life form in its own right. Can John Searle’s Chinese room argument [12]—originally intended by him to show that the thesis he dubs “strong AI” is false—be deployed against strong ALife? We have often encountered the suggestion that it can be (even in print; see Harnad [8]). We do our best to transfer the argument from the domain of AI to that of ALife. We do so in order to show once and for all that the Chinese room argument proves nothing about ALife. There may indeed be powerful philosophical objections to the thesis of strong ALife, but the Chinese room argument is not among them.

Keywords: Artificial life, digital life, emergence, Chinese room, Searle, Langton, Harnad, strong AI, artificial intelligence, ALife

Artificial Life 8: 371–378 (2002). © 2003 Massachusetts Institute of Technology

1 Introduction

“Strong artificial life” refers to the thesis that a sufficiently sophisticated computer simulation of a life form is a life form in its own right. It has been suggested that the Chinese room argument—originally aimed against strong AI—can be redeployed against the claim that computer simulations of life might properly be said to be alive.1 We do our best to transfer the Chinese room argument from the domain of AI to that of ALife. We do so in order to show once and for all that the Chinese room argument proves nothing about ALife.2

1 See Harnad [8].
2 The application of the Chinese room argument to ALife is also discussed by Keeley [9] and by Sober [13].

2 Extending the Chinese Room Example

Here is Searle’s original formulation [12, pp. 417–418] of the Chinese room argument:

One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on. Let us apply the test to the Schank program with the following Gedankenexperiment. Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognise Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that “formal” means here is that I can identify the symbols entirely by their shapes.... Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers.... I produce the answers by manipulating uninterpreted formal symbols.... I am simply an instantiation of the computer program. Now, the [claim] made by strong AI [is] that the programmed computer understands the stories.... But we are now in a position to examine [this claim] in light of our thought experiment.... [I]t seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank’s computer understands nothing of any stories, ... since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.

With a number of small modifications this becomes the following argument directed against strong ALife:

One way to test any theory of life is to ask oneself what it would be like if the theory were true. Consider the following Gedankenexperiment. Suppose that I’m locked in a room where I have a set of rules that enable me to correlate one set of formal symbols with another set of formal symbols, and that all “formal” means here is that I can identify the symbols entirely by their shapes. Suppose also that after a while I get so good at manipulating the symbols and the rule designers get so good at writing the rules that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—an invertebrate life form (say) appears to be present in the room. I produce this impression by manipulating uninterpreted formal symbols. I am simply an instantiation of the computer program. Now the claim made by strong ALife is that the programmed simulation is a life form in its own right. But we are now in a position to examine this claim in light of our thought experiment. It seems to me quite obvious in the example that no invertebrate life form is present. I can have “inputs” and “outputs” that are indistinguishable from those of a real invertebrate, and any formal program you like, but there is still no invertebrate in the room with me. For the same reasons, a computer running the same program contains no life form, since in the Gedankenexperiment the computer is me, and in cases where the computer is not me, the computer has nothing more than I do in the story.

We take it that, if the Chinese room argument were successful in the sphere of artificial life, then it would show that all forms of the claim “X is alive in virtue of running such-and-such a program” are false. However, the argument would not, so far as we can see, be applicable to so-called “test-tube ALife,” where the aim is to create artificial life biochemically, rather than by computer simulation. The argument would apply to all claims that computer simulations of life are alive, and also to the claim that computer viruses and other virtual entities, including virtual robots, are alive.


3 The Basic Error in the Argument3

Both the parent version of the Chinese room argument and the new form succumb to the same criticism: the argument is not logically valid. An argument is logically valid if and only if its conclusion is entailed by its premise(s); an argument is sound if and only if it is logically valid and each premise is true. The proposition that the formal symbol manipulation carried out by the person in the room—call him or her Clerk—does not enable Clerk to understand the Chinese story by no means entails the quite different proposition that the formal symbol manipulation carried out by Clerk does not enable the Room to understand the Chinese story. The Room is the system consisting of Clerk, Clerk’s pencils and erasable paper memory, the rule books containing the program, the input-output slots, and any other items, such as a clock, that Clerk may need in order to run the program by hand. The claim that the Chinese room argument is valid is on a par with the claim that the statement “The organization of which Clerk is a part has no taxable assets in Japan” follows logically from the statement “Clerk has no taxable assets in Japan.”

It is important to distinguish this, the logical reply4 to the Chinese room argument, from what Searle calls the systems reply. The systems reply is the following claim [12, p. 419]:

While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system and the system does understand the story.

As Searle correctly points out, the systems reply is worthless, since it “simply begs the question by insisting without argument that the system must understand Chinese.” The logical reply, on the other hand, is a point about entailment. The logical reply involves no claim about the truth—or falsity—of the statement that the Room can understand Chinese.

Of course, any logically invalid argument can be rendered valid with the addition of further premises (in the limiting case one simply adds the conclusion to the premises). The trick is to produce additional premises that not only secure validity but are sustainable. In his discussion of the systems reply Searle says [12, p. 419]:

My response to the systems theory is quite simple: Let the individual ... memoriz[e] the rules in the ledger and the data banks of Chinese symbols, and [do] all the calculations in his head. The individual then incorporates the entire system.... We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him. If he doesn’t understand, then there is no way the system could understand, because the system is just a part of him.

This, the outdoor version of the argument, may be represented as follows:

1. The system is part of Clerk.

2. If Clerk (in general, x) does not understand the Chinese story (in general, does not F), then no part of Clerk (x) understands the Chinese story (Fs).

3. The formal symbol manipulation carried out by Clerk does not enable Clerk to understand the Chinese story.

Therefore

4. The formal symbol manipulation carried out by Clerk does not enable the system to understand the Chinese story.
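The structure of the outdoor version can be displayed formally. The following is a minimal first-order sketch in our own notation rather than Searle’s: read P(x, y) as “x is part of y” and U(x) as “x understands the Chinese story,” the relevant instance of F.

\[
\begin{array}{ll}
(1) & P(\mathit{System}, \mathit{Clerk}) \\
(2) & \forall x \, \forall y \, \bigl( \neg U(x) \wedge P(y, x) \rightarrow \neg U(y) \bigr) \quad \text{(the part-of principle, with } F = \text{understanding)} \\
(3) & \neg U(\mathit{Clerk}) \\
\therefore \; (4) & \neg U(\mathit{System})
\end{array}
\]

Instantiating (2) with x = Clerk and y = System yields (4) from (1) and (3); the indoor version, by contrast, offers only (3), from which (4) does not follow.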

The outdoor version is logically valid. Is it sound? Premise 1 is perhaps innocent enough. Attention thus centers on premise 2, the part-of principle.5 Searle makes no mention at all of why he thinks the part-of principle is true. Yet the principle is certainly not self-evident. It is entirely conceivable that a homunculus or homuncular system in Clerk’s head should be able to understand Chinese without Clerk being able to do so. (Notice that Searle has no reservations concerning the application of predicates like “understand” to subpersonal systems. He writes (against Dennett) [12, p. 451]:

I find nothing at all odd about saying that my brain understands English.... I find [the contrary] claim as implausible as insisting “I digest pizza; my stomach and digestive tract don’t.”)

Likewise for related values of F. Conceivably, there is a special-purpose module in Clerk’s brain that produces solutions to certain tensor equations, yet Clerk himself may sincerely deny that he can solve tensor equations—does not even know what a tensor equation is, we may suppose. Perhaps it is the functioning of this module that accounts for our ability to catch cricket balls and other moving objects [10]. Clerk himself, we may imagine, is unable to produce solutions to the relevant tensor equations even in the form of leg and arm movements, say because the output of the module fails to connect owing to the presence of a lesion. Or, to move into the realm of science fiction, neuropharmacologists may induce Clerk’s liver to emulate a brain, the liver remaining in situ and receiving input directly from a computer workstation, to which the liver also delivers its output. Clerk’s modified liver performs many acts of cognition that Clerk cannot. One example: Clerk stares uncomprehendingly at the screen of the computer as his liver proves theorems in quantified tense logic.

Of course, one might respond to these and similar examples as follows. Since a part of Clerk is proving a theorem of quantified tense logic (solving a set of tensor equations, etc.) then so is Clerk—there he is doing it, albeit to his own surprise. This response is not available to Searle. If Clerk’s sincere denial that he is able to solve tensor equations (or what have you) counts for nothing, then likewise in the case of the Chinese room. However, it is a cornerstone of Searle’s overall case that Clerk’s sincere report “I don’t speak a word of Chinese” suffices for the truth of premise 3. One might call this Searle’s incorrigibility thesis. It, like the part-of principle, is left totally unsupported by Searle. (Searle sometimes says, as if to give independent support to premise 3, “there is no way [Clerk] could come to understand Chinese in the [situation] as described, since there is no way that [Clerk] can learn the meanings of any of the symbols.” This rhetoric simply begs the question, since the matter at issue is whether “just having the symbols by themselves ... [is] sufficient for semantics” and that this cannot be sufficient is, allegedly, “the point that the Chinese room demonstrated” [11, pp. 20–21].)

If the part-of principle is taken to be a modal claim equivalent to NOT POSSIBLY((some part of Clerk understands Chinese) & NOT(Clerk understands Chinese)), then, assuming the incorrigibility thesis, possible scenarios such as the foregoing do more than bear on the plausibility of the principle: they settle its truth value. If, on the other hand, the part-of principle is said to be a purely contingent claim (i.e., a claim that happens to be true in the actual world but is not true in possible alternatives to the actual world), then Searle’s difficulty is to produce reasons for thinking the principle true.

3 This section is based on Copeland [6]. See also Copeland [3], [4], and [5].
4 The logical reply was first advanced in Copeland [3]. The term “logical reply” is from Copeland [6].
5 The term “part-of principle” is from Copeland [6].

4 The Proper Role of Introspection in Theory Testing

Another unacceptable feature of the Chinese room argument is its opening claim that introspection offers a way to test any theory of mind—or, in the new version, any theory of life. Why should we think this is true? Certainly, there is little inclination to suppose that introspection is useful for testing scientific theories in general. No one suggests, for example, that a good way to test a theory of motion would be to ask ourselves what it would be like if we were Flying Joe, the human cannonball.

The suggestion that, in the case of artificial minds, introspection is a useful tool perhaps owes its plausibility to the idea that we really know minds only from the first-person perspective. All minds other than one’s own, it is often said, are known indirectly by means of a combination of observation of the behavior of others and similarity assumptions. Is being alive in some sense unknowable except from the inside? If so, and counterintuitively, the attention that has been given to devising objective criteria for life has presumably been misdirected. The onus is clearly on advocates of this view to present arguments in its support.

In general, introspection seems able to offer little help with key questions about mind and life. The best theories of human consciousness all take thinking to be bound up with physicochemical processes in the brain. (Searle rightly emphasizes the issue of the physical implementation of a computer program: he urges that we consider the “stuff” out of which the putative thinker is constructed and asserts correctly, but unhelpfully, that only materials having the same causal powers as brains could support thinking.) However, any attempt to use introspection to evaluate the parts of our theories of mind that deal with physicochemical transformations seems doomed to failure. And it is not only here that introspection lets us down. Does anyone know how it would be from the first-person perspective if one’s own brain processes were implemented in some other material than the usual? What is it like to be the instantiation of an entirely physical biological system engaged in a constant series of physicochemical transformations? Does this feel different from being the instantiation of an immaterial soul?

Introspection and thought experiment might have a useful, if limited, role to play at some points of the enquiry. But it is up to anyone who attempts to invoke those techniques against some theory to explain why the techniques are useful and trustworthy at that point.

5 What Are “Weak” and “Strong” ALife?

According to Stevan Harnad, ALife comes in two strengths, strong and weak.6 Strong ALife is the thesis that a sufficiently sophisticated computer simulation of a life form is a life form in its own right. Weak ALife (which Harnad favors) is the view that “virtual life is no more alive than virtual planetary motion moves or virtual gravity attracts.”7

6 Harnad notes [8, p. 5] that Sober introduced this terminology at the 1990 Artificial Life II meeting.
7 Harnad [8, p. 7].


Harnad reports that Chris Langton argues as follows concerning strong ALife:

Suppose ... we could encode all the initial conditions of the biosphere around the time that life evolved, and in addition, we could encode the right evolutionary mechanisms—genetic algorithms, game of life, what have you—so that the system actually evolved the early forms of life, exactly as it had occurred in the biosphere. Could it not in principle go on to evolve invertebrates, vertebrates, mammals, primates, man ... ? And if it could do all that, and if we accept it as a premise ... that there would not be one property of the real biosphere, or of real organisms ... that would not also be present in the virtual world in which all this virtual life, and eventually these virtual minds, including our own had “evolved,” how could I doubt that the virtual life was real? Indeed, how could I even distinguish them?8

The view attributed here to Langton looks rather weak. As stated, it appears to be an example of begging the question. If we accept Langton’s premise that “there would not be one property of the real biosphere, or of real organisms ... that would not also be present in the virtual world,” then, since one of the properties of the real world is that life exists here, the virtual world must also contain life.

Langton shifts his ground in the course of the argument. At first, he makes the suggestion that “we could encode all the initial conditions of the biosphere ...” [emphasis added]. Encodings do not necessarily have all the properties of the items encoded—the representation “H₂O” is not wet. Yet the premise displayed above makes the stronger claim that all the properties of the real biosphere, as opposed to encodings of these properties, would be present in the virtual world. In the absence of some compelling argument for this claim, there is no reason to think it true. If anyone wants to insist that virtual heat must really be hot, we are willing to debate the matter with them before two large fires, one real, one virtual, so long as we get first choice of fire. As Duns Scotus remarked: “those who deny what is manifest to the senses ... should be exposed to the fire, for to be burnt and not to be burnt is the same to such men.”9

How, Langton asks, could we “even distinguish” the real from the virtual under the conditions that he describes? As the example of the fires makes clear, the distinction is sometimes easily drawn, even where all the properties of the fire are faithfully represented in the virtual world. Why may not the distinction always be as easily drawn? From the real world one may alter any aspect of a virtual world, just by modifying lines of code. The basic laws of any virtual physics may be modified, or nullified altogether. Whole worlds in a virtual universe can be snuffed out at the touch of a few buttons. The situation is not symmetrical, however. A virtual human sitting at a virtual keyboard cannot alter the way physics works in the real world. The real heavens are safe from a programming update.

8 Harnad [8, p. 5].
9 Duns Scotus [7]. Thanks to Nicole Wyatt (University of Calgary) for bringing this quotation to our attention.

6 It Must Be a Duck: The Boundary between Simulation and Duplication

Langton’s suggestion is that the indistinguishability of the virtual from the real lends support to the view that virtual life is real. However, the boot seems rather to be on the other foot. Since, in principle, the distinction can always be made (or so we have suggested), what reason could there be to think that virtual life is real?



The answer one often hears is that if it walks like a duck and quacks like a duck then it must be a duck—a sufficiently accurate simulation must be conceded to be a duplication. However, even if one were to accept the must-be-a-duck principle, a great deal of work would remain if a case is to be pressed home for strong ALife. For a start, we need to be clear about what counts as a sufficiently accurate simulation.10 In reality, more is expected of ducks than just walking and quacking, and any putative duck would need to satisfy us on more grounds than these alone. The burden of proof rests with adherents to the must-be-a-duck principle. For any given simulation, it must be explained what is being simulated, which tests have to be passed in order for the simulation to count as duplication, and why these are the right tests. Moreover, this must not be done simply on a piecemeal basis. The tests that are specified in the case of any particular simulation must be responsive to some general account of what properties simulations must exhibit in order to be deemed alive. Only when all of this has been done might we be in a position to say that a simulation is a contender for the status of duplication.

Concerns over the difference between mere simulation and genuine duplication are not confined to discussions in AI and ALife. The way we handle these concerns in more prosaic situations may guide us. Is a painting really the work of Picasso? Is a supposed gemstone really a diamond? Two things are apparent. First, that where the origin of a given object is important, the must-be-a-duck principle is rejected. Thus, even if a painting were an atom-for-atom copy of one by Picasso, it would not necessarily count as a painting by Picasso. Secondly, as the case of the diamond illustrates, the object in question must yield up appropriately identical results under testing to a sample known to be genuine. So, if an undisputed diamond does not fluoresce then neither should a duplicate. Yet even a simulated diamond that passes all such tests need not be conceded to be real: here, too, it is arguable that origin matters. The history of a real diamond involves carbon undergoing certain processes over time.

How do virtual diamonds compare to the usual sort? On the one hand, we have an object that can be weighed, measured, held up to the light, used to cut glass, or shaped into a thing of beauty, while on the other we have a printout or perhaps a screen display. We cannot even use the same test equipment to carry out a comparison. Only virtual tools can weigh a virtual diamond, because in the real world they are just code.

Langton demands to know how we could tell apart the virtual and the real. In the case of paintings, diamonds, and a whole host of other relatively everyday items, the answer seems to be that, in principle at any rate, the real world can be distinguished from the virtual by almost any test you like. We maintain, contra Searle, that an appropriate simulation of a mind really is a mind.11 Is life more like a mind or more like a diamond? Although strong ALife survives the Chinese room argument, it may yet perish on this dilemma.

10 See Anderson [2].
11 See Anderson [1] and Copeland [3].

Acknowledgments
Thanks are due to Kelly Smith and Brian L. Keeley for their helpful comments on an earlier draft of this paper.

References

1. Anderson, D. (1989). Artificial intelligence and intelligent systems: The implications. Ellis Horwood.
2. Anderson, D. (1988). When is a simulation not a simulation? Philosophy, 63, 389–394.
3. Copeland, B. J. (1993). Artificial intelligence: A philosophical introduction. Oxford, UK: Blackwell.
4. Copeland, B. J. (1993). The curious case of the Chinese gym. Synthese, 95, 173–186.
5. Copeland, B. J. (1998). Turing’s O-machines, Searle, Penrose and the brain. Analysis, 58, 128–138.
6. Copeland, B. J. (2002). The Chinese room from a logical point of view. In J. Preston & M. Bishop (Eds.), Views into the Chinese room. Oxford, UK: Oxford University Press.
7. Duns Scotus, J. Contingency and Freedom: Lectura I 39, 1–5 n. 40. Commentary and translation by A. Vos Jaczn et al. (1994). Dordrecht, The Netherlands: Kluwer.
8. Harnad, S. (1994). Artificial life: Synthetic vs. virtual. In C. Langton (Ed.), Artificial Life III (pp. 539–552). Redwood City, CA: Addison-Wesley.
9. Keeley, B. L. (1994). Against the global replacement: On the application of the philosophy of artificial intelligence to artificial life. In C. G. Langton (Ed.), Artificial Life III (pp. 569–597). Redwood City, CA: Addison-Wesley.
10. McLeod, P., & Dienes, Z. (1993). Running to catch the ball. Nature, 362, 23.
11. Searle, J. (1990). Is the brain’s mind a computer program? Scientific American, 262(1), 20–25.
12. Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–457. Reprinted in Hofstadter, D., & Dennett, D. (1981). The Mind’s I (pp. 353–373). Harmondsworth, Middlesex: Penguin.
13. Sober, E. (1992). Learning from functionalism: Prospects for strong artificial life. In C. G. Langton, C. Taylor, J. D. Farmer, & S. Rasmussen (Eds.), Artificial Life II (pp. 749–765). Reading, MA: Addison-Wesley.
