Artificial Life and the Chinese Room Argument

David Anderson
Division of Computer Science, University of Portsmouth, Mercantile House, Hampshire PO1 2EG, UK
[email protected]

B. Jack Copeland
Department of Philosophy, University of Canterbury, Private Bag 4800, Christchurch, New Zealand
[email protected]

Abstract  “Strong artificial life” refers to the thesis that a sufficiently sophisticated computer simulation of a life form is a life form in its own right. Can John Searle’s Chinese room argument [12]—originally intended by him to show that the thesis he dubs “strong AI” is false—be deployed against strong ALife? We have often encountered the suggestion that it can be (even in print; see Harnad [8]). We do our best to transfer the argument from the domain of AI to that of ALife. We do so in order to show once and for all that the Chinese room argument proves nothing about ALife. There may indeed be powerful philosophical objections to the thesis of strong ALife, but the Chinese room argument is not among them.

Keywords  Artificial life, digital life, emergence, Chinese room, Searle, Langton, Harnad, strong AI, artificial intelligence, ALife

1 Introduction

“Strong artificial life” refers to the thesis that a sufficiently sophisticated computer simulation of a life form is a life form in its own right. It has been suggested that the Chinese room argument—originally aimed against strong AI—can be redeployed against the claim that computer simulations of life might properly be said to be alive.[1] We do our best to transfer the Chinese room argument from the domain of AI to that of ALife. We do so in order to show once and for all that the Chinese room argument proves nothing about ALife.[2]

2 Extending the Chinese Room Example

Here is Searle’s original formulation [12, pp. 417–418] of the Chinese room argument:

One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on.
Let us apply the test to the Schank program with the following Gedankenexperiment. Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognise Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that “formal” means here is that I can identify the symbols entirely by their shapes.... Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers.... I produce the answers by manipulating uninterpreted formal symbols.... I am simply an instantiation of the computer program.

Now, the [claim] made by strong AI [is] that the programmed computer understands the stories.... But we are now in a position to examine [this claim] in light of our thought experiment....

[1] See Harnad [8].
[2] The application of the Chinese room argument to ALife is also discussed by Keeley [9] and by Sober [13].

© 2003 Massachusetts Institute of Technology    Artificial Life 8: 371–378 (2002)
[I]t seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank’s computer understands nothing of any stories, ... since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.

With a number of small modifications this becomes the following argument directed against strong ALife:

One way to test any theory of life is to ask oneself what it would be like if the theory were true. Consider the following Gedankenexperiment. Suppose that I’m locked in a room where I have a set of rules that enable me to correlate one set of formal symbols with another set of formal symbols, and that all “formal” means here is that I can identify the symbols entirely by their shapes. Suppose also that after a while I get so good at manipulating the symbols and the rule designers get so good at writing the rules that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—an invertebrate life form (say) appears to be present in the room. I produce this impression by manipulating uninterpreted formal symbols. I am simply an instantiation of the computer program.

Now the claim made by strong ALife is that the programmed simulation is a life form in its own right. But we are now in a position to examine this claim in light of our thought experiment. It seems to me quite obvious in the example that no invertebrate life form is present. I can have “inputs” and “outputs” that are indistinguishable from those of a real invertebrate, and any formal program you like, but there is still no invertebrate in the room with me.
For the same reasons, a computer running the same program contains no life form, since in the Gedankenexperiment the computer is me, and in cases where the computer is not me, the computer has nothing more than I do in the story.

We take it that, if the Chinese room argument were successful in the sphere of artificial life, then it would show that all forms of the claim “X is alive in virtue of running such-and-such a program” are false. However, the argument would not, so far as we can see, be applicable to so-called “test-tube ALife,” where the aim is to create artificial life biochemically, rather than by computer simulation. The argument would apply to all claims that computer simulations of life are alive, and also to the claim that computer viruses and other virtual entities, including virtual robots, are alive.

3 The Basic Error in the Argument[3]

Both the parent version of the Chinese room argument and the new form succumb to the same criticism: the argument is not logically valid. An argument is logically valid if and only if its conclusion is entailed by its premise(s); an argument is sound if and only if it is logically valid and each premise is true. The proposition that the formal symbol manipulation carried out by the person in the room—call him or her Clerk—does not enable Clerk to understand the Chinese story by no means entails the quite different proposition that the formal symbol manipulation carried out by Clerk does not enable the Room to understand the Chinese story. The Room is the system consisting of Clerk, Clerk’s pencils and erasable paper memory, the rule books containing the program, the input-output slots, and any other items, such as a clock, that Clerk may need in order to run the program by hand.
The claim that the Chinese room argument is valid is on a par with the claim that the statement “The organization of which Clerk is a part has no taxable assets in Japan” follows logically from the statement “Clerk has no taxable assets in Japan.”

It is important to distinguish this, the logical reply[4] to the Chinese room argument, from what Searle calls the systems reply. The systems reply is the following claim [12, p. 419]:

While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system and the system does understand the story.

As Searle correctly points out, the systems reply is worthless, since it “simply begs the question by insisting without argument that the system must understand Chinese.” The logical reply, on the other hand, is a point about entailment. The logical reply involves no claim about the truth—or falsity—of the statement that the Room can understand Chinese.

Of course, any logically invalid argument can be rendered valid with the addition of further premises (in the limiting case one simply adds the conclusion to the premises). The trick is to produce additional premises that not only secure validity but are sustainable.
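The logical reply can be put schematically. Writing U(x) for “the formal symbol manipulation enables x to understand the Chinese story” (our notation, introduced here for illustration; it is not Searle’s), the inference the Chinese room argument relies on is an instance of a pattern that first-order logic does not license:

```latex
% The inference underlying the Chinese room argument:
%   premise:     \neg U(\mathrm{Clerk})
%   conclusion:  \neg U(\mathrm{Room})
\neg U(\mathrm{Clerk}) \;\therefore\; \neg U(\mathrm{Room})
% Schematically, with distinct terms a and b:
%   \neg F(a) \not\models \neg F(b)
% since a model may make F(a) false while making F(b) true.
```

The taxable-assets comparison instantiates the same pattern: the premise about Clerk can be true while the conclusion about the organization containing Clerk is false, so the premise does not entail the conclusion.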