
Searle: Locked Inside His Own Chinese Room
Roshan Shah '01

Abstract

I shall discuss a specific area of the artificial intelligence (AI) debate, the Chinese Room Argument, and attempt to show that it is inherently wrong. After establishing sufficient background for the problem by discussing the Turing Test and its role in Searle's argument, I shall address the Chinese room directly. My aim is to show that Searle's move from the homunculus to the wider system is unjustified and fallacious. I shall expound on this view in a quasi-dialogical form, incorporating Searle's retorts as I proceed through my argument. It will become apparent that his responses to the Systems Reply introduce no new evidence for his case. After exposing our fundamental disagreement, I shall then address his assumption that syntax is insufficient for semantics, showing that he again makes a premature claim on an empirical question. Further, I shall propose an illustration that supports the case that semantics can conceivably result from syntax.

In 1950, the classic debate over whether machines could think was fueled by the insights of AI genius Alan Turing. He proposed a scientific test for determining the success or failure of a thinking machine, or more specifically, a thinking computer. The Turing Test is simple: if a computer can perform in such a way that an expert interrogator cannot distinguish it from a human, then the computer can be said to think. Since then, it has been the goal of AI to design a system to pass this test. Thirty years after Turing first formulated his test, John Searle proposed a thought experiment involving a system that he argued would pass the famed Turing Test; he asserted, however, that any observer would clearly see that the system would not be able to think. I aim to expose the subtle fallacies in Searle's thought experiment, the Chinese Room Argument, and show where his reasoning fails.

Searle's Chinese Room Argument has prima facie merit and strength. He offers the following scenario. Place an English-speaking man ignorant of some language, say, Chinese, in a room with only a rulebook (written in English) and an input/output slot for communicating with the surrounding world. Now pretend this man is asked questions written in Chinese and passed through the slot. He is told to follow the instructions in the book and then to output a response for the Chinese interrogators. We assume that the instruction book has codified all the rules needed to speak fluently by mere Chinese symbol manipulation. The man follows the rules perfectly and produces flawless Chinese answers to the questions, yet notice that the exchange of "squiggles and squoggles" means nothing to him. The interrogators outside of the room, however, believe that the man inside the room understands Chinese.

The man, of course, symbolizes a computer, and the book symbolizes the program. Searle asserts that even though this room would pass the Turing Test, the little man inside would not understand the dialogue in the way that the interrogators believe. He then concludes that "just manipulating the symbols is not by itself enough to guarantee cognition, perception, understanding, thinking and so forth" (Searle, 1993). Searle states his position with the following axioms and conclusion:

1. Computer programs are formal (syntactic).
2. Human minds have mental contents (semantics).
3. Syntax by itself is neither constitutive of nor sufficient for minds.
∴ Thus, programs are neither constitutive of nor sufficient for minds. (Searle, 1993)
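To make vivid what the "formal symbol manipulation" of Axiom 1 amounts to, here is a minimal sketch of the rulebook as a lookup table. The Chinese strings and the RULEBOOK mapping are hypothetical stand-ins of my own, not Searle's examples; the only point is that the program pairs input shapes with output shapes without ever consulting their meanings.

```python
# A toy, hypothetical rendering of the rulebook: it pairs input symbol
# strings with output symbol strings purely by their shape; nothing in
# the table records what any string means.
RULEBOOK = {
    "你今天好吗？": "我很好，谢谢。",              # hypothetical question/answer pair
    "你最喜欢的食物是什么？": "我最喜欢芙蓉蛋。",  # hypothetical question/answer pair
}

def chinese_room(question: str) -> str:
    """Follow the rulebook exactly and return a canned symbol string."""
    # Fallback reply, equally uninterpreted by the 'man' inside.
    return RULEBOOK.get(question, "对不起，我不明白。")

if __name__ == "__main__":
    # A fluent-looking reply is produced while the code 'understands'
    # nothing about greetings, food, or Chinese.
    print(chinese_room("你今天好吗？"))
```

Whether any such table could ever be rich enough to pass the Turing Test is exactly what is at issue; the sketch only illustrates the sense in which the manipulation is purely syntactic.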
My main objection to Searle's argument deals with his application of the premise that the man does not understand the conversation. (Searle refers to this anticipated objection as the Systems Reply.) He cannot validly make the move of assuming that the system as a whole understands nothing. I concede that there is no way to prove the whole system understands, but there is, likewise, no way to prove the whole system does not understand, which is what Searle would have us believe. If a part fails to display some quality, there is no logical method available to infer that the greater system also fails to display that quality; to do so would be to fall prey to the fallacy of composition (Horn). Consider, for example, the analogy of a pilot flying an airplane. A pilot, by the laws of nature, surely cannot fly on his own, but that does not imply that when he is in a plane the airplane is also unable to fly. Even Searle would hesitate to argue that the plane cannot fly, yet he would have us believe that this same faulty logic is valid when applied to his Chinese room. The pilot, after all, is the symbol manipulator of the cockpit controls, just as the man is the symbol manipulator of the room. Be careful not to misunderstand the example. I am not drawing a parallel to argue for the room's ability to think. As I have said, there is no logical way to support that claim. The point is that we cannot assume the room does not understand merely because the man does not understand.
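The blocked inference can be stated compactly. The schema below is my own illustrative gloss on the fallacy of composition, not notation used by Searle, Copeland, or Horn; read P as "understands Chinese" (or "can fly") and s as the whole system.

```latex
% Illustrative schema (my gloss, not the sources' notation):
% from "some part of s lacks property P" it does not follow that "s lacks P".
\[
  \frac{\exists x\,\bigl(\mathrm{PartOf}(x,\,s)\ \wedge\ \neg P(x)\bigr)}
       {\neg P(s)}
  \qquad \text{(invalid: fallacy of composition)}
\]
% Pilot reading: let P = "can fly", s = the pilot-plus-airplane system,
% x = the pilot. The premise is true, yet the conclusion is false.
```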

Searle anticipated the Systems Reply when he first introduced his Chinese room argument and attempts to dismiss it, but once again, his counter-argument is simply wrong. Searle adjusts the scenario to internalize the system within the man in the room, so now the man does not even need to be in a room. He suggests that instead of the rulebook being a separate entity, we can imagine the man following the rules completely from memorization. Not surprisingly, the man still has not understood any of the Chinese exchange with the interrogator. Searle, here, tries to evade the objection by making the system exist within the man, but he misses the point. It remains that the man is not the system, so his lack of understanding cannot be projected onto the system. Searle's statement that "there is nothing in the 'system' that is not in me, and since I don't understand Chinese, neither does the system" is unsubstantiated (Searle, 1990). Copeland points out that "Searle himself makes no mention of why he believes [the projection onto the system to logically work]" (Copeland, 1993). For similar reasons described above, Searle cannot disregard the Systems Reply by merely asserting otherwise. In this case, the man has not become the system, so this adjustment of the Chinese room accomplishes little. Now the man encompasses the system rather than being encompassed by it. Searle still cannot logically draw conclusions about the system based on the man. Copeland formulates this bad move by Searle as the Part-Of Fallacy (Copeland, 1993). I shall return to the Systems Reply shortly, but first I will attempt to refute the Chinese room argument in another way.

Searle's basic Chinese room and even his modified, internalized-system Chinese room are both assumed to pass the Turing Test. This simple question, however, will show otherwise: looking out of the window, what color would you say the sky is right now? (Or, for another example, "What time is it?") Note that this is a perfectly valid question for an interrogator to ask in the Turing Test. Obviously, the man cannot produce a correct answer without making the observation himself, and that is impossible since he is only manipulating symbols according to the rulebook. It is an inherent flaw of the rulebook that it cannot deal with changing conditions. The best the room could do is answer with the generic blue, or even some creative twist on blue. But it would be a guess, whereas the human counterpart in the Turing Test would correctly identify the sky's hue. This objection, however, does not cause Searle to break stride; he retorts with the robot version of his Chinese room argument. Suppose that the room is located inside a "humanoid robot with a soft pink plastic coating stuffed full of pressure-sensitive tactile sensors. [It has] arms, legs, cosmetically finished TV eyes, electronic ears, a fine speaking voice, etc." (Copeland, 1993). Searle argues that this type of android would satisfy the condition of fully interacting with the world and would in fact pass the Turing Test. I will not argue with his modification. In fact, it is the response I want. Now that this new form of the Chinese room has been established, let us return to the original objection.

Searle still holds that the robot does not think because the man within is still benighted with regard to external reality. The man merely manipulates the Chinese symbols (or 0's and 1's) to form convincing body movements and convincing conversation. Searle would hold that the basic principle has not changed and that even with its new powers of sense, the fact remains that the system is only manipulating symbols. But wait, everything has changed! In fact, this new case should make clear the shortcomings of Searle's response to the Systems Reply. Now that the meaningless symbols result from direct observation of the world, it should appear more obvious that the symbol manipulator's understanding bears little on the entire system's understanding. The 0's and 1's are still 0's and 1's, and the Chinese symbols are still Chinese symbols, but now there is a certain causal relationship between the signifier and the signified (i.e., symbol and object). The entire system, then, is much more than the man with his rulebook, and while at its foundation lies a symbol manipulator, it becomes harder to call the system ignorant based only on the man's ignorance. Hypothetically, let us imagine that the android functions identically to a human and conceivably feels love, fear, and other distinctly human states. Granted, the man within would not feel anything or be aware that the android feels anything, but that does not change the fact that the entire system feels. When its vision sensors are triggered and the appropriate symbols are transmitted at the sight of a loved or feared object, the android would feel the corresponding sensations even though the man knows nothing.
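The difference the robot version makes can be sketched in the same toy terms as before. Here the token passed along is produced by a stubbed, hypothetical sensor reading rather than drawn from a fixed table, so it is causally connected to the state of the world even though the code that shuffles it still attaches no meaning to it. The read_sky_camera() stub and the symbol strings are illustrative assumptions of mine, not part of Searle's or Copeland's presentation.

```python
import datetime

def read_sky_camera() -> str:
    """Hypothetical sensor stub standing in for the android's TV eyes.
    A real robot would return a measured color; here we fake one."""
    return "gray" if datetime.datetime.now().hour < 7 else "blue"

def android_room(question: str) -> str:
    # The manipulator still only shuffles uninterpreted tokens, but one
    # token is now filled in from the sensor at the moment of answering.
    if question == "What color is the sky right now?":
        observed = read_sky_camera()          # causal link: world -> symbol
        return f"The sky is {observed}."      # token passed on, still unglossed
    return "I do not understand the question."

if __name__ == "__main__":
    print(android_room("What color is the sky right now?"))
```

Nothing about the inner manipulation has changed; what has changed is that the emitted token now covaries with the world, which is the causal relationship between signifier and signified just described.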
William Lycan discusses this same view: "Suppose further that its internal states are connected to elements of its environment in just the way demanded by our psychosemantics. Assuming these things were possible at all, then, presumably, the setup would understand Chinese" (Bynum & Moor, 1998). It is this causal connection with the world, which the android undeniably possesses, that is important to understanding meaning. This idea is important, and I shall return to it later, but for now let us consider Searle's rebuttal. Searle's counter-argument offers no new insight into the debate. He falls into the same fallacy of composition:

Nor does it help the argument to add the causal theory of reference, for even if the formal tokens in the program have some causal connection to their alleged referents in the world, as long as the agent has no way of knowing that, it adds no intentionality whatever to the formal tokens. Suppose, for example, that the symbol for egg foo yung in the Chinese room is actually causally connected to egg foo yung. Still, the man in the room has no way of knowing that. For him, it remains an uninterpreted formal symbol. (Bynum & Moor, 1998)

Again, he maintains that since the man has no way of knowing the object causing the formal symbol, the system also does not know the object causing the symbol. But that cannot be; the symbol is the result of the system knowing the object in the first place! It is surprising that Searle would still rest his entire argument on the benighted homunculus, but again and again he comes back to this same point of contention. It is here that our dialogue comes to a halt. Searle staunchly defends his argument by asserting that his logic is sound, while I remain steadfast in my belief that it is logically impossible to infer qualities of the system from qualities of the man.

Let us proceed to the greater issue for Searle and his Chinese room argument: the claim that syntax is not sufficient for semantics (see Axiom 3, above). Copeland explains the two as follows: "To have a mastery of syntax is to have a mastery of some set of rules for performing symbol manipulations; and to have a mastery of semantics is to have an understanding of what the symbols actually mean" (Copeland, 1993). Simply, my point is that the truth of Axiom 3 cannot be ascertained from his thought experiment with the Chinese symbols. It remains an empirical issue, and this axiom therefore begs the question. It is true that the man in the room is semantically challenged, but even if we were to accept Searle's fallacious claim that the entire system is semantically challenged, he is in no position to claim that syntax will never be sufficient for semantics. If the proper program can be created, one that is causally connected to the world (perhaps even the android in Searle's modified Chinese room above), then we would have to reevaluate our definitions of syntax and semantics and concede that the former is sufficient for the latter. Searle's mistake is shown by an intriguing example offered by Paul and Patricia Churchland called the Luminous Room Argument (Churchland & Churchland, 1990). They draw a parallel between Searle's fallacious argument and an argument raised against James Clerk Maxwell's nineteenth-century suggestion that light and electromagnetic waves are identical:

1. Electricity and magnetism are forces.
2. The essential property of light is luminance.
3. Forces by themselves are neither constitutive of nor sufficient for luminance.
∴ Electricity and magnetism are neither constitutive of nor sufficient for light. (Churchland & Churchland, 1990)

In this obviously empirical case, further scientific research revealed that this argument was wrong and fallaciously based on intuition. This analogy shows us that we will only know whether syntax is sufficient for semantics after science guides us into that realm. Moreover, I believe that it will be possible one day to physically show semantic understanding as a result of symbol manipulation. Here I return to the idea of causal connection and introduce the promising area of parallel-distributed-processing (PDP) systems.

PDP systems offer the potential of achieving the level of symbol manipulation required for semantic understanding. They are bottom-up systems resembling a self-learning algorithm. The Churchlands discuss at length the potential these systems have for AI.
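Since the argument leans on what such bottom-up systems are, a minimal sketch may help. The code below trains a single linear unit with the classic delta rule on a toy task; the task and the numbers are illustrative assumptions of mine, not anything the Churchlands or Searle discuss, and a genuine PDP model would use many such units operating in parallel.

```python
# One "node" of a PDP-style network, trained bottom-up by the delta rule:
# its connection weights start at zero and are adjusted only by exposure
# to examples, never by hand-written rules.
examples = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
            ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]   # illustrative OR-like task

weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(50):                                  # repeated exposure
    for inputs, target in examples:
        output = sum(w * x for w, x in zip(weights, inputs)) + bias
        error = target - output                      # how far the response is off
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error                         # strengthen or weaken connections

print(weights, bias)                                 # the learned "rulebook" is just numbers
```

The point of the illustration is only that the unit's competence is acquired from its causal contact with the examples rather than written in by hand, which is the feature the essay points to in calling such systems bottom-up and self-learning.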

For now, I will rest my argument on the fact that the issue depends on science to demonstrate that syntax is sufficient for semantics. Searle does not accept this and maintains that it is not an empirical question. He adjusts his Chinese room argument to address the PDP system by creating the Chinese Gym argument:

I have a Chinese gym: a hall containing many monolingual, English-speaking men. These men would carry out the same operations as the nodes and synapses in a connectionist architecture as described by the Churchlands, and the outcome would be the same as having one man manipulate symbols according to a rulebook... there is no way for the system as a whole to learn the meanings of any Chinese words. (Searle, 1990)

Searle resorts to his basic assumption that the understanding of a homunculus (in this case, multiple homunculi) can be extrapolated onto the entire system. I have already discussed the lack of logic in this move and how the issue is not reconcilable. I shall refute his gym example by again illustrating that the objections against semantic understanding in a symbol manipulator are invalid.

Meet Joe. Joe is a rather average chap with a rather average life. He lives in an apartment with a roommate and 2.5 pets. He likes to relax on Sunday afternoons with his friends and a television, and he has a penchant for the fine arts. He reads Shakespeare, enjoys classical concerts, loses himself in a Dali masterpiece, and mellows out to Radiohead. Joe, however, is slightly different from a normal Chinese bachelor of 25: he has a tiny gym of homunculi in his cranium. These homunculi are mindlessly following their rulebooks and struggling against the pro-suicide environment caused by their boredom. How would Searle describe this fellow? Searle would not argue that a "Joe" could not possibly exist, but rather that this "Joe" would be nothing more than a zombie. Well, how can that be? Joe is causally connected with the world and behaves as you and I do. Can Searle still rely on his fallacy of composition to explain away Joe's apparent semantic understanding? Is there, in fact, semantics within Joe resulting from the syntactical manipulations of the gym? It seems plainly obvious to me that Joe has semantic understanding regardless of what Chinese Gym Sub-Compartment-1 understands. I believe a "Joe" would shed light on our own mysterious cognitive abilities. Humans are conceivably symbol manipulators without the additional special mystical quality that Searle would have us believe is required.

Searle's Chinese room argument contains a basic logical fallacy, namely, that the system does not understand because the man does not understand. Further, it is apparent that he relies heavily on this faulty claim in his subsequent rebuttals and modifications of the Chinese room. At best, Searle can only assert the validity of this claim and the soundness of his argument, and it is here that we have a direct, irreconcilable disagreement. His argument, therefore, fails to discount the Turing Test as an indicator of artificial intelligence. Furthermore, Searle's argument that syntax is not sufficient for semantics is a premature claim on an empirical question.

References

Broadbent, D. (Ed.) (1993). The Simulation of Human Intelligence. Oxford: Blackwell.
Bynum, T. & Moor, J. (1998). The Digital Phoenix: How Computers are Changing Philosophy. Oxford: Blackwell.
Churchland, P. & Churchland, P. (1990). Could a machine think? Scientific American, January, 32-37.
Copeland, J. (1993). Artificial Intelligence: A Philosophical Introduction. Cambridge: Blackwell.
Horn, R. E. Mapping Great Debates: Can Computers Think?
Searle, J. R. (1990). Is the brain's mind a computer program? Scientific American, January, 25-31.

About the Author

Roshan Shah '01 is a chemistry major with minors in philosophy and public health policy. He hopes to work in some area of public health. Some of the possibilities he is considering are working in a refugee camp through Doctors Without Borders, the CDC, or a health policy organization in Washington, DC. His short-term goal, however, is to attend medical school. Roshan's campus activities include the Jacko, the Refugee Fund Group, the Cancer Awareness Organization, and the Upper Valley Wilderness Response Team. He is also a member of the Alpha Delta fraternity.
