Is Consciousness Nothing More Than Input/Output?

The Physicalist claims that consciousness is nothing more than certain physical events. For instance, in humans, the experience of pain is nothing over and above some neurons firing. If that’s right, then it should in principle be possible for us to construct a purely physical thing which is conscious! Call it an A.I., or a robot. The question, then, is: COULD such a thing ever be truly conscious?

1. The Turing Test: Let us start by considering a story:

Raymond Smullyan, “An Unfortunate Dualist” (1980)

Once upon a time there was a dualist. He believed that mind and matter are separate substances. Just how they interacted he did not pretend to know—this was one of the “mysteries” of life. But he was sure they were quite separate substances. This dualist, unfortunately, led an unbearably painful life—not because of his philosophical beliefs, but for quite different reasons. … He longed for nothing more than to die. But he was deterred from suicide by such reasons as … he did not want to hurt other people by his death. … So our poor dualist was quite desperate. Then came the discovery of the miracle drug! Its effect on the taker was to annihilate the soul or mind entirely but to leave the body functioning exactly as before. Absolutely no observable change came over the taker; the body continued to act just as if it still had a soul. Not the closest friend or observer could possibly know that the taker had taken the drug, unless the taker informed him. … [O]ur dualist was, of course, delighted! Now he could annihilate himself (his soul, that is) in a way not subject to any of the foregoing objections. And so, for the first time in years, he went to bed with a light heart, saying: “Tomorrow morning I will go down to the drugstore and get the drug. My days of suffering are over at last!” With these thoughts, he fell peacefully asleep. Now at this point a curious thing happened. A friend of the dualist who knew about this drug, and who knew of the sufferings of the dualist, decided to put him out of his misery. So in the middle of the night, while the dualist was fast asleep, the friend quietly stole into the house and injected the drug into his veins. The next morning the body of the dualist awoke—without any soul indeed—and the first thing it did was to go to the drugstore to get the drug. He took it home and, before taking it, said, “Now I shall be released.” So he took it and then waited the time interval in which it was supposed to work. At the end of the interval he angrily exclaimed: “Damn it, this stuff hasn’t helped at all! I still obviously have a soul and am suffering as much as ever!”

What is the moral of this story? At the end, Smullyan asks us, “Doesn’t all this suggest that perhaps there might be something just a little wrong with dualism?” If dualism were the correct view, then it should be in principle possible for a being to exhibit the external behavior of consciousness without actually BEING conscious (i.e., philosophical zombies would be metaphysically possible). Smullyan’s story (supposedly) illustrates that this is absurd. What is this mysterious thing called consciousness that our protagonist believes to be separable from conscious behavior, if not the behavior itself?


In 1950, Alan Turing proposed that an A.I. would be “just like us” mentally if it could pass a certain test. The test was to have a human being ask it a series of questions via something like text messaging. If the human couldn’t tell whether she was texting a human or a robot, then the A.I. passed the test; that is, we would judge that the robot was CONSCIOUS.
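To make the shape of the test concrete, here is a minimal sketch in Python. It is purely illustrative: the participants, the judge, and the questions are all invented stand-ins, and a real test would involve open-ended conversation.

```python
import random

def run_turing_test(ask_human, ask_machine, interrogator):
    """A minimal sketch of Turing's imitation game (illustrative only).

    ask_human / ask_machine: functions mapping a question to a reply.
    interrogator: inspects the transcript and guesses "human" or "machine".
    """
    # The interrogator converses with a hidden respondent via text alone.
    identity, ask = random.choice([("human", ask_human),
                                   ("machine", ask_machine)])
    questions = ["What is your name?", "Do you feel pain?", "What is 7 x 8?"]
    transcript = [(q, ask(q)) for q in questions]
    guess = interrogator(transcript)
    # The machine "passes" if guesses are no better than chance over many runs.
    return identity, guess

# Toy participants: the machine mimics the human's replies perfectly,
# so the judge is reduced to guessing.
human   = lambda q: "Hmm, let me think about that..."
machine = lambda q: "Hmm, let me think about that..."
judge   = lambda transcript: random.choice(["human", "machine"])

print(run_turing_test(human, machine, judge))
```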

Thus, the proposal here is that consciousness is present wherever there is a certain kind of external (i.e., observable) behavior. (Call this proposal ‘Behaviorism’.)

Behaviorism: Conscious mental states are reducible to behavior; e.g., to be in pain just is to say “Ouch!” in response to certain stimuli. And so on.
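As a caricature of this reduction, a behaviorist “mind” can be sketched as nothing but a stimulus-to-response table. The entries below are invented for illustration:

```python
# Behaviorism, caricatured: to be in pain JUST IS to be disposed to
# produce certain outputs given certain inputs. There is no further
# inner state for the code to consult, because none is posited.
dispositions = {
    "pinprick": "Ouch!",
    "Are you happy?": "Yes, I'm happy.",
    "Are you in pain?": "Yes, it hurts.",
}

def respond(stimulus):
    # The mapping itself IS the mental state, on this view.
    return dispositions.get(stimulus, "...")

print(respond("pinprick"))  # -> Ouch!
```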

Scientists tell us that such A.I.s will someday be a reality (and maybe they already ARE!). But some philosophers have questioned Turing’s reasoning.

2. The Chinese Room: Imagine the following scenario, described by John Searle (1980):

The Chinese Room

You know absolutely nothing about the Chinese language. You do not speak a word of it. Some scientists stick you in a room all by yourself and give you a giant 3-ring binder. In the binder are thousands of pages with 2 columns. In both columns are weird strings of symbols. The scientists tell you that they will be slipping pieces of paper under the door. These papers will have weird strings of symbols on them too. When you receive these slips of paper, you are to consult your 3-ring binder and find that string of symbols somewhere in the left-hand column. You are told that you must then copy down onto the piece of paper whatever corresponding string of symbols is written in the right-hand column, and slip the paper back under the door.

In the story above, it turns out that the “weird symbols” are Chinese characters. The strings of symbols in the left-hand column of the binder are simply sentences in the Chinese language, and the strings in the right-hand column are appropriate responses that one might give (in Chinese) if someone said the corresponding left-hand sentence to them. Thus, in this way, the person in “The Chinese Room” is able to “communicate” with someone on the other side of the door in Chinese.
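Notice that the room’s entire procedure can be written down as a lookup table: match the incoming string, copy out its partner. A minimal sketch, where the sentence pairs are placeholders standing in for the binder’s thousands of pages:

```python
# The 3-ring binder: left-hand column -> right-hand column.
# The operator matches shapes only; meaning is never consulted.
binder = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "Fine, thanks."
    "你会说中文吗？": "当然，说得很流利。",    # "Do you speak Chinese?" -> "Of course, fluently."
}

def chinese_room(slip_of_paper):
    # Find the string in the left column; copy out the right column.
    return binder.get(slip_of_paper, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```

Searle’s point, developed below, is that this function could pass the behavioral test while nothing in it, neither the operator nor the binder, understands a word.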

But, here, Searle notices a problem with Turing’s test for the presence of a mind: The person in The Chinese Room WOULD be able to pass a version of Turing’s test. That is, a Chinese speaker outside of The Chinese Room would be convinced that the person inside of the room was fluent in Chinese. And yet, it is clear in the example that the person inside of The Chinese Room doesn’t understand Chinese. They don’t speak a single word of it! And they don’t understand one bit of what they are doing.


Against A.I.: Many Physicalists claim that the human brain is no different than a really advanced computer. For, our cognitive processes are nothing more than a series of inputs and outputs. One day, it is said, computers will be able to mimic these inputs and outputs and become conscious themselves. Once they begin to process information in a certain way (i.e., the way WE do), computers will begin to have conscious experiences.

Searle’s story challenges this conclusion. Searle’s position is that, even if a computer could IMITATE a mind (by processing information just as we do; i.e., by mastering our language, claiming to have certain thoughts and beliefs, etc.), it would be merely that: An imitation. An artificial intelligence would have no more conscious understanding than the person in The Chinese Room. In order to have CONSCIOUSNESS, something more than a mere input-output pattern is needed. (To illustrate, it may help to simply interact with Mitsuku or Cleverbot for a while, and consider whether all the bot needs in order to become conscious is to become more convincing. What do you think?)

An argument for this conclusion would look something like the following:

1. Physicalist Hypothesis: Whenever an individual behaves as if it is conscious—that is, whenever the system has the appropriate inputs and outputs—then that thing is conscious. (A specific example: A conscious mind’s belief that she is happy JUST IS her reply—or being disposed to reply—that she is happy, when asked.)
2. The man in The Chinese Room has the appropriate inputs and outputs.
3. The man in The Chinese Room is not conscious (that is, he does not have the conscious mental states that Behaviorism would attribute to him; for instance, though he IS disposed to report to his Chinese-speaking partner that he knows Chinese when asked, this is NOT a belief that he actually has).
4. Therefore, the physicalist hypothesis (from premise 1) is false.

Searle has not proved that consciousness/understanding is NOT reducible to physical processes. But his example IS somewhat persuasive in suggesting that consciousness cannot be reduced to THAT sort of physical process; i.e., consciousness is not MERELY a kind of behavior (it is not reducible to appropriate inputs and outputs).

Objection: Some suggest that premise 2 is false, because Searle’s example is too simple. Imagine that the person in the Chinese room gains access to the outside world (e.g., via cameras and microphones), and then begins to connect the Chinese symbols with the things she sees on her monitors. She even learns to form NEW, increasingly complex strings of symbols that were not in the 3-ring binder. In short, imagine that:

(1) she connects the abstract symbols with things in the tangible world,
(2) she begins to learn on her own, and
(3) the original output that she creates on her own becomes very complex.


Would you say that the person in the Chinese room was still merely IMITATING the Chinese language? Or would we conclude that she now has TRUE UNDERSTANDING?

Alternatively, perhaps premise 3 is false. Perhaps the SYSTEM as a whole DOES have understanding? We would not expect the PERSON in the room to have understanding, because she is only a COMPONENT of a system that understands, just as we would not expect a set of neurons to have understanding, because they are only a COMPONENT of a system that understands. (What do you think?)

3. The China Brain: Perhaps certain kinds of behavior are a pretty good INDICATION that consciousness is present, but that’s not what consciousness IS. Instead, perhaps consciousness occurs whenever a physical system FUNCTIONS in a certain kind of way (e.g., in the way that our brains function).

Functionalism: Conscious mental states are identical to system-functions; e.g., pain is just whatever part of any particular input-output system causally interacts with other parts of that system and functions in the same way as our brains function when we are in pain.
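The functionalist picture is often illustrated with a machine table: a mental state is a node defined by which inputs move the system into it, and which outputs and further states it produces. Here is a minimal sketch; the states, inputs, and outputs are invented for illustration:

```python
# Machine functionalism, caricatured: "pain" is whatever state plays
# this causal role, whether realized in neurons, silicon, or people.
machine_table = {
    # (current state, input) -> (next state, output)
    ("calm", "tissue damage"): ("pain", "Ouch!"),
    ("pain", "tissue damage"): ("pain", "Stop it!"),
    ("pain", "aspirin"):       ("calm", "That's better."),
}

def step(state, stimulus):
    # Unrecognized stimuli leave the state unchanged and produce no output.
    return machine_table.get((state, stimulus), (state, None))

state = "calm"
for stimulus in ["tissue damage", "tissue damage", "aspirin"]:
    state, output = step(state, stimulus)
    print(f"{stimulus!r} -> state={state!r}, output={output!r}")
```

On this view it does not matter what runs the table; anything that implements these transitions is, by definition, in the relevant state.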

But, imagine a case from Ned Block (1978):

China Brain

The entire population of China (1.4 billion) is asked to engage in an experiment. The motions of a giant robot are dictated by a network of radio signals. Every single person is given a walkie-talkie and a series of instructions. The range of the walkie-talkies only extends to those people nearest to them. The instructions are things like, “When the person in front of you signals you with a beep, signal the person behind you with a beep,” etc. (Or something.) The point of the experiment is to perfectly simulate the sorts of signals that NEURONS give to each other in your brain. So, the population of China perfectly copies the functionality of a human brain, and the robot body that they are controlling performs the actions that its “brain” tells it to.

[Note: Scientists have since discovered that there are more than ONE billion neurons in the brain—there are more like 86 billion. But, in any case, rats are clearly conscious, and they only have 200 million neurons—the population of China has 7 times that many people!]
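To see the structure Block is imagining, here is a minimal sketch in which each citizen implements one toy “neuron,” relaying beeps exactly as the instructions describe. The three-person chain and the firing rule are invented simplifications; the real scenario wires up over a billion people.

```python
# Each citizen follows one rule: when you hear enough beeps, beep at
# everyone your walkie-talkie reaches. Collectively, the population
# copies the signal-passing pattern of a brain.
class Citizen:
    def __init__(self, name, threshold=1):
        self.name = name
        self.downstream = []      # who this citizen's walkie-talkie reaches
        self.heard = 0
        self.threshold = threshold

    def hear_beep(self, pending):
        self.heard += 1
        if self.heard >= self.threshold:
            self.heard = 0
            pending.extend(self.downstream)   # "fire": beep the next citizens

# A three-person chain standing in for 1.4 billion people:
a, b, c = Citizen("a"), Citizen("b"), Citizen("c")
a.downstream = [b]
b.downstream = [c]

pending = [a]                     # a "sensory" beep enters the network
while pending:
    citizen = pending.pop(0)
    citizen.hear_beep(pending)
    print("beep reached", citizen.name)
```

The question Block presses is whether running this pattern at full scale would not merely mimic a brain’s activity but constitute a conscious mind.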

Anyway, if the hypothesis is correct that conscious states are identical to FUNCTIONAL states, then the country of China would BE CONSCIOUS in the scenario above. Some sort of conscious mind would arise out of the Chinese people!


Block expects you to have the intuition that a mind would NOT arise in his example. And, since it is absurd to think that the China Brain would give rise to a conscious mind, it must not be the case that consciousness is reducible to a system of signals and impulses following a certain pattern. His argument runs as follows:

1. Physicalist Hypothesis: Whenever a physical system functions in a certain way (namely, in the way that conscious minds do), then that system will be conscious.
2. The population in the China Brain case does function in the appropriate way.
3. But, the population in the China Brain case (or, alternatively, the robot that they are controlling) is clearly not conscious.
4. Therefore, the physicalist hypothesis (from premise 1) is false.

Objection: But, is Block right? Some suggest that premise 3 is false, and that a mind WOULD arise. This is not that strange. Consider the ways in which collections of bees or slime molds make seemingly intelligent decisions, for example. Is it so terribly counter-intuitive to think that a sort of “hive mind” arises in those cases? Some suggest that the individual bees act as individual neurons in such a way that the entire hive becomes one conscious being. Perhaps this same thing would occur in China.

Block’s intuition likely results from an issue regarding scope. To the Chinese people, it would not SEEM like there was a China Mind. But that is because they would be acting at the level of the neuron. Similarly, at the level of a single neuron, it does not SEEM like there is a conscious mind; and if we looked at one little bee, it would not SEEM like there was a hive mind. That is just because the hive mind is not at the level of the bee, and the conscious human mind is not at the level of the neuron. If we ARE the neurons in one giant hive mind, we will never be able to get a glimpse of the mind itself. The conscious experiences would be occurring at an entirely different level.

Reply: Sure, hives might ACT like a single being, just as an artificial body connected to the China Brain might ACT like a human. But, would it BE CONSCIOUS? Is there some feeling or sensation that it is like TO BE a hive? Is there some qualitative feel that is what it is like TO BE the China Brain?
