The Nature of the Mind II: Dennett and Churchland

(I) The Intentional Stance

In his “Intentional Systems,” Dennett argues that there are three different stances that we can adopt when trying to understand (or come to know) something: (a) a design stance, (b) a physical stance, and (c) an intentional stance. When we adopt the design stance, we attempt to predict the future actions of a thing or a system of things by appeal to the underlying design of that thing. Dennett’s example throughout is of a chess-playing computer: “one can predict its designed response to any move one makes by following the computation instructions of the program.” (p. 337b) And our predictions based on the design stance all rely on the notion of function. When we adopt the physical stance, we make predictions based on the actual physical state of the particular object (along with our knowledge of the laws of nature). But, according to Dennett, chess-playing computers have advanced to such a degree that predicting their behavior based on the design stance or the physical stance is very difficult. “A man’s best hope of defeating such a machine in a chess match is to predict its responses by figuring out as best he can what the best or most rational move would be, given the rules and goals of chess.” (p. 338b) In other words, one ought to adopt the intentional stance with respect to the computer. This stance assumes rationality. “One predicts in such a case by ascribing to the system the possession of certain information and supposing it to be directed by certain goals, and then by working out the most reasonable or appropriate action on the basis of these ascriptions and suppositions.” (p. 339a)

But now here’s the interesting move on Dennett’s part: “It is a small step to calling the information possessed the computer’s beliefs, its goals and subgoals its desires.” (ibid.) What do you think? Well, according to Dennett, you need not be bothered by this because, he claims, he is not saying that computers really have beliefs and desires, only that “one can explain and predict their behavior by ascribing beliefs and desires to them.” (p. 339b) In the end, “the decision to adopt the strategy is pragmatic, and is not intrinsically right or wrong.” (ibid.) This claim goes back to what Dennett says in the beginning; namely, that “a particular thing is an intentional system only in relation to the strategies of someone who is trying to explain and predict its behavior.” (p. 337b)

A potential problem enters, however, when we realize that intentional systems don’t always obey the rules of rationality (i.e., logic). Eventually, Dennett says, we end up having to look at things from a design stance. This is actually OK, because the design stance is more reliable. “In the end, we want to be able to explain the intelligence of man, or beast, in terms of his design, and this in turn in terms of the natural selection of this design…” (p. 342a) And, ultimately, the intentional stance presupposes rationality and intelligence; it doesn’t explain it. (p. 344a)

What does all this mean? Well, it means, first of all, that we might not have to worry about the question “Can a computer think?”, only about whether we are justified in treating a computer as an intentional system. Further, like Ryle (perhaps), it seems that Dennett is not saying anything about what there is, only about how we can consider things in scientific explanation. One remaining question is this, however: Is there anything that Dennett is leaving out of the picture of seemingly intelligent beings or systems?
One answer might be that what’s left out are qualia – the feelings of my 1st-person perspective. It does seem that this model is fine – but only from the 3rd-person perspective.
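To make the contrast between the stances a bit more concrete, here is a toy sketch (my own illustration, not Dennett’s) of two ways of predicting what a very simple “chess-playing” machine will do next. All of the names and the lookup-table “program” are hypothetical.

```python
# Design stance: consult the machine's actual program (here, a bare lookup table).
PROGRAM = {"opening": "e4", "midgame": "Nf3", "endgame": "Kg2"}

def predict_from_design(position: str) -> str:
    """Predict the next move by following the machine's design (its program)."""
    return PROGRAM[position]

# Intentional stance: ignore the program, ascribe a goal ("play the best move")
# and assume the machine is rational enough to pursue it.
def predict_from_intentional_stance(candidate_moves: dict) -> str:
    """Predict the move a rational agent with these ascribed evaluations would choose."""
    return max(candidate_moves, key=candidate_moves.get)

if __name__ == "__main__":
    print(predict_from_design("opening"))                                       # 'e4'
    print(predict_from_intentional_stance({"e4": 0.9, "a3": 0.1, "Nh3": 0.2}))  # 'e4'
```

The point of the sketch is only that the second prediction never consults the machine’s design at all; it treats the machine as if it had goals and were rational, which is exactly what the intentional stance does.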

(II) Eliminative Materialism

The main claim in Churchland’s piece is that all the concepts of “folk psychology” – e.g. beliefs, desires, fear, sensation, etc. – will be (or can be) eliminated by a completed neuroscientific theory. In other words, at some point in the future, we will cease to recognize their real existence, just as we have ceased to recognize any number of concepts from earlier scientific theories (e.g., phlogiston).

Churchland gives three reasons to believe that the concepts of folk psychology should be abandoned. (pp. 351a-352b) First, folk psychology often fails to predict and explain things. Second, our early theories in other fields of inquiry were confused and unhelpful; so why think our crude folk-psychological theories are any more accurate? Third, the prospects of adequately making one-to-one correspondences between folk-psychological states and brain states (as expected by identity theories) are not great.

Against the counter-argument that “one’s introspection reveals directly the existence of pains, beliefs, desires, fears and so forth” (p. 352b), Churchland points out that “all observation occurs within some system of concepts, and our observation judgments are only as good as the conceptual framework in which they are expressed.” (ibid.) The point is that eliminative materialism will produce a wholesale trashing of the conceptual framework of beliefs, desires, and so on. (His response to the second criticism is similar.) The third counter-argument to eliminative materialism is that it exaggerates the defects of folk psychology and presents a romantic (?!) and enthusiastic picture of a possible future neuroscience. Perhaps, Churchland says. But it is clear that one should try to go the hard-core materialist route.


The Nature of the Mind III

Is materialism true? Or, better: can we explain all phenomena purely in terms of the states and interactions of matter and physical laws? It might have seemed that mental phenomena are resistant to such explanations. But our question for today is this: can we legitimately say that a computer (which is a material thing) thinks? If so, then we have at least one kind of material object whose mental properties are simply the effects of its material components. And if that material thing can be said to think, why can’t we simply be material things that think?

I. “Leibniz’s Mill”

Leibniz was one of the first to take seriously the challenge of materialism with respect to the mind. In §17 of his “Monadology” (1714), he writes the following:

[P]erception, and what depends on it, is inexplicable in terms of mechanical reasons, that is, through shapes and motions. If we imagine that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters into a mill. Assuming that, when inspecting its interior, we will only find parts that push one another, and we will never find anything to explain a perception.

In other words, the mind and its contents cannot be explained solely in material terms. The material, by itself, cannot give us perceptions and thoughts.

II. The “Imitation Game”

I have asked before in class “Can a machine think?” or “Can a computer think?” In his classic article, “Computing Machinery and Intelligence” (1950), Alan Turing is dismissive of this formulation of the question. Instead, he offers a famous approach to the general problem of the nature of intelligence and computation: an “imitation game” (which has since come to be known as a “Turing Test”).

The game has three players: A, B, and C. Let A be a computer and B a person. C acts as an interrogator, who tries to determine who (or what) A is and who (or what) B is, but who cannot see A or B. Turing’s question becomes “Could a computer fool the interrogator into thinking that it is a person?” Or, in other words, “Could a computer be programmed so that its answers to ordinary questions in natural language were so like the answers of native speakers that a blind observer couldn’t determine if it was a computer?”
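Here is a minimal sketch of the imitation-game setup just described. The interfaces and names are hypothetical – Turing’s paper does not, of course, specify any code – but the structure (two hidden respondents, a judge who must name the human) follows the description above.

```python
import random
from typing import Callable

Responder = Callable[[str], str]   # maps an interrogator's question to an answer

def play_imitation_game(computer: Responder, human: Responder,
                        judge: Callable[[Responder, Responder], str],
                        rounds: int = 10) -> float:
    """Return the fraction of rounds in which the judge mistakes the computer for the human.

    The judge is shown two anonymous respondents, labeled "A" and "B", and must
    name the label it believes belongs to the human.
    """
    fooled = 0
    for _ in range(rounds):
        # Hide the players' identities behind labels A and B, in random order.
        if random.random() < 0.5:
            a, b, computer_label = computer, human, "A"
        else:
            a, b, computer_label = human, computer, "B"
        if judge(a, b) == computer_label:   # the judge picked the computer as "the human"
            fooled += 1
    return fooled / rounds
```

On this way of scoring things, a machine “does well” in the game to the extent that the returned fraction is high.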

In his article, Turing predicts that within 50 years machines ought to be able to do well in an imitation game (i.e., after five minutes of questioning, an average interrogator should have no more than a 70% chance of correctly identifying the machine). (p. 361a) To the best of my knowledge, this has not happened. But there is a yearly Loebner Prize for computers that imitate conversation. And, if you like, you can chat on-line with a computer, “A.L.I.C.E.” (follow link on the course website).

By the way, if you’ve seen Blade Runner, then you know that Blade Runners (special police, whose job it is to “retire” replicants) interrogate others, trying to determine whether or not they are androids.

III. Searle and the “Chinese Room” Experiment

A. The Original Thought Experiment and Argument

Searle’s goal in “Minds, Brains, and Programs” is to show that the claims of “strong AI” are false. “Strong AI” is characterized by the belief that “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.” (p. 368b)

He asks us to consider the case in which he is in a room and given English instructions on how to manipulate Chinese characters – that is, when given a certain input in Chinese, he has directions on how to give an output in Chinese characters. Assume that he is able to convince a native speaker outside of his room that there is a native speaker in the room. In other words, he acts as (part of) a system that successfully passes the Turing Test. But, Searle says, this shows that he is able to pass the Turing Test without understanding Chinese. Therefore, strong AI must be false.
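A toy sketch (mine, not Searle’s) of what the man in the room does: purely formal symbol manipulation. The “rule book” below is just a mapping from input strings to output strings; the particular entries are made up for illustration, and nothing in the code understands Chinese.

```python
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # hypothetical entries in the rule book
    "你会说中文吗？": "当然会。",
}

def chinese_room(input_symbols: str) -> str:
    """Apply the instructions: match the incoming symbols, return the paired output."""
    return RULE_BOOK.get(input_symbols, "对不起，请再说一遍。")

print(chinese_room("你好吗？"))   # looks like a fluent answer, but it is only a lookup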

We could represent the argument this way:

(1) If strong AI is true, then there could be a program for “speaking Chinese” such that the computer running the program could be said to understand Chinese.
(2) I could run a program for Chinese and still not understand Chinese.
(3) Therefore, strong AI is false.

Now, Searle is not arguing that machines cannot think, for on his view a brain is a thinking machine. But he does see very strong arguments against attributing thought and mental capacities to a machine “where the operation of the machine is defined solely in terms of computational processes over formally defined elements.” (p. 377a) As he puts it a little later: “the main point of the present argument is that no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality, and they have by themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running.” (p. 377b)

In his concluding paragraphs, Searle makes the interesting point that there is a residual form of dualism underlying the ambitions of strong AI, for it attempts to arrive at the nature of the mental by divorcing it from the actual material properties of the brain.

B. Another Version

In a later article, Searle claims that his argument can be put more formally. We start with the following three axioms:

(A1) Programs are formal. [They have a certain syntactical structure.]
(A2) Minds have mental contents. [That is, minds contain concepts and meanings.]
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.

These axioms allow us to conclude the following:

(C1) Programs are neither constitutive of nor sufficient for minds.
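A rough schematic (my own paraphrase and notation, not Searle’s wording) of how (C1) is meant to follow from the three axioms. Read “X ⇒ Y” as “having X guarantees having Y”; the slashed arrow denies that guarantee.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
\begin{align*}
\text{(A1)}\quad & \text{running a program} \Rightarrow \text{having syntax (and nothing more)}\\
\text{(A3)}\quad & \text{having syntax alone} \nRightarrow \text{having semantics}\\
\text{hence}\quad & \text{running a program alone} \nRightarrow \text{having semantics}\\
\text{(A2)}\quad & \text{having a mind} \Rightarrow \text{having semantics (mental contents)}\\
\text{(C1)}\quad & \text{running a program alone} \nRightarrow \text{having a mind}
\end{align*}
\end{document}
```

The last step works by contraposition: if running a program guaranteed having a mind, then by (A2) it would guarantee having semantics, which the previous line denies.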

Searle adds a fourth axiom:

(A4) Brains cause minds.

This in turn allows us to derive the following conclusions:

(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.
(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.
(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.

IV. Some Counter-Arguments

What do you think so far? The most common rejoinder from defenders of strong AI is some version of the “Systems Reply” (discussed by Searle, pp. 371-74). The idea is that Searle, in the room, only plays a part in the entire system, and it is the system as a whole that could indeed be said to understand Chinese.

Consider the following case: Imagine the brain of a native speaker of any language; let the individual neurons or synapses (or whatever) be replaced one by one with little programs (given input x, give output y); eventually we will have a brain that consists of nothing but these little programs, none of which can individually be said to understand, even though the entire brain does.
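A toy sketch of that replacement scenario (greatly simplified and purely illustrative, with made-up “neurons”): each neuron is modeled as a function from input to output, and we swap each one for a little table-driven program that gives exactly the same outputs.

```python
def replace_with_lookup(neuron, possible_inputs):
    """Record the neuron's input-output behavior and return a table-driven stand-in."""
    table = {x: neuron(x) for x in possible_inputs}
    return lambda x: table[x]

# A fanciful two-"neuron" brain over the inputs 0 and 1.
brain = {
    "n1": lambda x: x ^ 1,   # flips its input
    "n2": lambda x: x & 1,   # passes the low bit through
}

# Replace the neurons one by one: the overall input-output behavior is unchanged,
# yet every component is now "just a little program" that looks things up.
for name in list(brain):
    brain[name] = replace_with_lookup(brain[name], possible_inputs=[0, 1])

print(brain["n1"](0), brain["n2"](1))   # 1 1, exactly as before the replacement
```

The philosophical question, of course, is whether anything about understanding has changed once every component is a mere lookup program.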