The Nature of Mind II: Intentionality and Materialism

(I) The Intentional Stance

In his "Intentional Systems," Daniel Dennett argues that there are three different stances that we can adopt when trying to understand (or come to know) something: (a) a design stance, (b) a physical stance, and (c) an intentional stance.

When we adopt the design stance, we attempt to predict the future actions of a thing or a system of things by appeal to the underlying design of that thing. Dennett's example throughout is of a chess-playing computer: "one can predict its designed response to any move one makes by following the computation instructions of the program." (p. 337b) Our predictions based on the design stance all rely on the notion of function.

When we adopt the physical stance, we make predictions based on the actual physical state of the particular object (along with our knowledge of the laws of nature).

But, according to Dennett, chess-playing computers have advanced to such a degree that predicting their behavior from the design stance or the physical stance is very difficult. "A man's best hope of defeating such a machine in a chess match is to predict its responses by figuring out as best he can what the best or most rational move would be, given the rules and goals of chess." (p. 338b) In other words, one ought to adopt the intentional stance with respect to the computer. This stance assumes rationality: "One predicts behavior in such a case by ascribing to the system the possession of certain information and supposing it to be directed by certain goals, and then by working out the most reasonable or appropriate action on the basis of these ascriptions and suppositions." (p. 339a)

But now here's the interesting move on Dennett's part: "It is a small step to calling the information possessed the computer's beliefs, its goals and subgoals its desires." (ibid.) What do you think? Well, according to Dennett, you need not be bothered by this because, he claims, he is not saying that computers really have beliefs and desires, only that "one can explain and predict their behavior by ascribing beliefs and desires to them." (p. 339b) In the end, "the decision to adopt the strategy is pragmatic, and is not intrinsically right or wrong." (ibid.) This claim goes back to what Dennett says at the beginning; namely, that "a particular thing is an intentional system only in relation to the strategies of someone who is trying to explain and predict its behavior." (p. 337b)

A potential problem enters, however, when we realize that intentional systems don't always obey the rules of rationality (i.e., logic). Eventually, Dennett says, we end up having to look at things from a design stance. This is actually OK, because the design stance is more reliable: "In the end, we want to be able to explain the intelligence of man, or beast, in terms of his design, and this in turn in terms of the natural selection of this design…" (p. 342a) Ultimately, the intentional stance presupposes rationality and intelligence; it doesn't explain them. (p. 344a)

What does all this mean? It means, first of all, that we might not have to worry about the question "Can a computer think?" but only about whether we are justified in treating a computer as an intentional system.
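To make the contrast between the stances vivid, here is a small sketch (my own toy illustration in Python – nothing like this appears in Dennett's article, and all the names are hypothetical) of a trivial "game" in which a machine picks one of several numbers and higher numbers win. The design stance traces the machine's actual program; the intentional stance ignores the program entirely and asks only what a rational agent with that goal would choose:

```python
# Toy illustration of two of Dennett's stances; all names are hypothetical.
# The "game": a machine must pick one of several numbers, and we
# suppose its goal is to pick the highest.

def design_stance_prediction(program, options):
    """Predict by following the system's actual computation."""
    return program(options)

def intentional_stance_prediction(options, utility):
    """Predict by ascribing a goal and assuming rationality:
    the system will do whatever best serves its 'desires'."""
    return max(options, key=utility)

# The machine's real design (which an outside observer need not know):
def machine_program(options):
    return sorted(options)[-1]

options = [3, 7, 2]
print(design_stance_prediction(machine_program, options))   # 7
print(intentional_stance_prediction(options, lambda x: x))  # 7
```

Both stances yield the same prediction, but the intentional stance never consulted the design – it relied only on the ascribed goal and the assumption of rationality, which is just Dennett's point about chess programs too complex to trace.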
Further, like Ryle (perhaps), it seems that Dennett is not saying anything about what there is, only about how we can consider things in scientific explanation.

One remaining question, however, is this: Is there anything that Dennett is leaving out of the picture of seemingly intelligent beings or systems? One answer might be that what's being left out are qualia – the feelings of my 1st-person perspective. It does seem that this model is fine – but only from the 3rd-person perspective.

(II) Eliminative Materialism

The main claim in Paul Churchland's piece is that all the concepts of "folk psychology" – e.g., beliefs, desires, fear, sensation, etc. – will be (or can be) eliminated by a completed neuroscientific theory. In other words, at some point in the future, we will cease to recognize their real existence, just as we have ceased to recognize any number of concepts in scientific theory.

Churchland gives three reasons to believe that the concepts of folk psychology should be abandoned. (pp. 351a–352b) First, folk psychology often fails to predict and explain things. Second, our early theories in other fields of science were confused and unhelpful; so why think our crude folk-psychological theories are any more accurate? Third, the prospects of adequately making one-to-one correspondences between folk-psychological states and brain states (as expected by identity theories) are not great.

Against the counter-argument that "one's introspection reveals directly the existence of pains, beliefs, desires, fears and so forth" (p. 352b), Churchland points out that "all observation occurs within some system of concepts, and our observation judgments are only as good as the conceptual framework in which they are expressed." (ibid.) The point is that eliminative materialism will produce a wholesale trashing of the conceptual framework of beliefs, desires, and so on. (The second counter-argument draws a similar response.) The third counter-argument to eliminative materialism is that it exaggerates the defects of folk psychology and presents a romantic (?!) and enthusiastic picture of possible progress. Perhaps, Churchland says. But it is clear that one should try to go the hard-core materialist route.

The Nature of the Mind III: Artificial Intelligence

Is materialism true? Or, better: can we explain all phenomena purely in terms of the states and interactions of matter and physical laws? It might have seemed that mental phenomena are resistant to such explanations. But our question for today is this: can we legitimately say that a computer (which is a material thing) thinks? If so, then we have at least one kind of material object whose mental properties are simply the effects of its material components. And if that material thing can be said to think, why can't we simply be material things that think?

I. "Leibniz's Mill"

Leibniz was one of the first to take seriously the challenge of materialism with respect to the mind. In §17 of his "Monadology" (1714), he writes the following:

[P]erception, and what depends on it, is inexplicable in terms of mechanical reasons, that is, through shapes and motions. If we imagine that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters into a mill.
Assuming this, when inspecting its interior, we will find only parts that push one another, and we will never find anything to explain a perception.

In other words, the mind and its contents cannot be explained solely in material terms. Matter and its motions cannot give us an explanation of perceptions and thoughts.

II. The "Turing Test"

I have asked before in class "Can a machine think?" or "Can a computer think?" In his classic article, "Computing Machinery and Intelligence" (1950), Alan Turing is dismissive of this formulation of the question. Instead, he offers a famous approach to the general problem of the nature of thought and computation: an "imitation game" (which has since come to be known as a "Turing Test").

The game has three players: A, B, and C. Let A be a computer and B a person. C acts as an interrogator, who tries to determine who (or what) A is and who (or what) B is, but who cannot see A or B. Turing's question becomes "Could a computer fool the interrogator into thinking that it is a person?" Or, in other words, "Could a computer be programmed so that its answers to ordinary questions in natural language were so like the answers of native speakers that a blind observer couldn't determine whether it was a computer?"

In his article, Turing predicts that within 50 years machines ought to be able to do well in the imitation game (i.e., after five minutes of questioning, the average interrogator should have no more than a 70% chance of making the right identification). (p. 361a) To the best of my knowledge, this has not happened. But there is a yearly Loebner Prize for computers that imitate human conversation. And, if you like, you can chat on-line with a computer, "A.L.I.C.E." (follow the link on the course website).

By the way, if you've seen Blade Runner, then you know that Blade Runners (special police whose job it is to "retire" replicants) interrogate others, trying to determine whether or not they are androids.

III. Searle and the "Chinese Room" Experiment

A. The Original Thought Experiment and Argument

Searle's goal in "Minds, Brains, and Programs" is to show that the claims of "strong AI" are false. "Strong AI" is characterized by the belief that "the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (p. 368b) He asks us to consider the case in which he is in a room and given English instructions on how to manipulate Chinese characters – that is, when given a certain input in Chinese, he has directions on how to give an output in Chinese characters.
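Searle's setup is, in effect, pure rule-following over uninterpreted symbols. Here is a minimal sketch of that idea (my own illustration in Python – the "rule book" entries are invented, and Searle of course gives no code):

```python
# A toy "rule book" pairing Chinese inputs with Chinese outputs.
# Like Searle in the room, the function matches symbol shapes only;
# nothing in it represents what the symbols mean.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "Fine, thanks."
    "你会说中文吗？": "会，一点点。",    # "Do you speak Chinese?" -> "Yes, a little."
}

def chinese_room(input_symbols: str) -> str:
    """Look up the output the rules dictate for a given input."""
    return RULE_BOOK.get(input_symbols, "请再说一遍。")  # "Please repeat."

print(chinese_room("你好吗？"))  # a fluent-looking reply, no understanding
```

From the outside the exchange can look fluent; whether such symbol manipulation could ever amount to genuine understanding is exactly what Searle will go on to deny.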
