Cognitive Scientists vs. Engineers

One of the major divisions in AI (and you can see it in the definitions above) is between:
• those who think AI is the only serious way of finding out how WE work (since opening heads doesn't yet tell you much);
• those who want computers to do very smart things, independently of how WE work.

Think about a reading computer that read English (very well) from right to left! What follows, if anything, from its success?

There is another group, separate from the Cognitive Scientists and Engineers we just distinguished: those who are interested in attributing mental capacities to machines--and this group could overlap with either of the first two. Their interest is the mentality of machines, not the machine-likeness of humans. Here is Dennett, the major US philosopher concerned with mind and AI:

"In a recent conversation with the designer of a chess-playing program I heard the following criticism of a rival program: 'It thinks it should get its queen out early.' This ascribes a propositional attitude to the program in a very useful and predictive way, for the designer went on to say one can usually count on chasing that queen around the board. But for all the many levels of explicit representation to be found in that program, nowhere is there anything roughly synonymous with 'I should get my queen out early' explicitly tokened."

For Dennett, machines and people are in roughly the same position: we have a language for talking about how they work and why, which he calls FOLK PSYCHOLOGY---i.e. the propositional attitudes BELIEVE, INTEND etc. But he says that in neither case should we assume those correspond to anything real inside, in the brain or the program.

Strong vs. Weak AI

An important distinction we shall need later, due to the philosopher Searle.

For Searle, STRONG AI is the claim that machines programmed with the appropriate behaviour have the same mental states as people who behaved the same way would have--i.e. that machines can have MENTAL STATES.

For him, WEAK AI is like the position above (i.e. about people): it uses the machine's representations and hypotheses to mimic human mental function, but never ascribes those properties to the machine.

Contrast Dennett, who doesn't really think people or machines have mental states--they are in the same position with respect to 'as if' explanation---it behaves AS IF it wants to get its queen out early.

The Turing Test

• Turing in 1950 published a philosophical paper designed to stop people arguing about whether or not machines could think.
• He proposed that the question be replaced with a test, which was not quite what is now called the Turing Test.

• Turing's test was about whether or not an interrogator could tell a man from a woman!
• An interrogator in another room asks questions of a subject by teletype(!), trying to determine their sex.
• The subject is sometimes a man and sometimes a woman.
• If, after some agreed time, the interrogator cannot distinguish situations where a machine has been substituted for the man/woman, we should just agree to say the machine can think (says Turing).
• NOTICE: the question of whether it is a machine never comes up in the questions.
• Nowadays, the 'Turing Test' is precisely about whether the other is a machine or not.

Turing's own objections

• Turing considered, and dismissed, possible objections to the idea that computers can think.
• Some of these objections might still be raised today. Some objections are easier to refute than others.

Objections considered by Turing:
1. The theological objection
2. The 'heads in the sand' objection
3. The mathematical objection
4. The argument from consciousness
5. Arguments from various disabilities
6. Lady Lovelace's objection
7. The argument from continuity in the nervous system
(8.) The argument from informality of behaviour
(9.) The argument from extra-sensory perception

The theological objection

• '…Thinking is a function of man's immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think…'
• Why not believe that God could give a soul to a machine if He wished?

The 'heads in the sand' objection

• i.e. The consequence of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.
• Related to the theological argument: the idea that humans are superior to the rest of creation, and must stay so.
• '..Those who believe in ..(this and the previous objection).. would probably not be interested in any criteria..'

The mathematical objection

• Results of mathematical logic can be used to show that there are limitations to the powers of discrete-state machines.
• e.g. the halting problem: will the execution of a program P eventually halt, or will it run forever? Turing (1936) proved that for any algorithm H that purports to solve halting problems there will always be a program Pi such that H will not be able to answer the halting problem correctly.
• i.e. Certain questions cannot be answered correctly by any formal system.
• But similar limitations may also apply to the human intellect.
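The flavour of Turing's 1936 result can be sketched in a few lines of Python. The oracle `halts` below is hypothetical (it cannot actually be written); the sketch only shows why assuming it exists leads to contradiction.

```python
# A minimal sketch of the halting-problem diagonal argument.
# 'halts' is a hypothetical oracle: halts(program, argument) -> True/False.

def make_contrarian(halts):
    """Build a program that does the opposite of whatever the oracle predicts."""
    def contrarian(program):
        if halts(program, program):   # oracle says it halts on itself...
            while True:               # ...so loop forever instead,
                pass
        return "halted"               # ...otherwise halt immediately.
    return contrarian

# Feed the contrarian program to itself:
# if halts(contrarian, contrarian) were True, contrarian would loop forever;
# if it were False, contrarian would halt.
# Either way the oracle is wrong, so no correct general 'halts' can exist.
```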

The argument from consciousness

• '…This argument is very well expressed in Professor Jefferson's Lister Oration for 1949, from which I quote. "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain – that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants"..'
• On this view, the only way one could be sure that a machine thinks is to be that machine and feel oneself thinking.
• Similarly, the only way to be sure someone else thinks is to be that person. How do we know that anyone is conscious? Solipsism.
• Instead, we assume that others can think and are conscious----it is a polite convention. Similarly, we could assume that a machine which passes the Turing Test is so too.

Consciousness

• Thought and consciousness do not always go together.
• Freud and unconscious thought. Thought we cannot introspect about (e.g. searching for a forgotten name).
• Blindsight (Weiskrantz) – removal of visual cortex, blind in certain areas, but can still locate a spot without consciousness of it.

Arguments from various disabilities

• i.e. 'I grant that you can make machines do all the things you have mentioned, but you will never be able to make one do X.'
• e.g. be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.
• These criticisms are often disguised forms of the argument from consciousness.

Lady Lovelace's objection

• (Memoir from Lady Lovelace about Babbage's Analytical Engine.)
• Babbage (1792-1871) and the Analytical Engine: a general-purpose calculator, entirely mechanical. The entire contraption was never built – engineering not up to it, and no electricity!
• '..The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform..'
• i.e. A computer cannot be creative, it cannot originate anything, only carry out what was given to it by the programmer.
• But computers can surprise their programmers, i.e. by producing answers that were not expected. The original data may have been given to the computer, but it may then be able to work out its consequences and implications (cf. the level of chess programs and their programmers).

The argument from continuity in the nervous system

• The nervous system is continuous; the digital computer is a discrete-state machine.
• I.e. in the nervous system a small error in the information about the size of a nervous impulse impinging on a neuron may make a large difference to the size of the outgoing impulse.
• Discrete-state machines move by sudden jumps and clicks from one state to another. For example, consider the 'convenient fiction' that switches are either definitely on, or definitely off.
• However, a discrete-state machine can still give answers that are indistinguishable from those of a continuous machine.

Other objections

• Copeland (1993) [see 'Artificial Intelligence: a philosophical introduction'] discusses 4 further objections to the Turing Test. The first three of these he dismisses, and the fourth he incorporates into a modified version of the Turing Test.

1. Too conservative: the chimpanzee objection

• Chimpanzees, dolphins, dogs, and pre-linguistic infants all can think (?) but could not pass the Turing Test.
• But this only means that the Turing Test cannot be a litmus test (red = acid, not red = non-acidic): nothing definite follows if a computer/animal/baby fails the test.
• I.e. a negative outcome does not mean the computer cannot think. (In philosophical terms: the TT gives a sufficient, not a necessary, condition of thought.)

2. Too easy: the sense organs objection

• The Turing Test focuses only on verbal responses: the computer is able to use words without really knowing their meanings (like taking a driving test that consists only of answering questions).
• Equip it with sense organs, and you can then test knowledge of the world (remember turning the turtle over in 'Blade Runner' when spotting robots).
• But there are many words which can be investigated without the computer having sense organs, e.g. 'teasing'.
• Think about getting to know people through webchat--can you?

3. The simulation objection

• The simulation of X is never an X. In the original Victorian parlour game, if the man misleads the interrogator into thinking he is a woman, that doesn't mean he IS a woman.
• But there are two kinds of simulation:
a) simulation1, which lacks essential features of what is being simulated. E.g. in simulated death the person is not dead.
b) simulation2, exactly like what is being simulated, but not produced in the standard way (e.g. aircraft in wind tunnels).
• One could claim that a program that passed the Turing Test was a simulation2.

4. The black box objection

• This objection is presented by Copeland (1993), and not rejected by him.
• A black box is a device whose inner workings are unknown. The Turing Test relies only on outward behaviour.
• But we might look inside the computer to see how it 'thinks': it might have a program which just has a list of sentences to be output in response to sentences it encounters. All possibilities covered, and the computer not thinking at all, just matching sentences.

Later you will see more of programs like ELIZA: Joseph Weizenbaum's program with a therapeutic interviewer. (The computer's responses are in capitals.)

Men are all alike.
IN WHAT WAY?
They're always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE?
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE?
He says I'm depressed much of the time.
I AM SORRY TO HEAR THAT YOU ARE DEPRESSED.
It's true that I am unhappy.
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY?
I need some help, that much seems certain.
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP?
Perhaps I could learn to get along with my mother.
TELL ME MORE ABOUT YOUR FAMILY.
My mother takes care of me.
WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU?
My father.
YOUR FATHER.
You are like my father in some ways.
WHAT RESEMBLANCE DO YOU SEE?
You are not very aggressive, but I think you don't want me to notice that.
WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE?

You don't argue with me.
WHY DO YOU THINK I DON'T ARGUE WITH YOU?
You are afraid of me.
DOES IT PLEASE YOU TO THINK I AM AFRAID OF YOU?
My father is afraid of everybody.
WHAT ELSE COMES TO MIND WHEN YOU THINK OF YOUR FATHER?
Bullies.
DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR BOYFRIEND MADE YOU COME HERE?

But, when we look at how Eliza works, this conversation seems less impressive (a sketch follows below):
1. Transform the input to change the point of view, e.g. change 'you' to 'I' etc.
2. If a pattern matches, then give the stored response; otherwise give a general-purpose response.

Pattern *need* [can you do without]
E.g. I need some help, that much seems certain.
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP?

Pattern *mother* [tell me more about your family]
E.g. Perhaps I could learn to get along with my mother.
TELL ME MORE ABOUT YOUR FAMILY.

What Eliza lacks in dialogue:
• Means of recognizing the grammatical structure of language.
• Means of converting the user's query into a representation of the underlying knowledge from which inferences can be drawn.

But notice here the AI theory being snuck in…
• We don't have any INDEPENDENT evidence that people do assign grammatical structures to what they hear,
• nor that they use a great deal of background knowledge when carrying on superficial conversations…
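Here is a minimal sketch, in Python, of the transform-and-match mechanism just described. The two patterns and the fallback reply are simplified from the transcript above; Weizenbaum's actual script was larger and keyword-ranked.

```python
import re

# Minimal ELIZA-style responder: a sketch, not Weizenbaum's actual script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

PATTERNS = [
    (re.compile(r"\bI need ([^,.]*)", re.I), "WHAT WOULD IT MEAN TO YOU IF YOU GOT {0}?"),
    (re.compile(r"\bmother\b", re.I), "TELL ME MORE ABOUT YOUR FAMILY."),
]
DEFAULT = "PLEASE GO ON."          # general-purpose response when nothing matches

def reflect(text):
    """Step 1: transform point of view, e.g. 'my' -> 'your', 'I' -> 'you'."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def respond(sentence):
    """Step 2: first matching pattern wins; otherwise the stock reply."""
    for pattern, template in PATTERNS:
        match = pattern.search(sentence)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return DEFAULT

print(respond("I need some help, that much seems certain."))
# -> WHAT WOULD IT MEAN TO YOU IF YOU GOT some help?
```

Nothing here represents what 'help' or 'mother' means: the appearance of understanding comes entirely from keyword matching and canned templates.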

Parry, the paranoid program

Consider PARRY, a much better program:
Colby, K.M., Weber, S., Hilf, F.D. (1971) Artificial Paranoia. Artificial Intelligence, 2, 1-25.

• PARRY had far more to say than ELIZA and wanted to tell you things.
• It modelled a psychiatrically disordered person (PARanoid), and hence it was easy to excuse when it didn't make sense.
• It remembered what you had said and got 'annoyed' and 'frightened'.
• It was claimed it had passed the TT.

The computer is in capital letters:

Why are you in the hospital?
I SHOULDN'T BE HERE.
Who brought you here?
THE POLICE.
What trouble did you have with the police?
COPS DON'T DO THEIR JOB.
What do they do instead?
COPS ARREST THE WRONG PEOPLE.
They arrest the right ones sometimes.
THE LAW SHOULD GET THOSE ITALIAN CROOKS.
Are you Italian?
MY BACKGROUND IS BRITISH BUT I WAS BORN IN THIS COUNTRY.
Do you know any Italian crooks?
I TRY TO AVOID THE UNDERWORLD.
Are you afraid of the underworld?

THE MAFIA ARE THE WORST GANGSTERS.
Are you connected with the Mafia?
YOU KNOW, THEY KNOW ME.
How do they know you?
THEY SPY ON ME.
Why do they?
THEY WANT TO GET FALSE EVIDENCE TO FRAME ME.
What do they have against you?
THEY ARE OUT TO GET ME.
What for?
I HAVE INCRIMINATING EVIDENCE AGAINST THE MAFIA.
Are you sure of all this?
YOU DON'T BELIEVE ME?
I am sorry.
FORGET THE EXCUSES, YOU GUYS ARE ALL ALIKE.
I am doing my best.
…silence
CAN'T YOU THINK OF A DECENT QUESTION TO ASK?
You seem hostile to me.
SOME OF YOUR REMARKS ANNOY ME.

Loebner Test [Turing Test + money]

• Competition started in 1991 by Hugh Loebner.
• Judges type questions in at a terminal, replied to either by a human or a computer contestant.
• Judges rate responses according to how human-like they appear: a prize of 2000 dollars to the most human-like, and a grand prize of 100,000 dollars if a program does better than one of the humans.

Colby's PARRY

• Perhaps the best ever performance: many users, robust, but not a normal subject (i.e. a paranoid).
• Primitive individual models, some control of the dialogue process; but it had lots to say!
• Primitive simulation of …
• Not syntax but fast pattern matching.
• Much better than ELIZA.

How is this achieved in PARRY?

WHAT IS YOUR MAIN PROBLEM
WHAT BE YOU MAIN PROBLEM
BE YOU MAIN PROBLEM
WHAT BE MAIN PROBLEM
WHAT BE YOU PROBLEM ←
WHAT BE YOU MAIN
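The list above suggests PARRY's trick: normalise the input (IS → BE, YOUR → YOU) and then try stored patterns against the sentence and against copies of it with one word deleted. A minimal sketch of that idea follows; the word table, stored pattern, and replies are invented for illustration, not Colby's actual data.

```python
# Sketch of PARRY-style matching (word table, patterns and replies invented).
NORMALISE = {"is": "BE", "are": "BE", "your": "YOU", "you": "YOU"}

STORED_PATTERNS = {"WHAT BE YOU PROBLEM": "I SHOULDN'T BE HERE."}
FALLBACK = "WHY DO YOU ASK?"

def normalise(sentence):
    words = [w.strip("?.!,") for w in sentence.split()]
    return [NORMALISE.get(w.lower(), w.upper()) for w in words]

def variants(words):
    """The full normalised string, then every copy with one word deleted."""
    yield " ".join(words)
    for i in range(len(words)):
        yield " ".join(words[:i] + words[i + 1:])

def parry_reply(sentence):
    for candidate in variants(normalise(sentence)):
        if candidate in STORED_PATTERNS:     # first stored pattern that fits wins
            return STORED_PATTERNS[candidate]
    return FALLBACK

print(parry_reply("What is your main problem?"))
# "WHAT BE YOU MAIN PROBLEM" itself is not stored, but deleting "MAIN"
# yields "WHAT BE YOU PROBLEM"  ->  I SHOULDN'T BE HERE.
```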

1997 winner: Intelligent Research Ltd, designed and largely programmed by members of the Natural Language Processing group in this Department.

Fragment of the winning transcript:

Program: hi, my name is Catherine, what's yours?
Judge2: My name is Nathan.
Program: Well, its nice to meet you Nathan. Did you see that story on CNN last night about the lesbian couple who came out at a White House party on Sunday.
Judge2: No. I'm sorry I missed that. How did the Prez take it?
Program: Ellen Degeneres was one of them - she was kissing her lover in front of Bill Clinton. Do you think he was very embarassed?
Judge2: You tell me, I didn't see it.
Program: Are you still a friend of Bill Clinton
Judge2: Yes. I would like to talk about something else. Do you know anything about sculpture.
Program: Well I suppose he's doing his best like most of us, but he does seem to attract an UNUSUAL amount of bad publicity, wouldn't you say -- can it
Judge2: Do you know what sculpture is?
Program: Well you're more of a fan of his than I am I guess --- though I can't SWEAR what I'd have done in that locked hotel room, I must say.

SOME DAYS IT'S BETTER THAN OTHERS AT CHANGING THE SUBJECT!

Maybe the idea of the Turing Test should be abandoned

Reason 1: Unitary notion of 'intelligence' too simplistic.
• Too simplistic to think that it is useful to assess whether computers possess 'intelligence', or the ability to think. Better to break this question down into smaller questions.
• Similar to the idea that a unitary measure of intelligence (i.e. intelligence as measured by IQ tests) is not very useful: better to have tests that reveal the relative strengths and weaknesses of individuals.
• Could assess computers in terms of more specific abilities; e.g. the ability of a robot to navigate across a room, the ability of a computer to perform logical reasoning, metaknowledge (knowledge of its own limitations).

Reason 2: Too anthropocentric.
• Too anthropocentric to insist that a program should work in the same way as humans. Dogs are capable of cognition, but would not pass the Turing Test. Still, producing a machine with the cognitive and communicative abilities of a dog would be (another) challenge for AI.
• But how can we NOT be anthropocentric about intelligence? We are the only really intelligent things we know, and language is closer to our intelligence than any other function we have…?

Perhaps for now (till opening heads helps) behaviour is all we have.
• Increasingly complex programs mean that looking inside machines doesn't tell you why they are behaving the way they are.
• Those who don't think the TT effective must show why machines are in a different position from our fellow humans (i.e. not from OURSELVES!). Solipsism again.

The Turing Test (as now interpreted!) suggests that we base our decision about whether a machine can think on its outward behaviour, and on whether we confuse it with humans.

Concept of intelligence in humans

We talk about people being more or less intelligent. Perhaps examining the concept of intelligence in humans will provide an account of what it means to be intelligent.
What is intelligence? 'Intelligence is what is measured by intelligence tests.'

Potted history of IQ tests

Early research into individual differences:
1796: an assistant at Greenwich Observatory recording when stars crossed the field of the telescope consistently reported observations eight-tenths of a second later than the Astronomer Royal. Discharged! It was later realized that observers respond to stimuli at different speeds – the assistant wasn't misbehaving, he just couldn't do it as quickly as the Astronomer Royal.
Francis Galton, in the latter half of the 19th century: interested in individual differences. He developed measures of keenness of the senses and of mental imagery: early precursors of intelligence tests. Found evidence of genius occurring often in certain families.
Alfred Binet (1857-1911) tried devising tests to find out how "bright" and "dull" children differ. His aim was educational – to provide appropriate education depending on the ability of the child. Emphasis on general intelligence. Idea of quantifying the amount of intelligence a person has.

Stanford-Binet IQ test

Makes use of the concept of mental age versus chronological age. The Intelligence Quotient (IQ) is produced as the ratio of mental age to chronological age.
Items in the test are age-graded, and mental age corresponds to the level achieved in the test. A bright child's mental age is above his or her chronological age; a slow child's mental age is below his or her chronological age.

Move of emphasis from general to specific abilities

World War 1: US test 'Army Alpha'. Tested simple reasoning, ability to follow directions, arithmetic and information. Used to screen thousands of recruits, sorting into high/low/intermediate responsibilities.
Beginning of measures of specialized abilities: realisation that rating on a single dimension is not very informative, i.e. different jobs require different aptitudes.
e.g. 1919 Seashore: Measures of Musical Talent. Tested ability to discriminate pitch, timbre, rhythm etc.
1939: Wechsler-Bellevue scale: goes beyond composite performance to separate scores on different tasks, e.g. mazes, recall of information, memory for digits etc. Items divided into a performance scale and a verbal scale. E.g. performance item:

Block design: pictured designs must be copied with blocks; tests ability to perceive and analyse patterns.
Verbal item: arithmetic. Verbal problems testing arithmetic reasoning.

Binet and Wechsler assumed that intelligence is a general capacity. Spearman also proposed that individuals possess a general intelligence factor g in varying amounts, together with specific abilities.
Thurstone (1938) believed intelligence could be broken down into a number of primary abilities. He used factor analysis to identify 7 factors:
• verbal comprehension
• word fluency
• number
• space
• memory
• perceptual speed
• reasoning
Thurstone devised a test based on these factors: the Test of Primary Mental Abilities. But its predictive power was no greater than for the Wechsler and Binet tests, and several of these factors correlated with each other.

Nature of intelligence

IQ tests provide one view of what intelligence is. The history of intelligence testing shows that our conception of what intelligence is is subject to change: a change from assuming there is a general intelligence factor to looking at specific abilities.
But the emphasis is still on quantification, on measuring how much intelligence a person possesses – it doesn't really say what intelligence is. Specific and general theories seem to have similar predictive power for individual outcomes.

Try this right now: PICK OUT THE ODD ONE
• Cello
• Harp
• Drum
• Violin
• Guitar

Limitations of ability tests

1. IQ scores do not predict achievement very well, although they can make gross discriminations. The predictive value of tests is better at school (correlations between .4 and .6 between IQ scores on the Stanford-Binet and Wechsler and school grades), but less good at university.
• Possible reasons for poor prediction: it is difficult to devise tests which are culturally fair and independent of educational experience. E.g. pick one word that doesn't belong with the others: cello, harp, drum, violin, guitar. Children from higher-income families chose 'drum'; those from lower-income families picked 'cello'.
• Tests do not assess motivation or creativity.
2. Human-centred: animals might possess an intelligence, in a way that a computer does not, but it is not something that will show up in an IQ test.
3. Tests are only designed to predict future performance; they do not help to define what intelligence is. But again, the search for definitions is rarely helpful.

Arguments about meaning and understanding (and programs)
• Searle's argument
• The Symbol Grounding argument
• Bar-Hillel's argument about the impossibility of machine translation

Searle's Example: The Chinese Room

An operator sits in a room; Chinese symbols come in which the operator does not understand. He has explicit instructions (a program!) in English on how to get an output stream of Chinese characters from all this, so as to generate "answers" from "questions". But of course he understands nothing, even though Chinese speakers who see the output find it correct and indistinguishable from the real thing.

The Chinese Room
• Read chapter 6 in Copeland (1993): The curious case of the Chinese Room.
• Clearer account: pgs 292-297 in Sharples, Hogg, Hutchinson, Torrance and Young (1989) 'Computers and Thought' MIT Press: Bradford Books.
• Original source: Minds, Brains and Programs: John Searle (1980).

John Searle: an important philosophical critic of Artificial Intelligence. See also his recent book: Searle, J.R. (1997) The Mystery of Consciousness. Granta Books, London.

Searle is an opponent of strong AI, and the Chinese Room is meant to show what strong AI is. It is an imaginary Gedankenexperiment, like the Turing Test.

Weak AI: the computer is a valuable tool for the study of mind, i.e. we can formulate and test hypotheses rigorously.
Strong AI: the appropriately programmed computer really is a mind, can be said to understand, and has other cognitive states.

Can digital computers think?

We could take this as an empirical question - wait and see if AI researchers manage to produce a machine that thinks. Empirical means something which can be settled by experimentation and evidence gathering.
Example of an empirical question: Are all ophthalmologists in New York over 25 years of age?

Example of a non-empirical question: are all ophthalmologists in New York eye specialists?

Searle: 'can a machine think' is not an empirical question. Something following a program could never think.
Contrast this with Turing, who believed 'Can machines think?' was better seen as a practical/empirical question, so as to avoid the philosophy (it didn't work!).

The Chinese Room

Operator in a room with pieces of paper. Symbols written on the paper which the operator cannot understand.
Slots in the wall of the room - paper can come in and be passed out.
The operator has a set of rules telling him/her how to build, compare and manipulate symbol-structures using the pieces of paper in the room, together with those passed in from outside.

Example of a rule: if the pattern is X, write 100001110010001001001 on the next empty line of the exercise book labelled 'input store'. Once the input is transformed into sets of bits, perform a specified set of manipulations on those bits. Then pair the final result with Chinese characters in the 'output store' and push it through the output slot.
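To make the flavour of such a rule book concrete, here is a toy sketch in Python. The symbol codes and the single bit-rule are invented for illustration; the point is only that every step is a blind lookup on shapes and bit strings, with nothing anywhere standing for what the symbols mean.

```python
# Toy "rule book" in the spirit of the Chinese Room (codes invented).
# Step 1: transcribe each incoming character into bits (the 'input store').
INPUT_CODES = {"什": "1000", "么": "0111", "颜": "0010", "色": "0001"}

# Step 2: a specified manipulation on the bit string (here just a lookup).
BIT_RULES = {"1000011100100001": "0101"}

# Step 3: pair the result with characters for the 'output store'.
OUTPUT_CODES = {"0101": "红色"}

def chinese_room(message):
    bits = "".join(INPUT_CODES[ch] for ch in message)   # build
    result = BIT_RULES.get(bits, "")                     # manipulate
    return OUTPUT_CODES.get(result, "")                  # push out the slot

print(chinese_room("什么颜色"))
# -> 红色 ("red", a sensible reply to "what colour?"), yet nothing in the
#    program records, or needs to record, what any of these symbols mean.
```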

But the symbols mean nothing to the operator. The instructions correspond to a program which simulates the linguistic ability and understanding of a native speaker of Chinese.
The sets of symbols passed in and out correspond to sentences of a meaningful dialogue. More than this: the Chinese Room program is able to pass the Turing Test with flying colours!

According to Searle, the behaviour of the operator is like that of a computer running a program. What point do you think Searle is trying to make with this example?

Searle: the operator does not understand Chinese - he only understands the instructions for manipulating symbols. A computer running the program does not understand any more than the operator does.

Searle: the operator only needs syntax, not semantics.
• Semantics: relating symbols to the real world.
• Syntax: knowledge of the formal properties of symbols (how they can be combined).
• Mastery of syntax: mastery of a set of rules for performing symbol manipulations.
• Mastery of semantics: having an understanding of what those symbols mean (this is the hard bit!!).

Example, from Copeland. An Arabic sentence:

Jamal hamati indaha waja midah

Two syntax rules for Arabic:
a) To form the I-sentence corresponding to a given sentence, prefix the whole sentence with the symbol 'Hal'.
b) To form the N-sentence corresponding to any reduplicative sentence, insert the particle 'laysa' in front of the predicate of the sentence.

What would the I-sentence and N-sentence corresponding to the Arabic sentence be (the sentence is reduplicative and its predicate consists of everything following 'hamati')?
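Applying the two rules is pure string manipulation, which a short sketch can make vivid; nothing in it knows what 'Hal' or 'laysa' contribute to meaning (the transliteration follows the slide):

```python
# Copeland's two syntax rules, applied blindly as string operations.

def i_sentence(sentence):
    """Rule (a): prefix the whole sentence with 'Hal'."""
    return "Hal " + sentence

def n_sentence(sentence, marker="hamati"):
    """Rule (b): insert 'laysa' in front of the predicate, i.e. in front of
    everything that follows the marker word in a reduplicative sentence."""
    subject, predicate = sentence.split(marker, 1)
    return subject + marker + " laysa" + predicate

s = "Jamal hamati indaha waja midah"
print(i_sentence(s))   # Hal Jamal hamati indaha waja midah
print(n_sentence(s))   # Jamal hamati laysa indaha waja midah
```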

But the syntax rules tell us nothing about the semantics. 'Hal' forms an interrogative, and 'laysa' forms a negation. The question asks whether your mother-in-law's camel has belly ache:

Hal jamal hamati indaha waja midah

and the second sentence answers in the negative:

Laysa indaha waja midah

According to Searle, computers are just engaging in syntactical manoeuvres like this.

Strong AI: the machine can literally be said to understand the responses it makes.

Remember back to PARRY. PARRY was not designed to show understanding, but was often thought to do so. We know it worked with a very simple but large mechanism:
• Why are you in the hospital?
• I SHOULDN'T BE HERE.
• Who brought you here?
• THE POLICE.
• What trouble did you have with the police?
• COPS DON'T DO THEIR JOB.
Searle's argument is that, like the operator in the Chinese Room, PARRY's computer does not understand anything it responds--which is certainly true of PARRY, but is it true in principle, as Searle wants?

Searle: the program carries out certain operations in response to its input, and produces certain outputs, which are correct responses to questions. But it hasn't understood a question any more than an operator in the Chinese Room would have understood Chinese.

Questions: is Searle's argument convincing? Does it capture some of your doubts about computer programs?

Suppose for a moment Turing had believed in Strong AI. He might have argued: a computer succeeding in the imitation game will have the same mental states that would have been attributed to a human, e.g. understanding the words of the language being used to communicate. But, says Searle, the operator cannot understand Chinese.

Responses to Searle

1. Insist that the operator can in fact understand Chinese.
• Like the case in which a person plays chess who does not know the rules of chess but is operating under post-hypnotic suggestion.
• Compare blind-sight subjects who can see but do not agree they can---- consciousness of knowledge may be irrelevant here!
• Ask the operator (if you can reach them!) if he/she understands Chinese: "search me, it's just a bunch of meaningless squiggles".
• Treat the Chinese Room system as a black box and ask it (in Chinese) if it understands Chinese: "Of course I do".

2. Systems Response (so called by Searle)
• Concede that the operator does not understand Chinese, but that the system as a whole, of which the operator is a part, DOES understand Chinese.
• Copeland: Searle makes an invalid argument (operator = Joe):
Premiss: No amount of symbol manipulation on Joe's part will enable Joe to understand the Chinese input.

Therefore: No amount of symbol manipulation on Joe's part will enable the wider system of which Joe is a component to understand the Chinese input.

Burlesque of the same thing:
Premiss: Bill the cleaner has never sold pyjamas to Korea.
Therefore: the company for which Bill works has never sold pyjamas to Korea.
This clearly doesn't follow.

Searle's rebuttal of the systems reply: if the symbol operator doesn't understand Chinese, why should you be able to say that the symbol operator (Joe) plus bits of paper plus room understands Chinese? The system as a whole behaves as though it understands Chinese. But that doesn't mean that it does.
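The shape of the inference Copeland is objecting to can be written schematically (the notation here is ours, not Copeland's): a premiss about a part is turned into a conclusion about the whole it belongs to.

```latex
% Invalid part-to-whole inference form
\begin{align*}
\text{Premiss:}\quad & \neg\,\mathrm{Understands}(\mathrm{Joe},\ \text{Chinese}) \\
\text{Conclusion:}\quad & \neg\,\mathrm{Understands}\big(\mathrm{System}(\mathrm{Joe},\ \text{rule book},\ \text{room}),\ \text{Chinese}\big)
\end{align*}
% The conclusion does not follow: a property of a component need not
% transfer to the system that contains it (cf. Bill and his company).
```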

Recent restatement of the Chinese Room Argument

From Searle (1997) The Mystery of Consciousness:
1. Programs are entirely syntactical.
2. Minds have a semantics.
3. Syntax is not the same as, nor by itself sufficient for, semantics.
Therefore programs are not minds. QED

Step 1 just states that a program written down consists entirely of rules concerning syntactical entities, that is, rules for manipulating symbols. The physics of the implementing medium (i.e. the computer) is irrelevant to the computation.

Step 2 just says what we know about human thinking. When we think in words or other symbols we have to know what those words mean - a mind has more than uninterpreted formal symbols running through it, it has mental contents or semantic contents.

Step 3 states the general principle that the Chinese Room thought experiment illustrates. Merely manipulating formal symbols does not guarantee the presence of semantic contents.
'..It does not matter how well the system can imitate the behaviour of someone who really does understand, nor how complex the symbol manipulations are; you can not milk semantics out of syntactical processes alone..' (Searle, 1997)

The Internalised Case

Suppose the operator learns up all these rules and tables and can do the trick in Chinese. On this version, the Chinese Room has nothing in it but the operator.
Can one still say the operator understands nothing of Chinese?
Consider: a man appears to speak French fluently but says, no, he doesn't really, he's just learned up a phrase book. He's joking, isn't he?

• You cannot really contrast a person with rules-known-to-the-person.
• We shall return at intervals to the Chomsky view that language behaviour in humans IS rule following (and that he can determine what the rules are!).

Searle says this shows the need for semantics, but semantics means two things at different times:
• Access to objects via FORMAL objects (more symbols), as in logic and the formal semantics of programs.
• Access to objects via physical contact and manipulation--robot arms or prostheses (or what children do from a very early age).

Semantics: fun and games

Programs have access only to syntax (says S.).
If he is offered a formal semantics (which is of one interpretation rather than another) – that's just more symbols (S's silly reply). Soon you'll encounter the 'formal semantics of programs', so don't worry about this bit.
If offered access to objects via a robot prosthesis from inside the box, Searle replies that's just more program, or it won't have reliable ostension/reference like us.

Remember Strong AI is the straw man of all time

"computers, given the right programs can be literally said to understand and have other cognitive states". (p.417)
Searle has never been able to show that any AI person has actually claimed this!
[Weak AI – mere heuristic tool for the study of the mind]

Consider the internalised Chinese "speaker": is he mentally ill? Would we even consider that he didn't understand? What semantics might he lack? For answering questions about S's paper?; for labels, chairs, hamburgers?
The residuum in S's case is intentional states.

Later moves

• S makes having the right stuff necessary for having I-states (becoming a sort of biological materialist about people: thinking/intentionality requires our biological make-up, i.e. carbon not silicon. Hard to argue with this, but it has no obvious plausibility).
• He makes no program necessary – suppose an English speaker learned up Chinese by tables and could give a good performance in it? (And so would be like the operator OUT OF THE ROOM.) Would Searle have to say he had no I-states about the things he discussed in Chinese?

Dennett: 'I-state' is a term in S's vocabulary for which he will allow no consistent set of criteria – but he wants people/dogs in and machines out at all costs. This is just circular – and would commit him to withdrawing intentionality from cats if …. (Putnam's cats).

The US philosopher Putnam made it hard to argue that things must have certain properties.
• He said: suppose it turned out that all cats were robots from Mars.
• What would we do?
• Stop calling cats 'cats'--since they didn't have the 'necessary property' ANIMATE?
• Just carry on and agree that cats weren't animate after all?

Symbol grounding

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42, 335-346.
A copy of the paper can be obtained from: http://www.cogsci.soton.ac.uk/harnad/genpub.html

Is there any solution to the issues raised by Searle's Chinese Room? Are there any ways of giving the symbols real meaning?
Computation consists of the manipulation of meaningless symbols. For them to have meaning they must be grounded in a non-symbolic base. Otherwise it is like trying to learn Chinese from a Chinese dictionary.

It is not enough for symbols to be 'hooked up' to operations in the real world (see Searle's objection to the robot answer). Symbols need to have some intrinsic semantics or real meaning.
For Harnad, symbols are grounded in iconic representations of the world.
Alternatively, imagine that symbols emerge as a way of referring to representations of the world - representations that are built up as a result of interactions with the world.

Does Harnad's account of symbol grounding really provide an answer to the issues raised by Searle's Chinese Room?

What symbol grounding do humans have? Symbols are not inserted into our heads ready-made.
For example, before a baby learns to apply the label 'ball' to a ball, it will have had many physical interactions with it: picking it up, dropping it, rolling it etc. The child eventually forms a concept of what 'roundness' is, but this is based on a long history of many physical interactions with the object. (Remember Dreyfus' argument that intelligent things MUST HAVE GROWN UP AS WE DO.)

Perhaps robotic work in which symbols emerge from interactions with the real world might provide a solution. For instance, a robot that learns from scratch how to manipulate and interact with objects in the world. See work on Adaptive Behaviour, e.g. Rodney Brooks.

In both accounts, symbols are no longer empty and meaningless because they are grounded in a non-symbolic base - i.e. grounded in meaningful representations. (Cf. formal semantics on this view!)
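As a toy illustration of the general idea (our own, not Harnad's proposal or Brooks' architecture): the 'meaning' of a label below is nothing but a prototype distilled from repeated, non-symbolic sensor readings, and new objects are named by comparing their readings to those prototypes.

```python
from statistics import mean

# Invented sensor readings from repeated interactions with objects:
# each pair is (roundness, rolls_when_pushed) on a 0-1 scale.
ball_encounters = [(0.94, 1.0), (0.90, 1.0), (0.97, 1.0)]
box_encounters  = [(0.12, 0.0), (0.20, 0.0)]

def prototype(encounters):
    """Average the sensory features seen across many interactions."""
    return tuple(mean(feature) for feature in zip(*encounters))

GROUNDED = {"ball": prototype(ball_encounters),
            "box":  prototype(box_encounters)}

def name_object(reading):
    """Attach the label whose grounded prototype is nearest the new reading."""
    distance = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(GROUNDED, key=lambda label: distance(GROUNDED[label], reading))

print(name_object((0.92, 1.0)))   # -> 'ball': the symbol points back to
                                  #    sensory experience, not to more symbols
```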

Another famous example linking meaning/knowledge to understanding: a famous example from the history of Machine Translation (MT).

Bar-Hillel's argument
• This is the argument that we need stored knowledge to show understanding.
• Remember McCarthy's dismissal of PARRY--not AI because it did not know who was president.
• Is knowledge of meaning different from knowledge? 'The Edelweiss is a flower that grows in the Alps.'

Bar-Hillel's proof that MT was IMPOSSIBLE (not just difficult):

Little Johnny had lost his box
He was very sad
Then he found it
The box was in the PEN
Johnny was happy again

• The words are not difficult, nor is the structure.
• To get the translation right in a language where 'pen' is NOT both playpen and writing pen, you need to know about the relative sizes of playpens, boxes and writing pens.
• I.e. you need a lot of world knowledge.

One definition of AI is: knowledge-based processing.

Bar-Hillel and those who believe in AI look at the 'box' example and:
• AGREE about the problem (it needs knowledge for its solution)
• DISAGREE about what to do (for AI it's a task, for B-H it's impossible)
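A toy sketch of the kind of world knowledge Bar-Hillel has in mind (the sizes are invented, and real MT systems are of course far more elaborate): choosing the sense of 'pen' in 'the box was in the pen' requires a fact about the physical world, not about the words.

```python
# Invented typical sizes, in centimetres, for the competing senses.
TYPICAL_SIZE_CM = {"box": 40, "playpen": 120, "writing pen": 1}

def disambiguate_pen(contained_object):
    """Pick the sense of 'pen' that could physically contain the object."""
    candidates = ["playpen", "writing pen"]
    feasible = [sense for sense in candidates
                if TYPICAL_SIZE_CM[sense] > TYPICAL_SIZE_CM[contained_object]]
    return feasible[0] if feasible else "pen (unresolved)"

print(disambiguate_pen("box"))   # -> 'playpen': only it is big enough,
                                 #    a fact about the world, not the words
```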
