
Revisiting the Chinese Room: Boden’s Reply considered

by

Clarton Fambisai Mangadza

A mini-dissertation submitted in partial fulfilment of the requirements

for the degree of

Magister Artium (Philosophy) by Coursework

in the

FACULTY OF HUMANITIES

UNIVERSITY OF JOHANNESBURG

Johannesburg

January 2017

Supervisor: Prof CF Botha


Contents

1. Introduction
1.1. Justification
1.2. Outline
2. Literature Review
3. Essential Concepts and Background
3.1. Functionalism
3.2. Artificial Intelligence (AI)
3.3. Intelligence
3.4. The Turing Test
3.5. Digital Computer
3.6. Computers as Intelligent
4. Searle and the Chinese Room argument
4.1. Strong Artificial Intelligence (AI)
4.2. Weak Artificial Intelligence (AI)
4.3. The Chinese Room Argument in Focus
5. Replies to Searle
5.1. The Replies in Brief
6. Interrogating Boden’s Reply to Searle
6.1. Interrogating Boden’s critique of Searle’s Positive Claim – Intentionality must be Biologically Grounded
6.1.1. Searle’s claim that intentionality is biologically grounded depends on bad analogies
6.1.2. My Objections to Boden’s attack on Searle’s Analogy
6.1.3. Searle’s claim that intentionality is biologically grounded depends on unreliable intuitions
6.1.4. My Reply to Boden’s attack on Searle’s Claim
6.2. Interrogating the “Negative” Claim: Formal-computational theories cannot explain mentality
6.2.1. The First Prong: Attacking the Chinese Room argument via the Robot Reply
6.2.2. The Second Prong: Syntax versus Semantics and the English reply
7. Conclusion
Bibliography

Acknowledgements

To my supervisor Prof. Catherine F. Botha: for your patience and hard work, I am deeply grateful. As for your generosity with time, your engaging comments, your support and guidance throughout the marathon of writing this mini-dissertation, suffice it to say,

NDINOTENDA ZVIKURU MWARI VAKUKOMBOREREYI! (Thank you very much; may God bless you!)

The journey has been rewarding, and difficult at times, but the support of my family, friends, fellow students, and staff of the Department of Philosophy at the University of Johannesburg has been a source of resilience; your support is acknowledged.

THANK YOU ALL


1. Introduction

[T]he point is not that the computer gets only to the 40-yard line and not all the way to the goal line. The computer doesn’t even get started. It is not playing the game (Searle 1990: 31).

John Rogers Searle’s famous Chinese Room argument asks us to imagine Searle as a person who cannot speak or read Chinese, locked in a room with two openings - one for inputs and the other for outputs. Searle is equipped with a list of rules, written in English (Searle in Boden 1990: 69), that explain how to manipulate Chinese characters. Since he does not understand Chinese, the characters are all “meaningless squiggles” (ibid.) to him. The characters are in fact Chinese text and constitute questions coming in through the input opening. Using the rules, Searle can manipulate the characters and produce a reply in Chinese to every question in Chinese that comes through the opening. The replies that he produces are so good that no one can tell that he is not a native Chinese speaker. As Searle says, “…my answers to the questions are absolutely indistinguishable from any answer that a native Chinese speaker would give” (ibid.). Searle’s argument is that, despite this, the person in the Chinese room cannot claim to understand Chinese; put otherwise, the manipulation of symbols according to syntactic rules does not imply an understanding of meaning or semantics.

This is claimed to have significant implications for both the philosophy of artificial intelligence and the philosophy of mind. Specifically, David Cole (2014) points out that the “narrow” conclusion of Searle’s argument is that a digital computer programmed in a certain way may appear to understand language, but it does not have any real understanding. This would prove the “Turing Test”1 inadequate. Cole (ibid.) also claims that the “broader” conclusion of the argument is to refute the theory that human minds are computer-like systems. For Searle, minds result from biological processes and machines can only simulate these processes.

1 A more detailed explanation of the Turing test (named after Alan Turing) will follow under the section in which I discuss “Essential Concepts”, since the Chinese Room is often depicted as a logical objection to the Turing test.


Searle’s Chinese room argument has generated many replies, one of which is known as the Systems Reply.2 This reply acknowledges that Searle in the room does not understand Chinese, but then claims that he is in fact part of a larger system that does. The larger system comprises the enormous database of symbols, the memory (scratchpads), and the set of rules that Searle uses to manipulate the symbols he receives. The whole system is needed to answer the Chinese questions, not just Searle himself. So, the Systems reply is the claim that even though the man running the program does not understand Chinese, the system as a whole does.

There are several philosophers who have contributed to the Systems reply, John Haugeland among them. Margaret Ann Boden also develops a reply of this sort, and it is her specific “English reply” (1988: 242) that will be the focus of this mini-dissertation. My intention is to critically evaluate whether Boden’s position, as set out in her paper “Escaping from the Chinese Room”, constitutes a cogent argument against Searle’s standpoint – the argument that the Chinese Room disproves the claim of strong AI – when considering her treatment of what she calls his “positive” and “negative” claims (Boden 1988: 240). It is important to note that this paper, “Escaping from the Chinese Room” (1990), is a summarised version of her chapter “Is Computational Psychology possible?” in her 1988 book titled Computer Models of the Mind, and so I will, in this mini-dissertation, refer to both texts where appropriate.

As I explain, Boden develops a critique of what she calls Searle’s “positive claim - his view that intentionality3 must be biologically grounded” (ibid.), as well as a critique of what she calls his “negative claim - that purely formalist theories cannot explain mentality” (ibid.). What Boden does is to call into question Searle’s argument, and not the claims the argument is intended to support.

She provides a double-pronged critique of Searle’s positive claim, attempting to demonstrate that (1) Searle’s claim that intentionality is biologically grounded depends on bad “biological analogies” (Boden 1990: 92) and that (2) his claim that intentionality is biologically grounded depends on unreliable intuitions (ibid.).

2 I discuss this in greater detail later.
3 This will be explained in chapter 6.1.


She concludes that his “‘positive’ claim, […] is at best a promissory note, at worst mere mystery-mongering” (Boden 1990: 94). I argue that she comes to this conclusion too hastily, by showing that her criticism of his analogy is questionable.

Boden’s critique of Searle’s “negative” claim is also double-pronged. The first prong is aimed directly at the example of the Chinese Room, whilst the second prong attacks the background assumption upon which the Chinese Room depends, the point that “computer programs are pure syntax” (ibid.).

I argue that her criticism of his negative claim can be countered by an appeal to Searle’s understanding of “understanding,” and I attempt to demonstrate how Ronald Chrisley’s (1995) defence of Boden’s position can also be countered, to some extent, using this approach.

Although I show that Boden’s argument against both his positive and negative claims does not constitute a completely convincing rebuttal of Searle’s position, I contend that her focus on the concept of intentionality and on the brain as the “causal basis of intelligence” raises important considerations for the Philosophy of Artificial Intelligence, and so remains significant.4

1.1. Justification

Even though Boden’s paper was published in 1988, nearly a decade after Searle’s publication of “Minds, Brains, and Programs”, the questions raised by his Chinese Room argument and by Boden’s reply to it remain important today. This is so specifically because the focus of their difference lies, in my view, in the concept of ‘intentionality’, a concept that is central in the contemporary debates. In addition, there has been comparatively little5 direct attention in the literature given to Boden’s reply, and so my paper is aimed at correcting this.

4 I reserve a fuller discussion of this for another occasion due to space limitations.
5 Ronald L. Chrisley’s (1995) article “Weak Strong AI: An elaboration of the English Reply to the Chinese Room” is one of the very few papers that deals directly with Boden’s reply. I devote attention to Chrisley’s position later in the mini-dissertation.


1.2. Outline

The mini-dissertation takes the following form: I begin by providing a detailed explanation of two thought experiments: one by Alan Turing (1912-1954) - now famously called the Turing Test - which forms the background to my research question, and Searle’s Chinese Room argument. Searle’s Chinese room, as I show, is an argument against the possibility of Strong Artificial Intelligence (AI) and so is related to functionalism as an approach in the Philosophy of Mind. As a result, I briefly discuss functionalism, as well as the distinction between strong and weak AI. I then set out the exact terms of Searle’s position in order to, in later sections, show why some of the replies to Searle are inadequate. I also provide a short overview of the research and development in the field of Artificial Intelligence (AI) in general.

I then interrogate Boden’s position by providing a detailed explanation of her arguments against what she calls Searle’s positive position, and show that there are at least two problems with her position.

Subsequently, I provide an evaluation of Boden’s critique of Searle’s negative claim – “that purely formalist theories cannot explain mentality” (ibid.), a claim that she acknowledges will not be easy to invalidate (ibid.). My initial aim is to set out Boden’s position, explicitly concentrating on the two points she raises viz., Searle’s reply to the robot reply and what she calls the English reply.

Searle’s response to the robot reply is rejected by Boden, since, for her, his reading of the reply is based on mistaken similarities being drawn - similarities between the Chinese room and the claims made by computational psychology.

I then focus on Boden’s so-called English reply. In giving the English reply to the Chinese room argument, Boden argues that there is some understanding that takes place in the Chinese room: 1) the understanding required to recognize the symbols, and 2) the understanding of English required to read the rulebook, and so on (Chrisley 1995: 1, my emphasis). I show how Boden seeks support from the work of the Canadian computer scientist and philosopher Brian Cantwell Smith (1982) and the Zimbabwean philosopher and researcher on Artificial Intelligence (AI), Aaron Sloman. Boden takes Smith, and what is implied in Sloman’s argument, to be proof that computer programmes are NOT all syntax and no semantics.


My focus is to clearly show that Searle’s position remains defensible, especially when Boden’s “initial”6 critique is considered, as well as Ronald Chrisley’s (1995) later development in defence of her position.

2. Literature Review

Due to the overwhelming amount of literature that has been generated, it is not possible to discuss all of the work that has been completed in this regard. I focus here only upon texts that relate most directly to my research question.

To provide the background to Searle’s position, I use the following texts to explain functionalism, the Turing test, and computers and intelligence: Janet Levin’s 2013 article on “Functionalism”; Alan Turing’s seminal 1950 article on Artificial Intelligence, “Computing Machinery and Intelligence”; and Carl S. French’s celebrated 1996 book Computer Science.

In terms of the literature by Searle, I draw upon the following texts in order to provide a detailed explanation of his Chinese Room argument: his 1980 “Minds, Brains and Programs”; his 1983 book Intentionality: An Essay in the Philosophy of Mind; his 1990 paper “Is the Brain's Mind a Computer Program?”; and lastly, his 1999 The Problem of Consciousness. I also draw on sections from his 1997 book The Mystery of Consciousness, particularly chapter 1, “Consciousness as a biological problem”, which is an attempt to consolidate and clarify what he calls “a mess” (1997: 5) regarding an understanding of this concept. These are, in my view, sufficient for setting out his position in detail in my essay.

For the section in which I discuss the replies to the Chinese Room, I limit the texts I use to the following articles in John Preston and Mark Bishop (eds.), 2002, Views into the Chinese Room: New Essays on Searle and Artificial Intelligence:

(1) The “Introduction” by John Preston, which gives summaries of the different concepts that are used in the book: the Turing test, computationalism, strong AI, the Chinese room and the replies to it, syntax and semantics, and so on;

6 Referring to Boden’s 1988 position where she takes B.C. Smith’s argument as anchoring support for her view that there is some understanding taking place in the execution of the program.


(2) “Twenty-One Years in the Chinese Room” by John Searle, an article in which he reflects on some implications of the Chinese Room argument for cognitive science and for the larger intellectual culture (Preston and Bishop 2002: 51);

(3) “Understanding, Orientations, and Objectivity” by Terry Winograd, whose “argument is not against Searle, but is against the grounds on which the debate about his position has been conducted” (2002: 80). His issue with Searle concerns, in particular, the notion of understanding. For him, Searle is not entitled to claim that there are ‘clear cases in which “understanding” literally applies and clear cases in which it does not apply’ (Searle in Boden 1990: 71);

(4) “The Chinese Room from a Logical Point of view” by Jack Copeland which challenges only the “logical validity of the Chinese Room argument” (Preston and Bishop 2002: 29). Copeland looks closely at the various versions of the Chinese Room argument viz., “the vanilla version, the outdoor version, the simulator version, and the gymnasium version” (Preston and Bishop 2002: 109)7;

(5) “Syntax, Semantics, Physics” by John Haugeland which seeks to

show that serious AI, while not committed to denying Searle’s logical truth (that syntax is not sufficient for semantics), can respond to the CRA by denying that computer programs are purely syntactical. To do so, he outlines the conceptual foundations of AI in a way that takes account of the causal powers of programs and data (Preston and Bishop 2002: 29).

David Cole’s 2014 article “The Chinese Room Argument”, which synthesizes several responses to the argument and focuses especially on the Systems reply, is additionally useful to me.

I also consult Ronald L. Chrisley’s 1995 article “Weak Strong AI: An elaboration of the English Reply to the Chinese Room”. This article is one of the very few papers that deals directly with Boden’s English reply.

7 According to Jack Copeland, “there are four principal versions of the [Chinese Room] argument: the vanilla and outdoor versions (Searle 1980a) [apply mutatis mutandis to the derivative robot version - in Boden 1990: 76-7] are directed against traditional symbol-processing AI; the simulator and gymnasium versions (Searle 1990) against connectionism. All are unsatisfactory. […] according to Copeland, the argument could not possibly establish - as Searle claims it does - his key thesis that whatever is ‘purely formal’ or ‘syntactical’ is neither constitutive of nor sufficient for mind” (2002: 109).


The article elaborates, defends and “supports”8 Boden’s response to Searle’s “negative” claim “that purely formalist theories cannot explain mentality” (Boden 1990: 92), and so is of great importance for the current study.

To present Boden’s position, I use her 1988 book chapter “Is Computational Psychology possible?” and the shorter version thereof, entitled “Escaping from the Chinese Room” (1990). I also consult some of Boden’s other works, including other sections of her 1988 Computer Models of the Mind: Computational Approaches in Theoretical Psychology, which discusses fundamental theoretical issues and relates them to a diverse range of specific computer models in showing how computational approaches can contribute to psychological research. I also consult her 1977 Artificial Intelligence and Natural Man and her 2004 The Creative Mind: Myths and Mechanisms.

3. Essential Concepts and Background

In order to understand Searle’s position better, it is necessary to begin with a brief outline of the position he rejects – functionalism.

3.1. Functionalism

Functionalism,

[…], is the view that mental states are functional states and functional states are physical states; [and] those physical states are defined as functional states in virtue of their causal relations” (Chalmers in Searle 1997: 141).

A conviction is an example of a mental state: for example, I could be convinced that the Madibeng9 building will be there tomorrow. This means that my conviction, as a mental state, would be seen by a functionalist as a functional state10 because it affects the way I function (i.e. I would not cancel my appointment at a room in the Madibeng building because of my conviction that it would still be there tomorrow).

8 As I will explain later, Chrisley (1995) aims at strengthening the English reply by discussing and refuting a number of objections to it.
9 Madibeng (a Tswana word meaning ‘the place of water’) is the main administrative building that houses the Vice-Chancellor and other important functionaries of the University of Johannesburg.
10 A functional state can be defined in terms of what the organism does rather than the actual composition of the internal state. In other words, a functional state is a “state defined in terms of some of its causes and effects” (Piccinini 2009: 2). For example: being thirsty. I am thirsty in reference to my tendency to move towards a water source, to drink and quench my thirst, etc. Hilary Putnam (1967), in his article “The Nature of Mental States”, discusses this in greater detail when he looks at “functional states versus brain states” (in Cooney 2000: 224).


There are several varieties of functionalism, the most relevant of which for this paper is called computational functionalism. Computational functionalism, as described by Levin (2013), is a theory that claims the mind is the computational organization of the brain. Computational functionalism has been described as the most influential variety of functionalism; sometimes it is called the “computational theory of mind” (ibid.). This theory contends that mental states correspond to what is called software in a computer, while the brain corresponds to the hardware.

The argument that then arises is that, if one accepts the comparison with computers, the possibility of a machine having mental states becomes real. If machines run the right kind of software, then they will be able to achieve the right mental or software states. Functionalism thus raises the question of the meaning of the term “Artificial Intelligence”.
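To make the software/hardware comparison concrete, the following minimal sketch (my own illustration in Python, not drawn from Levin, Boden, or Searle) shows one and the same functional state - characterised purely by its causal profile of inputs and outputs, as in the conviction example above - realised by two different underlying mechanisms:

# Illustrative sketch only: the same functional state (defined by its causal
# role, i.e. its input-output profile) realised by two different mechanisms.

class LookupAgent:
    """Realises the decision by a simple table lookup."""
    TABLE = {True: "keep appointment", False: "cancel appointment"}

    def decide(self, believes_building_will_be_there: bool) -> str:
        return self.TABLE[believes_building_will_be_there]

class RuleAgent:
    """Realises the same decision by an explicit conditional rule."""

    def decide(self, believes_building_will_be_there: bool) -> str:
        if believes_building_will_be_there:
            return "keep appointment"
        return "cancel appointment"

if __name__ == "__main__":
    for agent in (LookupAgent(), RuleAgent()):
        # Identical causal profile, different internal realisation.
        print(agent.decide(True), "/", agent.decide(False))

On this picture, nothing about the difference in internal construction matters to the identity of the state, which is precisely what makes the move from brains to machines seem available, and what Searle goes on to resist.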

3.2. Artificial Intelligence (AI)

Artificial Intelligence can be very broadly defined as “…the study of a certain class of computational techniques” (Hasemer and Domingue 1989: 3). This area of research has developed rapidly since the Dartmouth Summer Research Project on Artificial Intelligence in 1956. However, more narrowly, AI can be defined as “… the science of making machines do things that would require intelligence if done by [humans]” (Minsky 1968: v). Boden defines the philosophy of AI as,

[…] the use of computer programs and programming techniques to cast light on the principles of intelligence in general and human thought in particular (Boden 1977: 5).

From these definitions, one can glean a general trend about Artificial Intelligence; it has something to do with intelligent machines. For Boden,

[i]ts goal is to understand, whether for theoretical or technological purposes, how representational structures can generate behaviour and how intelligent behaviour can emerge out of unintelligent behaviour (1988: 6).

As I will discuss in a later section, Searle makes a distinction between strong and weak AI, which is significant for his position. However, before I can discuss that, I turn first to a brief discussion of intelligence in general.


3.3. Intelligence

The quest to understand intelligence remains an ambitious project. Alan Turing, the twentieth-century English computer scientist, mathematician, logician, and cryptanalyst, was one of the prominent people who attempted to elucidate a clearer understanding of what intelligence is. What intelligence is,11 nonetheless, remains highly elusive. Stuart Shieber explains:

How do you tell if something is a meter long? You compare it with an object postulated to be a meter long. If the two are indistinguishable with regard to the pertinent property, their length, then you can conclude that the tested object is the given length. Now, how do you tell if something is intelligent? You compare it with an entity postulated to be intelligent. If the two are indistinguishable with regard to the pertinent properties, then you can conclude that the tested entity is intelligent. A test of intelligence such as this, based on indistinguishability, has a certain plausibility to it, and a long history (2004: 1).

In its modern form, such a test of intelligence has become synonymous with what is now known as the Turing Test.

3.4. The Turing Test

In 1950, Turing wrote what has proved to be a seminal article titled “Computing Machinery and Intelligence” in the journal Mind. According to James Moor,

Turing believed that computers, if properly designed and educated, could exhibit intelligent12 behaviour, […] that would be indistinguishable from human intelligent behaviour (Moor 2003: vi).

Turing’s article considered the question, “Can machines think?”, which provoked new ways of enquiring into how people think. In that epoch-making paper Turing found the “original” question13 “… too meaningless to deserve discussion”14.

11 N.B. Testing for intelligence and defining intelligence are two different things.
12 Jack Copeland (Copeland in Moor 2003: 5) poses what I find to be an interesting question, but one that I cannot discuss in detail in this paper due to space limitations: “Did Turing offer a definition of ‘Thinking’ (or ‘Intelligence’)?” His concerns arise from his view that many commentators do not distinguish between ‘thinking’ and ‘intelligence’. In my reading, Turing used the terms “think” and “be intelligent” as if they were synonyms, as one can tell by a simple comparison of his article’s title and first sentence. However, the two words often mean quite different things. As Shieber (2004: 6) notes, when I say that my son is intelligent, I usually mean more than the mere fact that he is capable of thought. Many authors, however, follow Turing’s practice, taking the notion of “being intelligent” under which it means “being capable of thought”, rather than “being smart” (ibid.).
13 By ‘original question’, Turing meant: “Can machines think?” (Turing in Boden 1990: 40).
14 A very interesting comment is put forward by Vincent John Mooney III in his paper “Searle’s Chinese Room and its Aftermath.” He says “…when Turing says that the question of whether or not machines can think is “too meaningless to deserve discussion”, he is asserting a philosophical, not scientific, position. He has nowhere proven scientifically that this question is meaningless - at least, I am not aware of any such claimed proof.” (1997: 2).


He sought to replace it with “…something more concrete” (Turing in Shieber 2004: 67), and therefore proposed a shrewd perspective on dealing with the idea of thinking:

He found this concrete form in a game-theoretic crystallization of Descartes’s observation that flexibility of verbal behaviour is the hallmark of humanness. Turing went on to propose an “imitation game” (Shieber 2004: 67).

In the imitation game, an interrogator attempts to determine which of two agents is human and which is the machine. The interrogator makes her determination based on purely verbal interaction with both agents.

According to Searle,

[T]he Turing test, as currently understood, is simply this: if a computer can perform in such a way that an expert cannot distinguish its performance from that of a human who has certain cognitive ability - say, the ability to do addition or understand Chinese - then the computer also has that ability (1990: 26).

What is crucial, according to Boden, is to keep in mind that Turing argued that the question of whether a machine can think should not be decided on the basis of a prior (and possibly question-begging) definition of ‘thinking’ but by enquiring whether some conceivable computer could play the ‘imitation game’ (1990: 4).
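Purely to illustrate the structure of the test (the following Python sketch is my own schematic, not Turing’s specification, and the canned respondents are obviously not serious candidates), the imitation game can be pictured as an interrogator who sees only typed answers and must judge which hidden respondent is the machine:

# Schematic sketch of the imitation game: the interrogator receives only
# verbal (typed) answers and must guess which hidden respondent is the machine.

def human_respondent(question: str) -> str:
    # Stand-in for a human's verbal behaviour.
    return "Let me think about that for a moment."

def machine_respondent(question: str) -> str:
    # Stand-in for a programmed respondent.
    canned = {"What is 12 + 7?": "19"}
    return canned.get(question, "That is an interesting question.")

def imitation_game(questions, interrogator) -> bool:
    """Return True if the interrogator correctly identifies the machine."""
    respondents = {"A": human_respondent, "B": machine_respondent}
    transcripts = {label: [ask(q) for q in questions]
                   for label, ask in respondents.items()}
    return interrogator(questions, transcripts) == "B"   # purely verbal evidence

if __name__ == "__main__":
    # A crude heuristic: guess that whoever answers arithmetic crisply is the machine.
    guess_machine = lambda qs, ts: next(
        (label for label, answers in ts.items() if "19" in answers), "A")
    print(imitation_game(["What is 12 + 7?"], guess_machine))   # True

The only evidence available to the interrogator is conversational behaviour; whether such behavioural indistinguishability warrants ascribing thought or understanding to the machine is exactly what Searle’s Chinese Room goes on to dispute.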

The Turing Test and its approach remain important, even if one would want to disagree that a machine can think. As Hasemer and Domingue (1989: 7) argue,

the realization that while some apparently intelligent activities, such as arithmetic, could be described via a simple set of program instructions, others such as understanding the meaning of a sentence could not. In other words, if some intelligent activities can be represented as computational techniques, perhaps all of them can […] if not, then in the process of trying we stand a good chance of discovering something very fundamental about the human mind.

In this section, I have given a very brief explanation of the Turing test understood as a conversational ability contest. For the success of the test, “a particular kind of machine, […] called a […] ‘digital computer’” (Turing in Boden 1990: 43) would be a key agent, especially because of its potential to compute. It could possibly replace the ‘human’ computer (http://crgis.ndc.nasa.gov/historic/human_computers). Prior to the digital or electronic computer, the “computer” was a job title designating someone who performed mathematical calculations by hand (ibid.).


I now investigate in some detail the concept of a digital computer.15

3.5. Digital Computer

Simply put, a general-purpose digital computer is a “device that manipulates formal symbols” (Searle 1997: 9). Tim Crane in his book The Mechanical Mind says,

Computers are usually made out of a combination of metal and plastic, and most of us know that they have things inside them called ‘silicon chips’, which somehow make them work […] a device which processes representations in a systematic way (2003: 85).

According to Crane, the idea of “representation” is implicit in the idea of the - what Boden calls “a theoretical ideal” (1988: 259). In this context,

a representation is a configuration of some kind that, as a whole or part by part, corresponds to, is referentially associated with, stands for, symbolizes, interacts in a special manner with, or otherwise represents something else (Palmer in Goldin and Kaput 1996: 398).

Computers are at times referred to as information processors or what Mark Halpern labels “general-purpose algorithm executors” (2006: 55). For Crane, computers process representations (2004: 102), where representations are understood as that which “carry information in the sense that they ‘say’ something, or are interpretable as ‘saying’ something. That is what computers process or manipulate” (ibid.).

The computer is a two-sided artefact - on one side is the hardware (what we touch and see), and on the other side is the software, in other words the program. This is the “invisible” side of a computer.

3.6. Computers as Intelligent

Computers as machines are very interesting mainly because they seem to function in a way similar to how people do - they store, retrieve, manipulate and communicate data as information. In this they are unlike other physical machines such as steam engines and cars. The software capability, taken as similar to human thought, has resulted in computers being presented as intelligent, and has given rise to the debate on Artificial Intelligence.

15 ‘Computer’ and ‘machine’ will be used interchangeably here, but later I will highlight Searle’s problematisation of this.


The idea that the computer is capable of having intelligence indistinguishable from humans suggests that machines (computers) could support or enhance human reasoning.

The question is how the ‘plastic’, ‘silicon’ and ‘the invisible factor inside’ are able to perform an action that is ordered, interactive and intelligent. The answer: computers are programmed. That means they follow a sequence of instructions, taking input and giving output in a structured way.
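As a minimal sketch of what this amounts to (my own toy example in Python; the three ‘instructions’ are invented purely for illustration), a program is just an ordered list of instructions executed step by step over whatever input it is given:

# A "program" as nothing more than an ordered list of instructions, executed
# step by step, taking an input string and giving an output string.

program = [
    ("strip", None),   # remove surrounding whitespace
    ("upper", None),   # convert to upper case
    ("append", "!"),   # attach a terminating symbol
]

def run(program, data: str) -> str:
    for operation, argument in program:   # structured, step-by-step execution
        if operation == "strip":
            data = data.strip()
        elif operation == "upper":
            data = data.upper()
        elif operation == "append":
            data = data + argument
    return data

print(run(program, "  hello "))   # -> HELLO!

Nothing in this execution depends on what the input means; the machine simply transforms one string of symbols into another according to the listed instructions.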

A “structured execution”, that is, processing representations in a systematic way, suggests a similarity to how humans process information. Based on this similarity, computers are also proposed as things that think. The contentious point, however, is not merely that they are viewed as things that think, but that this activity they carry out is claimed to be indistinguishable from what humans and other sentient biological organisms16 do.

4. Searle and the Chinese Room argument

In “Minds, Brains and Programs”, Searle seeks to show that Strong Artificial Intelligence (AI) is false. Searle begins by distinguishing ‘strong AI’ from what he terms ‘weak’ or ‘cautious’ AI (Searle in Boden 1990: 67). For Searle, Strong AI is untenable.

4.1. Strong Artificial Intelligence (AI)

According to Searle, Strong AI is the theory of the mind that states: “the mind is just a computer program” (1997: 9). I find the definition that he put forward in his 1980 article to be clearer,

the appropriately programmed computer is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states (Searle in Boden 1990: 67).

The running of the program would therefore be classified as a mental activity.

16 An ancillary point that Searle makes is explained in the book The Mystery of Consciousness where he says, “Humans and higher animals are obviously conscious, but we do not know how far down the phylogenetic scale consciousness extends” (1997: 5). It could be contentious but as an example, the present state of neurobiological knowledge implies we would not worry about fleas as conscious (ibid.). For the purpose of explaining his idea of intentionality; Searle limits it to humans and what he calls higher animals. Searle (2014) points out that consciousness is open to humans and non-human animals. Since my focus in this essay is human intelligence, I will not discuss non-human animal intelligence.


4.2. Weak Artificial Intelligence (AI)

According to Boden, Searle describes Weak AI as that which attaches “the principal value of a computer in the study of the mind to be that it only gives us a very powerful tool” (1990: 67). In a work published years later, Searle puts it across more clearly,

the computer is a useful tool, in doing simulations of the mind, as it is useful in doing simulations of just about anything we can describe precisely, such as weather patterns or the flow of money in the economy (Searle 1997: 9).

A tool is something that one uses to achieve a required or desired result. A trowel, for example, is not the builder, but it is used to build the wall, which is the end result. It is used to achieve the end; it does not itself have the builder’s skill, but it is important in achieving the required result.

However, Weak AI is not what interests Searle. He takes it as not presenting a bold challenge worth exploring, and so his focus is on challenging Strong AI.

4.3. The Chinese Room Argument in Focus

As was briefly outlined in the Introduction, in the thought experiment Searle imagines himself locked in a room with two openings, one to allow inputs and the other for outputs, together with a guideline or list of rules written in English (Searle in Boden 1990: 69).

The guidelines that Searle has explain how to manipulate Chinese characters. Since Searle does not understand Chinese, the characters are all “meaningless squiggles” (ibid.) to him. The characters are in fact Chinese texts and questions coming in from the input opening. Using the rules, he manipulates the characters and produces each reply, which he pushes out through the output opening. The answers coming out of the opening are in Chinese, and they are very good. In fact, the replies are so good that no one can tell that he (Searle) is not a native Chinese speaker; “my answers to the questions are absolutely indistinguishable” (ibid.). So what does this mean?

Basically, what Searle has done is to conduct symbol manipulation, with no understanding of Chinese. Importantly, he produced very good results despite not knowing Chinese in the way a native Chinese speaker does.


Hence his conclusion: syntax is not equal to semantics (Searle in Boden 1990: 83). In brief, syntax, according to George Luger, is about the “rules for combining words into legal phrases and sentences, and the use of those rules to parse and generate sentences” (2009: 623), and so is related to the manipulation of rules and symbols. On the other hand, semantics “considers the meaning of words, phrases, and sentences and the ways in which meaning is conveyed in natural language expressions” (ibid.), and so is related to understanding.
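The point can be made vivid with a deliberately toy sketch (my own, written in Python; Searle of course imagines a vastly richer rulebook, but the scale does not affect the point): every step is pure symbol-matching, and no step requires knowing what the symbols mean.

# Purely syntactic symbol manipulation: the "rulebook" pairs incoming shapes
# with outgoing shapes. To the rule-follower they are "meaningless squiggles".

RULEBOOK = {
    "你会说中文吗？": "当然会。",        # "Can you speak Chinese?" -> "Of course."
    "你叫什么名字？": "我叫小房间。",    # "What is your name?" -> "I am called Little Room."
}

def chinese_room(incoming: str) -> str:
    # Match the incoming squiggles and hand back the prescribed squiggles.
    return RULEBOOK.get(incoming, "请再说一遍。")   # "Please say that again."

print(chinese_room("你会说中文吗？"))   # fluent-looking output, zero semantics

However large such a table, and however sophisticated the rules that generate its entries, the procedure itself attaches no meaning to the symbols; that is exactly the sense in which, for Searle, syntax falls short of semantics.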

5. Replies to Searle

As I mentioned in the literature review, Searle’s argument has generated an enormous literature of criticism. Here I provide a brief overview of the main replies17 to Searle. In this essay, I discuss the six replies briefly, but my main focus is on the Systems reply18 and the Robot reply, since this will assist me in introducing the main focus of my paper – Boden’s response. Boden’s response deals specifically with the robot reply, and because her argument has been characterised as an example of the systems reply, I put greater emphasis on these two responses.

5.1. The Replies in Brief

5.1.1. The Systems Reply

The first objection, called the Systems reply, arose from Berkeley19 during a colloquium.20 As I have already briefly explained in the Introduction, it agrees that the man in the room does not understand the language (Chinese), but only because “he is merely part of a whole system, and the system does understand the story” (Searle in Boden 1990: 72). In other words, “understanding is not being ascribed to the mere individual; rather it is ascribed to the whole system of which he is a part” (ibid.).

Several responses were proposed to counter the Systems Reply. David Cole (2014) gives what I think is a lucid summary of the important point that Searle makes to the Systems Reply.

17 By “main replies” I refer to the six replies that he included in his published 1980 article. Searle points out that in total the article provoked twenty-seven simultaneously published responses, almost all of which were hostile to the argument, and some downright “rude” (Searle 2009: n.p. http://www.scholarpedia.org/article/Chinese_room_argument).
18 This reply is what Searle claims to be the most common attack (Searle 2009). According to Joel Walmsley, in his book titled Mind and Machine, “Legend has it that the response was made by Herbert Simon - who happened to be visiting at the time - when he pointed out that John-in-the-room was much like a single neuron in a brain” (Walmsley 2012: 86).
19 John Searle’s home institution, the University of California at Berkeley, California.
20 Joel Walmsley, in his 2012 Mind and Machine, provides a detailed discussion.


According to Cole, Searle responds to assertions of the Systems Reply in a modest way. The principle is that for Searle the occupant can

internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach ‘any meaning to the formal symbols’ (Cole 2014: n.p.).

Searle’s argument hinges on the point that even if we grant that the occupant is the entire system, he still would not understand Chinese. An interesting example given by Cole (ibid.) is that despite the man being the entire system, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax.

Another response was proposed by Jack Copeland (2002), in his paper “The Chinese Room from a Logical Point of View”. Copeland considers Searle's response to the Systems Reply, arguing that a little man inside “Searle's head might understand even though the room operator himself does not, just as modules in minds solve tensor equations that enable us to catch cricket balls” (ibid.). Copeland’s view is that even if:

…the individual players [do not]21 understand Chinese […] there is no entailment from this to the claim that the [system] as a whole does not come to understand Chinese. The fallacy involved in moving from part to whole is even more glaring here than in the original version of the Chinese Room Argument (2002: 116).

Copeland denies that connectionism22 implies that a room of people can simulate the brain. Searle believes connectionism poses no new challenges to the Chinese room argument (Mooney 1997: 27).

Yet another response comes from John Haugeland (2002). For Haugeland, who accuses Searle of a form of the description fallacy23, Searle's response to the Systems Reply is flawed since,

21 Cole’s emphasis.
22 According to Jonathan Waskan, connectionism is an approach to the study of human cognition that utilizes mathematical models, known as connectionist networks or artificial neural networks. Often, these come in the form of highly interconnected, neuron-like processing units. It is closely linked to computational neuroscience (Waskan 2016: n.p.).
23 The fallacy that all statements can be understood purely in terms of their descriptive meaning.


…what [Searle] now asks is what it would be like if he, in his own mind, were consciously to implement the underlying formal structures and operations that the theory says are sufficient to implement another mind (2002: 380).

In Haugeland’s view, failure to understand Chinese is irrelevant: the man in the room is just the implementer. The larger system implemented would understand.

Contrary to the above-mentioned theorists, Stevan Harnad provides a two-paper defence (1989 and 2012) of Searle’s argument in the context of the Systems Reply. In his 1989 paper, Harnad writes,

Searle formulates the problem as follows: Is the mind a computer program? Or, more specifically, if a computer program simulates or imitates activities of ours that seem to require understanding (such as communicating in language), can the program itself be said to understand in so doing? (Harnad 1989: 5).

The issue raised here hinges on understanding. Harnad says, “On the face of it, Searle’s Chinese room argument looks valid and certainly works against the most common rejoinder, the ‘Systems Reply’” (ibid.). According to Cole (2014), Harnad seems to follow Searle in connecting understanding and states of consciousness; further, Harnad (2012) argues that Searle shows that the core problem of conscious “feeling” requires sensory connections to the world (ibid.).

5.1.2. The Robot reply

This objection originated from Yale University. In John Preston’s view, the Robot reply is more natural because it refers to the causal contact a system has with its environment, for example, being capable of reacting to stimuli, negotiating terrain, and operating upon things. This is contrary to the Systems Reply, where the system has “no more means of attaching meaning to, or interpreting, the Chinese symbols than the person in the Room had” (Preston and Bishop 2002: 29).

Robert Damper in his book The Logic of the Chinese Room summarises the Robot reply succinctly;

if we replace the disembodied AI program by a robot with sensors, effectors, etc., this is ‘intelligent’ in just the way that a human is (2006: 165).

What this means is that Searle-in-the-room would supposedly acquire understanding of Chinese if sensori-motor mechanisms were added.

What is clear for Boden is that,


the Robot reply accepts that the only understanding of Chinese which exists in Searle’s example is that enjoyed by the Chinese people outside the room. Searle-in-the-room’s inability to connect Chinese characters with events in the world shows that he does not understand Chinese (1990: 94).

The nub of this position, as Boden explains, is that the computer is placed inside a robot and is made not only to take “formal symbols as input and give out formal symbols as output” (Boden 1990: 76), but also to do more, by operating the robot. However, Searle’s response is that the robot only ends up doing “something like perceiving, walking, moving about, hammering nails, eating, drinking - anything you like” (ibid., emphasis mine), but, in his view, still does not have “genuine understanding and other mental issues” (ibid.). I now look at the reply in greater detail.

Searle's Reply

Searle’s initial hunch is to point out that the robot reply concedes the falsity of Strong AI, because “to insist that syntax plus external causation would produce semantics is to admit that syntax is insufficient for semantics” (1987: 297). For Searle, in the Robot reply,

the addition of ‘perceptual’ and ‘motor’ capacities adds nothing by way of understanding, in particular, or intentionality, in general, to Schank's original program (Searle in Boden 1990: 76).

Searle therefore emphasizes that what he does as Searle-in-the-robot is just manipulating formal symbols. He does not know about other factors, viz. that he receives “information” from the robot’s “perceptual” apparatus and that he is just pushing out “instructions” to its motor apparatus, without really understanding those factors (in Boden 1990: 77). In Searle’s words:

I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on. I don't understand anything except the rules for symbol manipulation (ibid.).

For Searle, the robot has no intentional states at all. All the movement that occurs is the sum of its electrical connections and the programming being executed. Furthermore, despite executing the programming, Searle-in-the-robot still fails to have “intentional states of the relevant type” (ibid.). All his action is merely the following of a formal program and the manipulation of formal symbols (ibid.).


Given any feasible program to control a robot, Searle-in-the-robot can execute it and still not understand. Hence, Searle-in-the-robot still lacks intentional states, and as a result the robotic functionalist hypothesis fails too.

5.1.3. The Brain Simulator reply

The third objection put forward against the Chinese room argument originated from Berkeley and the Massachusetts Institute of Technology (MIT), and hinges on the issue of knowledge representation. Here the suggestion is that we design a program that “simulates the actual sequence of neurone firings at the synapses of the brain” (Searle in Boden 1990: 77). The objection proposes that we look into how neural firings occur in people when they receive input and when they produce output, and that we then produce a machine to simulate a similar set of sequences. Then, possibly, at that level such a system understands.

In response, Searle first points out that this reply is “unusual”, for the reason that it hinges on an idea of simulation (ibid.). Simulation can be generally defined as the imitation of either a process or a situation (Bode and Dietrich 2013: 54-55). For Searle, strong AI is based on the idea that “we don’t need to know how the brain works to know how the mind works” (ibid.), and so the brain simulator reply certainly seems odd.

Second, Searle still insists that there is no understanding in the system on this reply. The only thing that the system does is that it takes Chinese input, simulates the formal structure of the synapses of the Chinese brain and gives Chinese output (in Boden 1990: 78). As a result, Searle concludes that “…the problem with the brain simulator is that it is simulating the wrong things about the brain” (ibid.).

Terry Winograd, however, argues that Searle misses the point that his respondents24 were making. Their point was that “simulation could duplicate in some exact sense the activities of the brain” (in Preston and Bishop 2002: 87). Searle views this response as appealing to a technical issue and not as a response to a specific philosophical issue (in Preston and Bishop 2002: 87). The issue Searle reminds us he is raising with the Chinese Room is whether a “computer running a symbolic simulation could ever be described as ‘understanding’” (ibid.).

24 Referring to those participants from Berkeley and MIT, who attended colloquia at these institutions.


In my view, Searle’s conclusion is reasonable, since the brain simulator reply is based on attempts at copying or simulating an action (the operation of the brain), and this is not sufficient to address Searle’s worry: how to produce understanding (Searle in Boden 1990: 78).

5.1.4. The Combination reply

This objection, emanating from Berkeley and Stanford, suggests that the best way of dealing with the Chinese room argument is to take the three previous replies and benefit from the issues they raise. Collectively, they could be more convincing and even decisive (Searle in Boden 1990: 78) in undermining the Chinese room argument. We are asked to imagine:

a robot with a brain-shaped computer lodged in its cranial cavity, imagine the computer programmed with all the synapses of a human brain, imagine the whole behaviour of the robot is indistinguishable from human behaviour, and now think of the whole thing as a unified system and not just as a computer with inputs and outputs (ibid.).

In Searle’s view, based on the above formulation it is agreeable to ascribe intentionality to the unified system (in Boden 1990: 79). However, his worry is how this would be of help to the claims of Strong AI, for instance the claim that “the appropriately programmed computer is a mind […] can be literally said to understand and have cognitive states” (Searle in Boden 1990: 67).

To fortify his reply, Searle gives an example of how we find it reasonable to ascribe intentionality to other primates and domestic animals, based on the assumption of similarity (in Boden 1990: 80). Searle points out two issues: the coherence of animal behaviour with our own, and the assumption that mental states must be produced by something made out of stuff that is like our stuff (ibid.). In his view, this is how we come to make similar assumptions about the robot. However, in Searle’s view, this argument falls away once we find that 1) the behaviour resulted from formal programming, and 2) the actual causal properties of the physical substance were irrelevant. As a result, Searle claims that the assumption of intentionality falls away (ibid.).

I have so far looked at the four replies that Searle received and responded to. He goes on to further discuss two other replies: one from Yale and another from Berkeley.


According to him, these two replies really miss the point of the Chinese room argument. Nonetheless, he thinks they are worth discussing (ibid.).

5.1.5. The Other Minds reply

From Yale comes “the Other Minds reply”, which claims

[If] a computer can pass the behavioural tests as well as [people] can … if you are going to attribute cognition to other people you must in principle also attribute it to computers (Searle in Preston and Bishop 2002: 88).

For this reply, there is only one way of knowing that other people have mental states, in other words that they understand (in Boden 1990: 80), and that is by observing their “behaviour” (ibid.). This suggests that if a computer could, in principle, pass a behavioural test, such a computer must also be attributed understanding.

Searle’s response to the Other Minds reply is to insist that “cognitive science cannot be based purely on behavioural description” (in Preston and Bishop 2002: 88) since behaviour does not necessarily guarantee that we know what is going on in other minds, or even that there is another mind at all. For example, I could walk into the Office of the Head of Department of Philosophy at University of Johannesburg and see her smiling. Then based on the smile I could conclude that she is happy. Unknown to me it could be the case that she is only acting and she is in fact very tired. In the same way, I could engage in a “conversation” via e-mail generated by a machine programmed to appear intelligent, but that does not guarantee that the machine “understands” what I am saying.

Searle argues that the issue at hand “is not how I know that other people have cognitive states, rather what it is that I am attributing to [other people] when I attribute cognitive states […]” (Searle in Boden 1990: 80). What is evident is that it is not just computational processes and output, since these two can exist without the cognitive state (ibid.). Searle’s argument is that,

[i]n ‘cognitive sciences’ one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects (ibid.).

5.1.6. The Many Mansions reply

Yet another reply arose from Berkeley. This response to the Chinese room argument is about the current state of innovation, saying that it has not developed adequately to achieve the right causal processes essential for intentionality (Searle in Boden 1990: 81).


Briefly, the argument states, “[…] eventually we will be able to build devices that have these [mental] causal processes” (Winograd in Preston and Bishop 2002: 89). This reply, according to Cole (2014: n.p.), takes the digital computer as a bad example of something having the potential to understand, but it suggests that in future we may be able to build artificial intelligence using different devices, closer to neurons, that will understand.

Searle in his reply says,

I really have no objection to this reply save to say that it in effect trivializes the project of strong AI by redefining it as whatever artificially produces and explains cognition (Searle in Boden 1990: 81).

According to Andrew Basden (2007), Searle counters this objection astutely. Searle seemingly agrees with what the reply proposes, the idea that the current state of innovation is what hinders machines from achieving intentionality. Yet he insists that the reply avoids the issue.25 By redefining the aim of strong AI, the reply renders itself irrelevant to the precise, well-defined thesis of strong AI, namely: “mental processes are computational processes over formally defined elements” (Searle in Boden 1990: 81).

In this section, I have explained the distinction that Searle makes between Strong AI and Weak AI. I summarised the Chinese Room Argument and then discussed, in varying detail, the six replies that Searle noted and included as part of his 1980 article. In the following section, I discuss a separate reply that was given by Boden (1990). In interrogating that reply, I argue that Boden’s argument against what she calls Searle’s mistaken claim - the claim that “computational theories in psychology are essentially worthless” (Boden 1990: 87) - suffers from a number of problems.

6. Interrogating Boden’s Reply to Searle

My focus now turns to Boden’s position. This section forms the central part of my research, and comprises a detailed analysis of Boden’s article “Escaping from the Chinese Room” (1990) in which she sets out her so-called English reply to Searle.

25 The issue at hand is the possibility of Strong AI.


I divide this section into subsections using the parts of Boden’s arguments, outlining her arguments in detail, and then immediately including my critical responses to her.

Boden begins by distinguishing two main claims that she thinks Searle makes:

(1) that because computational theories are purely formal in nature, they cannot help us to understand mental processes (Searle’s negative claim); and

(2) that computer hardware does not have the right causal powers26 to generate mental processes, unlike neuroprotein (Searle’s positive claim).

This she calls “Searle's two-pronged critique of computational psychology” (Boden 1990: 92). Boden then proceeds to develop a critique of what she calls Searle’s “positive claim - his view that intentionality must be biologically grounded” (ibid.). Later in the paper she replies to what she calls his “negative claim - that purely formalist theories cannot explain mentality” (ibid.).

As I will discuss, Boden provides a double-pronged critique of Searle’s positive claim. On this count, she attempts to demonstrate that (1) Searle’s claim that intentionality is biologically grounded depends on bad “biological analogies” (Boden 1990: 92) and that (2) his claim that intentionality is biologically grounded depends on unreliable intuitions (ibid.). She claims that his “‘positive’ claim, […] is at best a promissory note, at worst mere mystery-mongering” (Boden 1990: 94). In my view, she comes to this conclusion too hastily. I show this by demonstrating that his analogy between photosynthesis and intentionality is not as “bad” an analogy as Boden suggests. I argue against Boden that we know less about photosynthesis than she asserts we do, and more about intentionality than she asserts, and that on this basis, Searle’s analogy is stronger than she gives it credit for. My view is that Boden’s position is based on an implicit bias in her work – that science gives us knowledge based on absolutely true facts, and that this gives rise to her attack on Searle’s analogy.

26 Boden would say that the right causal powers Searle has in mind are a matter of the material stuff - what something is made of - not of how things function. The term stuff is dealt with at length by J. Christopher Maloney in his paper titled The Right Stuff (1997). The paper is a defence of Searle’s position on consciousness, in other words, intentionality. The stuff he refers to is neuroprotein - something he takes for granted to be the right stuff to generate intentionality. In contrast, he argues that present-day computers are not made of the right ‘stuff’, that is, metal, plastic and silicon.


With regard to her second claim, that is, that our intuitions have (at present) nothing useful to “say about the material basis of intentionality”, Boden (1988: 242) makes it clear that she is willing to grant that:

Our intuitions might change, with the advance of science. Possibly we shall eventually see neuroprotein (and perhaps silicon too) as obviously capable of embodying mind, much as we now see biochemical substances in general (including chlorophyll) as obviously capable of producing other such substances (ibid.).

My view is that even though Boden is correct in claiming that the material basis of intentionality still requires further investigation, claiming that Searle’s position is a mere intuition (Boden 1990: 92) is misleading.

I discuss what Boden calls Searle’s “negative claim - that purely formalist theories cannot explain mentality” (ibid.) in a later section of this mini-dissertation. At this point, it is sufficient to note that she divides her argument against his position into two parts, first considering the example of the Chinese Room directly, and second, questioning the background assumption on which the Chinese Room depends, the point that “computer programs are pure syntax” (ibid.).

6.1. Interrogating Boden’s critique of Searle’s Positive Claim – Intentionality must be Biologically Grounded

Searle insists that “intentionality, […] is a biological phenomenon” (Searle in Boden 1990: 86). This is a view that he has not abandoned throughout his career. In an interview in January 2014 with Zan Boag, for example, Searle again insists that “consciousness27 is a biological property like digestion or photosynthesis” (Searle 2014: n.p.). For him, intentionality arises from the underlying biochemistry of conscious beings, just as photosynthesis and lactation are dependent on underlying substances (Searle in Boden 1990: 86).

He argues that a “deep and abiding dualism” (Searle in Boden 1990: 87) causes people to think that it is possible that intentionality is NOT a biological phenomenon. Searle believes that “whatever […] the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality” (Searle in Boden 1990: 87).

27 Searle distinguishes between intentionality and consciousness, but he sometimes uses the terms interchangeably. I discuss this presently.



It is important to note at this point that Searle distinguishes between consciousness and intentionality. Searle proposes a common-sense definition of “consciousness”. By consciousness he refers to those,

states of sentience and awareness that typically begin when we awake from a dreamless sleep and continue until we go to sleep again. Or fall into a coma or die or otherwise become “unconscious” (1997: 5). Consciousness so defined is an inner, first-person, qualitative phenomenon (subjectivity) (ibid.).

For Searle, conscious states could exhibit intentionality, but not necessarily so. Intentionality can be generally defined as follows:

[I]ntentionality is that property of many mental states and events by which they are directed at or about or of objects and states of affairs in the world […] if I have an intention, it must be an intention to do something (Searle 1983: 1) – a feature of directedness or aboutness “in relation to some object or content” (Cooney 2000: 7). Three years after the publication of “Minds, Brains and Programs”, Searle wrote a book with the primary aim of developing a theory of intentionality, in which he defines intentionality as directedness […] intending to do something […] (1983: 3). Over ten years later, Searle (1999: 4) elaborated: “Intentionality” is the name that philosophers and psychologists give to that feature of many of our mental states by which they are directed at, or about, states of affairs in the world. If I have a belief or a desire or a fear, there must always be some content to my belief, desire or fear. It must be about something even if the something it is about does not exist or is a hallucination. Even in cases when I am radically mistaken, there must be some mental content which purports to make reference to the world.

Importantly, for Searle (1997), not all conscious states have intentionality in this sense. For example, there are states of anxiety or depression where one is not anxious or depressed about anything in particular but just in a bad mood. This would not count as an intentional state. But if one is depressed about a forthcoming event,


that is an intentional state, because it is directed at something beyond itself (Searle 1997).

Geir Overskeid (2005) points out another important aspect that Searle (1992) introduces, which he calls “nonconscious intentionality” (2005: 599). In Searle’s view, nonconscious beliefs may still be intentional, as long as they are potentially conscious (ibid.)28. Searle says,

I now want to make a very strong claim, […] The claim is this: Only a being that could have conscious intentional states could have intentional states at all, and every unconscious intentional state is at least potentially conscious. This thesis has enormous consequence for the study of the mind (1992: 132).

I now discuss how Boden attempts to show that Searle’s claims are incorrect, by attacking his position on two fronts.

6.1.1. Searle’s claim that intentionality is biologically grounded depends on bad analogies

Boden argues against Searle’s use of the argument by analogy, i.e. that intentionality is a biological phenomenon just like photosynthesis (Boden 1990: 92).

To understand Boden’s method here, it is critical to understand the use of analogy in philosophical argumentation. Analogical arguments follow inductive reasoning. When an analogy is given, the aim is to show that two or more things that are distinct can still be regarded as alike or similar in some respect. For example, a sparrow is very different from a car, but they are still similar in that they both move. Analogical reasoning does not claim that its conclusions are certain – rather, they hold with varying degrees of probability, i.e. “they are probably similar in one further way” (Vaughn 2006: 35, my emphasis). A strong argument by analogy is one where there are “more relevant similarities between the […] things compared” (ibid.); a weak argument by analogy is one where “several similarities are noted. But there are some unmentioned dissimilarities” (Vaughn 2006: 35-6).

Boden argues that Searle’s analogy is weak, since according to her intentionality cannot be understood to be on a par with “digestion, or photosynthesis, or mitosis, or miosis, or any other biological phenomenon” (Boag and Searle 2014: n.p.).

28 For a detailed discussion about ‘nonconscious beliefs being intentional’ consult Pierre Jacob “State Consciousness Revisited” in Consciousness and Intentionality: Models and Modalities of Attribution. (e.d). Denis Fisette (1999).


In her view, the reason for this is that we have a very clear idea of how photosynthesis and its related functions work, unlike the brain and understanding, about which we do not know much (Boden 1990: 92). As a result, Boden holds, Searle is illegitimately comparing intentionality with biological phenomena such as photosynthesis.

To make her point, Boden (1990) notes that we can delineate the products of photosynthesis (the sugars and starches) and show how these differ from other biochemical products, for example proteins. She claims we have firm knowledge about chlorophyll and its important role in photosynthesis (Boden 1990: 92). I agree with her that our understanding of chlorophyll tells us that it is a catalyst and not a raw material, and that we also know the process by which it performs its catalytic function. Boden then directs her argument to the comparison of brains and understanding, where she claims the case is very different (ibid.).

Boden here briefly mentions the various attempts that have been made to characterize intentionality: first she mentions Franz Brentano29, who says intentional states direct the mind on an object. Second, she acknowledges that Searle claims that they (intentional states) have intrinsic representational capacity or ‘aboutness’, and thirdly, she mentions Roderick Milton Chisholm (1967), who defines intentionality in logical terms (Boden 1990: 92). From her brief discussion, Boden concludes that intentionality is clearly an area where consensus is a problem. In comparison, the general molecular formula of carbohydrates in chemistry – CH2O30 – is uncontested and well defined, and so, in Boden’s view, Searle’s analogy is a weak one.

This then is the first prong of Boden’s argument - that the analogies that Searle uses to support his claims are unreliable, without direct evidence, “misleading” (Boden 1990: 92), mere conjecture and not a result of any form of reasoning process. They are, in short, bad analogies.

29 In his book Psychology from an Empirical Standpoint Brentano famously introduced the Intentionality thesis. “Every mental phenomenon is characterized by what the Scholastics of the Middle Ages called the intentional (or mental) inexistence of an object, and what we might call, though not wholly unambiguously, reference to a content, direction toward an object (which is not to be understood here as meaning a thing), or immanent objectivity. Every mental phenomenon includes something as object within itself...” (Brentano 1981: 88).
30 Formaldehyde, a naturally-occurring compound, is considered the simplest carbohydrate.


6.1.2. My Objections to Boden’s attack on Searle’s Analogy

I argue here that Boden’s attack on how Searle uses his analogy of intentionality and photosynthesis is problematic, by challenging her claim that we know a lot about photosynthesis, and very little about intentionality. My view is that Boden’s position is a result of her implicit adherence to the view that the physical is amenable to scientific investigation, whereas the mental is not.

6.1.2.1. A closer look at Photosynthesis

Are we really in a position to say we know everything about photosynthesis, even today, many years after Boden’s paper? I do not think so. Photosynthesis is normally defined as “the physico-chemical process by which photosynthetic organisms use light energy to drive the synthesis of organic compounds” (Govindjee 1999: 11). This definition, which is generally taught to children at school, neglects to acknowledge the huge amount of research still going on today. Current research continues to show that there is much about photosynthesis that we do not yet know and are only discovering now. I provide three examples:

In a 2015 article, “A Quantum Protective Mechanism in Photosynthesis”, Adriana Marais31, Ilya Sinayskiy, Francesco Petruccione and Rienk van Grondelle propose that the fast-relaxing spin of iron plays a protective role in photosynthesis by generating an effective magnetic field (2015: 1). The protective mechanism is, for them, a clear example of a quantum effect playing a macroscopic role vital for life (ibid.). Marais et al.’s work is a good example of how our understanding of photosynthesis has been extended by work in quantum physics.

A second example is from a 2014 article titled “Non-classicality of the molecular vibrations assisting exciton energy transfer at room temperature” by Edward J. O’Reilly and Alexandra Olaya-Castro. This paper shows that, in prototype dimers present in a variety of photosynthetic antennae, efficient vibration-assisted energy transfer32 on the sub-picosecond timescale and at room temperature

31 http://quantum.ukzn.ac.za/members/ms.-a-marais
32 In an article titled “Excitation Energy Transfer and Energy Migration: Some Basics and Background” the concept of energy transfer is described (Govindjee 1969). If there are two molecules close together, one with an absorption band at a wavelength shifted to longer wavelengths than the other, light energy absorbed by the one absorbing at the shorter wavelength is usually transferred to the one that absorbs at the longer wavelengths (ibid.). In other words, one molecule donates excitation energy, and the other accepts this energy. Research shows that this transfer probably takes place by a resonance mechanism, and is only properly describable in terms of quantum mechanics (ibid.).


can manifest and benefit from non-classical fluctuations of collective pigment motions (2014: n.p.). As such, the suggestion is that non-classical properties of vibrational motions assisting excitation and charge transport, photoreception and chemical sensing processes could be crucial in showing a role for non-trivial quantum phenomena in biology (ibid.). Hence, as the article suggests, photosynthesis, more than two decades after Boden’s paper, continues to be a fertile research area where new knowledge is being added.

My third example is an earlier research angle by Whitmarsh and Govindjee in the article “The Photosynthetic Process”, published in Concepts in Photobiology: Photosynthesis and Photomorphogenesis (1999). This research is directed at understanding the interaction between global climate change and photosynthetic organisms. It further fortifies my doubt about Boden’s claim that we conclusively know and understand photosynthesis. As the three examples I have presented suggest, there is still much we do not know about photosynthesis; and so, contrary to Boden, we can only conclude that we have relative certainty about it.

Boden’s claims about photosynthesis reveal what I think is an implicit bias in her work – that the natural sciences are able to give us indubitable facts about the world. This idea has been challenged by a number of philosophers, including Alan Chalmers (1999) in his book, What is this Thing called Science? Chalmers shows that our common-sense idea that science is knowledge based on the facts of experience (with ‘facts’ seen as absolute knowledge arrived at by means of experience) is incorrect. Rather, our scientific (and other) perceptions are influenced by the background and expectations of the observer, and judgements about the truth of observation statements depend on what is already known or assumed (Chalmers 1999: 17-18).

As I discuss in the next section in more detail, Searle himself mentions this bias (he calls it scientific materialism), although he does not explicitly link it to Boden’s argument, as I do here.



My point is that Boden’s claim that what we know about photosynthesis is based on indubitable facts about the world is questionable, and so her problematisation of Searle’s analogy is already shaky.

6.1.2.2. We do not know much about Intentionality

Unlike photosynthesis, the theory of intentionality, according to Boden (1990), is a philosophically controversial issue. She claims “we cannot even be entirely confident that we can recognize it when we see it” (Boden 1990: 92). Of course, at the time that Boden wrote her paper a lot of grey areas existed in the study of intentionality, as they still do today. However, I agree with Searle, who suggests we have now made significant inroads in our understanding of intentionality (Searle in Preston and Bishop 2002: 69). As a result, it is my contention that the theory of intentionality is not as problematic as Boden would like us to believe. We know more about intentionality than she thinks, or perhaps wants to acknowledge.

To reinforce my position: despite Boden acknowledging both Searle’s 1969 and 1983 contributions as “relevant” (1990: 92), she appears to circumvent the long history of research on intentionality that Searle draws from. In support of my view, one need only consider Ryan Hickerson’s The History of Intentionality (2007), which outlines the development of the concept of intentionality from Aristotle; its revitalisation by Brentano in Psychology from an Empirical Standpoint (1874); Edmund Husserl’s Logical Investigations (1900 and 1901); through to the less well-known Kazimierz Twardowski’s On the Content and Object of Representations (1894). In addition, Ausonio Marras’s anthology Intentionality, Mind, and Language (1972), a text that would have been available to Boden at the time she wrote her paper, considers the work of analytic philosophers (Chisholm, Nagel, and Hempel, for example) who approach the philosophy of mind via the concept of intentionality. Even though Boden may be right that the concept of intentionality is still being investigated and researched, her claim that it remains “philosophically controversial” (Boden 1990: 92), as compared to the certainty we have about photosynthesis, is, in my view, not quite accurate. A related point is that Boden over-emphasises well-acknowledged features of philosophy (debate) and of science (experimental proof and ‘certainty’); in the process, she under-estimates the fact that she is herself relying on a weak analogy, namely an analogy between science and philosophy and their methodologies.


In this section, I have argued that Boden’s claim that Searle’s analogy is weak is not as convincing as she would have it be, by showing that we actually know a lot more about intentionality than she assumes, and a lot less about photosynthesis than she claims. I now move on to the more serious claim she makes against Searle’s “positive” claim – that Searle’s position is based on unreliable intuitions.33

6.1.3. Searle’s claim that intentionality is biologically grounded depends on unreliable intuitions

Boden accuses Searle of appealing to ‘unreliable intuitions’ (1990: 92). Searle asserts, for example, that it is “intuitively obvious that the inorganic substances with which (today’s) computers are manufactured are essentially incapable of supporting mental functions” (Searle in Boden 1990: 92, my emphasis). He has reiterated this view more recently, claiming that it is “screamingly obvious to anybody who’s had any education” (Boag and Searle 2014: n.p.) that intentionality is biologically grounded.

These claims are based not on argument, but on intuition. As he admits:

I offer no a priori proof that this physical computer is not conscious any more than I offer a proof that this chair is not conscious. Biologically speaking, I think the idea that they might be conscious is simply out of the question (Searle 1997: 14).

One wonders why Searle would argue this way. He clarifies his approach in an article, “Twenty-One Years in the Chinese Room”, published in 2002 to correct what he regards as misconceptions of the view he had already expressed in his original article (Searle in Preston and Bishop 2002: 56).

The essence of what he says is that his argument that intentionality is possible only in neuroprotein is an empirical claim (ibid.). His view is that a system composed in its entirety of beer cans, for example, is not sufficient to cause consciousness and intentionality. This empirical claim is a claim about “how nature works” (Searle in Preston and Bishop 2002: 57).

33 According to Francis Cholle, in an article “What is Intuition, and How Do We Use It?”, intuition is a process that gives us the ability to know something directly without analytic reasoning, bridging the gap between the conscious and non-conscious parts of our mind, and also between instinct and reason. For example, “we are not like animals” is a common phrase; it tells us that the assumed difference between humans and animals is humans’ ability to reason with our instinctual impulses, and the unspoken message is that reason is a higher and better quality to possess. The thing is, not only are we like animals, we are animals. However, we are animals with the distinct advantage of having both instinct and reason at our disposal (www.psychologytoday.com/blog/the-intuitive-compass/201108/).


For Searle, it may be logically possible that intentionality can arise from beer cans, but this is not actually possible (Searle in Preston and Bishop 2002: 56-57). What he means is that, on the one hand, we know that human and animal brains are sufficient to cause consciousness (ibid.). On the other hand, we know that beer cans cannot do it. We are able to make such assertions, explains Searle, because we know that the right sort of chemistry is required to cause consciousness. Searle does admit, however, that there is an unresolved mystery; we do not yet know how brains produce it (ibid.).

Searle further explains that we generally understand computers as machines, but “computation as standardly defined does not name a machine process” (ibid.). The odd thing, according to Searle, is that the problem is not that computational processes are too machine-like to be conscious; rather, they are too little machine-like (ibid.). This follows from the fact that computation is defined in purely formal or abstract terms, viz., the implementation of a computer algorithm, and not in terms of energy transfer. According to Searle, computation does not name a machine process in the way that photosynthesis or internal combustion, which are machine processes, do. Photosynthesis and internal combustion necessarily involve energy transfers, something that computation does not. What is clear is that computation is the name of an abstract mathematical process that can be implemented with machines that engage in energy transfer; however, the energy transfer is not part of how computation is defined (ibid.). Already here we can see that Searle is showing that his analogy is not merely based on intuition, but on a reasoning process that shows the similarities and differences between silicon and neuroprotein.

For Searle, a failure to understand his position is a result of twin traditions:

on the one hand there’s God, the soul and immortality that says it is really not part of the physical world, and then there is the almost as bad tradition of scientific materialism that also says it’s not a part of the physical world (ibid.).

Scientific materialism is the view “that physical reality, as made available to the natural sciences, is all that truly exists” (Haught 2010: 48). According to Searle, the twin traditions mentioned above make the same mistake - they refuse to take consciousness on its own terms as a biological phenomenon.


Religious doctrine gave rise to the view that the body (which exists in physical reality) and the mind (or soul, which does not exist in physical reality) are split. This split is taken up throughout Western intellectual history. In the seventeenth century, for example, Descartes and Galileo made a sharp distinction between physical reality (science) and the mental reality of the soul (outside the scope of scientific research). Dualism became a serious problem in the twentieth century because “it seems to place consciousness and other mental phenomena outside the ordinary physical world and thus outside the realm of natural science” (Searle 1997: 6).

The proposal by Searle is to abandon such dualism (substance dualism) and to start with a causal account of consciousness, beginning with the assumption that “consciousness is an ordinary biological phenomenon comparable with growth, digestion, or the secretion of bile” (ibid.).

6.1.4. My Reply to Boden’s attack on Searle’s Intuition Claim

My view is that even though Boden is correct in claiming that the material basis of intentionality still requires further investigation, claiming that Searle’s position is a mere intuition (Boden 1990: 92) is misleading. Searle actually presents a clever argument, which I show by reconstructing it as follows:

Humans (and animals): neuroprotein; energy transfer; consciousness and intentionality.

Beer cans: no neuroprotein; no energy transfer; no consciousness and intentionality.

Computers: no neuroprotein; no energy transfer; no consciousness and intentionality.

What Boden forgets when she accuses Searle of relying on unreliable intuitions is that, with his example of the beer cans, and of the chair that I mentioned above34, Searle is actually performing a reductio ad absurdum.

34 See section 6.1.3. where I discuss whether Searle’s claim that intentionality is biologically grounded depends on unreliable intuitions.


David Morrow and Anthony Weston define a reductio ad absurdum, that is, a “reduction to absurdity” (2011: 141), as follows:

Arguments by reductio (or “indirect proof”) establish their conclusions by showing that assuming the opposite leads to absurdity: to a contradictory or silly result. Nothing is left to do, the argument suggests, but to accept the conclusion (ibid.).

When Searle says that he need not offer a priori proof that a physical computer is not conscious, he does so by comparing this to being asked for proof that a chair is conscious (Searle 1997: 14). Since everyone would agree that such a request is ridiculous, they should, according to Searle, agree that asking for proof that computers are conscious is equally silly. Similarly, Searle uses the example of the beer cans to show that a request to prove that they (beer cans) are conscious is equally silly. In the same way, saying that intentionality is not a biological process like photosynthesis is, for Searle, absurd.

In this section, I looked at Boden’s accusation that Searle relies on unreliable intuitions to support his claim that intentionality is biologically grounded. I have shown that, contrary to Boden’s assertions, Searle is strategic in outlining his argument: he uses a reductio ad absurdum and is not merely relying on blind intuition. As such, it is my view that Boden is unsuccessful in depicting Searle’s claims as ‘unreliable intuitions’.

6.2. Interrogating the “Negative” Claim: Formal-computational theories cannot explain understanding

Searle’s negative claim, “that formal-computational theories cannot explain understanding” (Boden 1990: 94), is difficult to rebut in Boden’s view, and so she devises a two-pronged attempt to expose the problems she sees with it. The first prong is aimed directly at the example of the Chinese Room, whilst the second prong attacks the background assumption upon which the Chinese Room depends, the point that “computer programs are pure syntax” (ibid.).

In terms of the first prong, Boden begins by explaining that Searle’s response to the Robot reply is a claim for victory. That victory claim is based, firstly, on the understanding that cognition is not solely a matter of formal symbol manipulation but requires a set of causal relations with the outside world (ibid.).


Secondly, in Boden’s view, Searle’s claim for victory is based on pointing out that to add movement capacity to a computational system is not to add intentionality (Boden 1990: 94-95). Boden rejects Searle’s argument as a rebuttal of the robot reply, since in her view “it draws a false analogy” (1990: 95). As I show, Boden notes that computationalists do not ascribe intentionality to the brain (as she reads Searle as implying) and so would not credit it with full-blooded intentionality (Boden 1990: 96). She therefore points out that Searle’s description of the robot’s pseudo-brain as understanding involves a category-mistake, namely treating the brain as the bearer, as opposed to the causal basis, of intelligence (ibid.).

Moving on to the second prong, I investigate what Boden calls Searle’s ‘background assumption’ that programs are pure syntax. She employs her ‘English reply’ to argue that the “…instantiation of a computer program, whether by man or by manufactured machine, does involve understanding - at least of the rule-book” (Boden 1990: 97, emphasis mine) (in English). I discuss the Luminous Room experiment by the Churchlands (1990: 35), an experiment which Harnad says “rest on a false analogy”35, to investigate further support for Boden’s position. Their objection disputes Searle’s ‘anchoring’ premise that “syntax by itself is neither constitutive of nor sufficient for semantics” (Searle 1990: 27). As I will show, the Churchlands complain about the “question-begging character of Searle’s axiom 3”36 (1990: 34), in relation to his conclusion that “programs are neither constitutive of nor sufficient for minds” (ibid.). In discussing the Churchlands’ experiment, I agree with Searle that the analogy they make is incoherent (Searle 1990: 31), and that it does not strengthen Boden’s position.

Boden bases her position on the computer scientist B.C. Smith’s argument, which she believes proves that computer programs are not all syntax and no semantics (Boden 1990: 102). Sloman’s (1986a; 1986b) “discussion of the sense in which programmed instructions and computer symbols must be thought of as having some semantics, howsoever restricted” (ibid.) is also of significance for her. I argue that Searle’s position cannot be viewed as a hunch (as Boden suggests),

35 See 6.2.1.2 for an explanation of the fallacy.
36 The Churchlands are referring to Searle’s 1990 article “Is the Brain’s Mind a Computer Program?”, where Searle gives his argument as four abbreviated axioms as well as four conclusions (1990: 27-29).


and that his view that programs are pure syntax remains intact despite several criticisms, with special reference to Chrisley’s (1995) article “Weak Strong AI: An elaboration of the English Reply to the Chinese Room”.

In his paper, Chrisley characterises Boden’s response as “weak strong AI” (WSAI), and then discusses several possible objections to that position. The objections proposed by Chrisley seek to show that the Chinese room argument “does not argue against WSAI” (1995: 2) and so Boden’s position stands as a strong threat to Searle’s Chinese room. I conclude that despite Chrisley offering such robust reinforcements to Boden’s position against Searle, Searle’s position on the prospects of Strong Artificial Intelligence (SAI) remains plausible and can be defended against the WSAI thesis.

6.2.1. The First Prong: Attacking the Chinese Room argument via the Robot Reply

6.2.1.1. The Victory Argument37

Against the Robot reply, Searle (in Boden 1990) asks us to imagine a robot that, instead of having a computer program to make it work, contains a miniaturized version of Searle. This smaller version of Searle is what he calls Searle-in-the-robot, and is equipped with a new rule book and acts similarly to Searle-in-the-room (Searle in the Chinese room squoggling). The miniaturized Searle is fitted with visual and audio equipment attached to the robot’s ears and eyes (Searle in Boden 1990: 76). This robot is assumed to be capable of doing and recognizing what native Chinese speakers would do – the idea is that “…it can recognize raw beansprouts and, if the recipe requires it, toss them into a wok as well as the rest of us” (Boden 1990: 95).

Corresponding to Searle-in-the-room, Searle’s view is that Searle-in-the-robot does not know about the wider context (ibid.), i.e., he does not have any connection to, or understanding of, the environment outside the skull of the robot in which he is housed. As such, Searle says the robot cannot be credited with any understanding of the issues outside the realm of being a robot (Searle in Boden 1990: 95),

37 I propose this name to refer to what Boden claims to be Searle’s “smug” position.


just like Searle, who understands English but does not have a clue about Mandarin Chinese.38 As a result, Searle claims that the robot “…has no intentional states of the relevant type” (ibid.) and so claims victory over the robot reply.

6.2.1.2. Searle’s Wrong Turn: Illegitimately dismissing the Robot Reply

According to Boden, Searle draws a false analogy between the Searle-in-the-robot example and the claims made by computational psychology (1990: 95). As she explains, computational psychology focuses on the mind: the mind “considered as an informational [and] not an energy system” (Boden 1988: 5). Its practitioners (computational psychologists as opposed to computer-using psychologists) share three ways of theorizing (Boden 1988: 6):

1) They adopt a functionalist approach to the mind, and claim that every psychological phenomenon is,

assumed to be generated by some effective procedure, some precisely specifiable set of instructions defining the succession of mental states within the mind (Boden 1988: 5).

2) They conceive of the mind as a representational system, […] whereby mental representations are constructed, organized, interpreted, and transformed (ibid.). Here mental phenomena are about “having a meaning, semantic content, as being directed upon some object or imaginary object outside the mind itself” (ibid.), and;

3) They think about neuroscience in a computational way and so question the logical or functional operations that might be embodied in neural networks (Boden 1988: 6). In other words,

what the brain does that enables it to embody the mind is the question, not what it is in itself as a physical system (ibid., emphasis mine).

Boden’s point is that Searle is mistaken in thinking that computationalists are saying that Searle-in-the-robot is acting out the function of the human brain (Boden 1990: 95), since she claims that most computationalists do not credit intentionality to the brain (ibid.). Even though she grants that a few computationalists do ascribe intentionality to the brain in a limited way (ibid.), for Boden:

38 Standard Mandarin is usually called the “Han language”, Hanyu, in China, but different names like huayu or zhongwen are also used. In the People’s Republic of China (PRC) the term Putonghua (meaning “common language”) is frequently used, while in Taiwan it is known as guoyu, “national language”. The term “Chinese” normally refers to the standard language, even though the term Mandarin is often used to refer to that standard language when we want to distinguish it from the different dialects. (http://linguese.com/blog/how-many-languages-are-spoken-in-china)


[c]omputational psychology does not credit the brain with seeing beansprouts or understanding English, intentional states such as these are properties of people, not brains (Boden 1990: 96).

For Boden, both Searle and the computationalists generally believe that representations and mental processes are embodied in brains, but they credit sensorimotor capacities and propositional attitudes39 to the person as a whole (ibid.). Boden concludes that Searle’s view of the system inside the robot’s skull as that which can understand English does not truly represent what computationalists say about the brain (1988: 244).

As I pointed out in the introduction to this section, Boden claims Searle draws a false analogy here. A false analogy “…consists in assuming that because two things are alike in one or more respects, they are necessarily alike in some other respect” (http://www.txstate.edu/philosophy/resources/fallacy-definitions/Faulty-Analogy.html). An example is:

Because human bodies become less active as they grow older, and because they eventually die, it is reasonable to expect that political bodies will become less and less active the longer they are in existence, and that they too will eventually die (ibid.).

But this is not true, and so the analogy is false. How then does Boden claim that Searle is drawing a false analogy? In her view, Searle draws a false analogy between the Searle-in-the-robot example and the claims made by computational psychology.

In her view, the system inside the robot’s skull can understand English, but a brain by itself cannot. Her point is to underline that computationalists (whom Searle is attacking) do not say the brain can understand English (or anything else). As Donald Berkich (2010) explains, ‘the brain’ does not understand English, Chinese, or any language. Rather, the brain is the organ that facilitates a person’s understanding of English or Chinese. Correspondingly, the entire person sees a red apple, not the person’s eye; the eye merely transmits information about the light wavelengths, and so on (2010: n.p.).

39 For example, the belief that snow is white. According to Rainer Bäuerle and M. J. Cresswell, the phrase “propositional attitude” was used by Bertrand Russell “to cover such “mental” things as beliefs, hopes, wishes, fears and the like. […] The grammatical mark of an expression for a propositional attitude in English is that it can take a that-complement” (1989: 491). Simply put, propositional attitudes are assumed to be fundamental units of thought, and their contents, being propositions, are either true or false.


For Boden, such a position is a result of Searle making a category-mistake, that is, treating the brain as the bearer of intelligence rather than the causal basis of intelligence.

Before proceeding, let me explain what a category-mistake is, so that it becomes clearer why Boden says Searle’s argument is unacceptable.

6.2.1.3. Illustrating a Category-mistake

Gilbert Ryle outlines what he calls ‘the official theory’ of the mind-body problem at the time in his article “Descartes’ Myth” (Ryle in Chalmers 2002: 34). According to Ryle, the official theory – René Descartes’ substance dualism – is a mistake of a special kind (ibid.); he calls it “a category-mistake” (ibid.).

It represents the facts of mental life as if they belonged to one logical type or category (or range of types or categories), when they actually belong to another (ibid.).

To specify, Ryle illustrates what is meant by a category-mistake by enumerating a series of examples. His first illustration is of a foreigner visiting Oxford or Cambridge and wanting to know where the university is, despite having seen the several components that, when put together (not just physically), constitute the University (Ryle in Chalmers 2002: 35). The second illustration is of a child witnessing a soldiers’ march-past (of battalions, batteries, squadrons, and so on) who nonetheless asks to see the division (ibid.); and the last is of a foreigner watching a game of cricket (bowlers, batsmen, fielders, umpires, and so on) for the first time and expecting to see someone whose role is to exercise esprit de corps (ibid.).

Similarly, the University of Johannesburg comprises the Doornfontein, Bunting, Kingsway and Soweto Campuses, Madibeng, the Library, the Art Galleries, student residences, sport facilities, the Alumni Network offices, and so on. It is a category-mistake to fail to see how all of these, put together, constitute the University: one cannot ask where the University is as if the University of Johannesburg were something over and above those entities. As Ryle points out, a common feature of category-mistakes is that they,

are those made by people who are perfectly competent to apply concepts, at least in the situations with which they are familiar, but are still liable in their abstract thinking to allocate those concepts to logical types to which they do not belong (Ryle in Chalmers 2002: 35).


6.2.1.4. Back to Boden

Basden (2007) sympathises with Boden’s argument that Searle makes a category-mistake. He believes Boden has a point in suggesting that Searle makes a category-mistake, in that, “…in a person, it is not the brain that has intentionality but the person. Therefore, Searle’s argument that Searle-in-the-room character cannot possess intentionality could be written off as irrelevant because it is likened to a brain rather than a person” (Basden 2007: n.p.). Boden’s position seems very strong here, in my view.

In an attempt to make her position even stronger, Boden anticipates an objection that could be directed at her. She notes that she could be accused of contradicting herself by claiming that one cannot ascribe intentionality to brains while implicitly doing so herself (Boden 1990: 96). She admits that this issue could be gleaned from her earlier assertion – her position that brains effect ‘stupid’ component-procedures (ibid.). As I will explain later, for her the “instantiation of a computer program, whether by man or by manufactured machine, does involve understanding – at least of the rule book” (Boden 1990: 97). To defend herself in the event of such an objection, she proposes two inter-related points to clarify her position on intentionality:

1) Boden explains that basic information-processing functions like DOG-detecting and synaptic inhibition can be explained by the biochemistry of the brain (Boden 1990: 96-97). The idea of stupidity, in her view, is inappropriate in relation to them (ibid.). Such basic information-processing functions, however, “could properly be described as ‘very, very, very …stupid’” (Boden 1990: 97). Consequently, the fact that stupidity is invoked in discussing them implies that intentional language is applicable to brain processes (ibid., emphasis mine). She earlier asserted that

stupidity is virtually a species of intelligence. To be stupid is to be intelligent, but not very (a person or a fish can be stupid, but a stone or a river cannot) (Boden 1990: 96).

2) Deduced from the claim above, Boden says her argument is not that “intentionality cannot be ascribed to brains, but that full-blooded intentionality cannot” (1990: 97), nor is she saying “brains cannot understand anything at all, in howsoever limited a fashion, but that they cannot (for example) understand English” (ibid.); rather, she is


re-emphasising her earlier assertion that “a few computationalists do ascribe some degree of intentionality to the brain (or to the computational processes going on in the brain)” (ibid.).

My concern here is that Boden seems to misunderstand the point. It may seem that Searle is treating the brain as the bearer of intelligence rather than the causal basis of intelligence, but this is a result of the necessity of having Searle in the robot’s head in order to report understanding or the lack of it of Chinese: “Searle must be the homunculus for the CR argument to even get going” (Chrisley 1995: 3). The nature of the thought experiment is such that this cannot be escaped. But this does not imply that Searle is not aware of this problem, nor that he cannot still assert that the robot “…has no intentional states of the relevant type” (ibid.).

In this section, I interrogated Boden’s attack on Searle’s negative claim that formal-computational theories cannot explain understanding, although I have so far only discussed the first prong: the attack on the Chinese Room via the robot reply. In discussing it, I started by looking at what I termed the victory argument. I then investigated what is considered to be Searle’s illegitimate dismissal of the robot reply, which Boden points out to be based on a false analogy. My last point was an appraisal of Boden’s accusation that Searle’s description of the robot’s pseudo-brain as understanding English involves a category-mistake.

To repeat an earlier point, Searle says

in principle, the man can internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols” (Searle in Boden 1990: 73-76). He still cannot get semantics from syntax (ibid.).

For Searle, on the one hand, computer operations are “formal” and respond only to the physical form of the strings of symbols, not to the meaning of the symbols (Cole 2014: n.p.). Minds, on the other hand, have states with meaning, mental contents, and respond accordingly. We can “respond to signs because of their meaning, not just their physical appearance. In short, we understand” (ibid.).


I now move on to scrutinize the second prong of Boden’s attack: the question of syntax versus semantics and what Boden calls the English reply. The English reply is, in short, Boden’s claim that there is some understanding in the Chinese room, namely Searle-in-the-room’s understanding of English, which he needs in order to carry out the squoggling procedure. By understanding, she refers to two things, viz., the understanding required to recognize symbols and the understanding of English required to read the rule book (Boden 1990: 97). In the next section, I show and test the English reply’s “bearing on Searle’s background assumption that formal-syntactic computational theories are purely syntactic” (ibid.) by considering Boden’s claims directly, and then use Chrisley’s paper on weak-strong AI (WSAI) to develop my position.

6.2.2. The Second Prong: Syntax versus Semantics and the English reply

This reply by Boden is in two parts and forms what she calls the “crux” (ibid.) of her argument against Searle’s negative claim. In the first part, Boden claims that the instantiation of a computer program, carried out by a human or by an artefact, does involve understanding – at least of the instructions (the rule book) [in English] (1990: 97). In other words, her main claim against Searle’s negative claim is that Searle-in-the-room is someone who ‘understands’ at least the language of the instructions (English) (ibid.), in order to carry out the squiggle-squoggle process. As a result, Boden can claim that Searle’s conclusion – that there is NO understanding in the Chinese room – is faulty, and so it seems that Boden here deals a fatal blow to Searle’s position.

For Searle, the SAI thesis of Newell and Simon (1963), which claims that “the kind of cognition they claim for computers is exactly the same as for human beings” (Searle in Boden 1990: 72), is wrong. In his view, the programmed computer understands “[…] exactly nothing” (ibid.); in other words, “computer understanding is not just (like my understanding of German) partial or incomplete; it is zero” (ibid.). However, is that really what Searle is saying? As I have mentioned before, Searle actually acknowledges Boden’s point, but argues that this does not affect the strength of his position. He says:

I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on. I don't understand anything except the rules for symbol manipulation (Searle in Boden 1990: 77).


Searle is arguing that formal symbol-manipulation does not offer sufficient grounds to claim semantics (Searle in Boden 1990: 70-1). He bases his view on what he means by “understanding” (Searle in Boden 1990: 70). For Searle, “as long as a program is defined in terms of computational operations on purely formally defined elements” (Searle in Boden 1990: 71), we cannot expect from a machine answers that resemble the kind of answers that could be given by an English-speaking Searle. Searle says:

whatever purely formal principles you put into a computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything (ibid.).

I now respond to Boden’s very strong argument against Searle – to reiterate, the position that Searle-in-the-room is someone who ‘understands’ at least the language of the instructions (English) (ibid.), which he needs in order to carry out the squiggle-squoggle process. My contention is similar to that of Basden (2007), who points out that Boden misses that,

[w]hat Searle meant by semantics was the semantics of Chinese characters, the squiggles and squoggles, and not the procedural element of the rule-following program (2007: n.p.).

To build on this idea, it seems to me that Searle’s position hinges on an understanding of ‘understanding’ different from Boden’s, and so it is to that I now turn.

6.2.2.1. Understanding ‘understanding’

Overskeid explains that the word ‘understanding’ is often used to describe the attainment of knowledge, and sometimes to describe knowledge itself (2005: 601). According to him, several authors propose varying descriptions and definitions of understanding, mostly based on the ability to explain, to predict, to respond appropriately, and so on (ibid.). He notes, however, that none of these descriptions and definitions has won general acceptance (ibid.). Searle’s work on understanding is therefore part of a highly contested area in philosophical debate.

Searle has been criticised for his understanding of understanding. Overskeid (2005), for example, claims that Searle only tells us what understanding is NOT and not what it IS. Although I agree that Searle’s 1980 paper did not offer much on what he meant by understanding, he has since published several works to correct that.


For example, “Consciousness, Unconsciousness and Intentionality” (1991), The Rediscovery of the Mind (1992), The Mystery of Consciousness (1997), and more recently “Twenty-one years in the Chinese Room” (2002), among several other works he has published. In what follows, I draw upon these and other works to show what Searle understands by understanding.

In my view, Searle uses the word ‘understanding’ in two distinct senses. Boden, on the other hand, seems to me to conflate these two senses of ‘understanding’ postulated by Searle into one, and then uses this conflation to show that Searle’s position is flawed. I start this section by looking at Searle’s position in more detail, then move on to a critical appreciation of Boden’s view, before elaborating in more detail on my own view.

For Searle, there are clear cases in which understanding literally applies and cases in which it does not apply (in Boden 1990: 71). This is, as he explains, his starting point, setting the parameters for his critics regarding how he uses the word in his thought experiment (ibid.). Searle claims he ‘is not interested’ in those critics who point out the various degrees to which understanding can be applied, or those who think understanding is not a simple two-place predicate (ibid.); he merely outlines a specific framework for the word understanding.

Firstly, Searle gives the example of his understanding of languages (English he understands, French he understands to a lesser degree, German to a still lesser degree, and Chinese not at all) (ibid.), something which is an empirical fact in his case (Mooney 1997: 7). In the same breath, Searle contends that artefacts such as cars, calculators and kegs are not to be included when we attribute understanding. In his view, artefacts “understand nothing: they are not in that line of business” (Searle in Boden 1990: 72).

So, a pertinent question arises: why do we describe artefacts as having understanding? We could say, for example, that the Sony-owned Hawk-Eye (a goal-line technology) ‘knows’ that a goal has been scored, without actually meaning that the Hawk-Eye really ‘knows’ anything. According to Searle, what we do in these kinds of cases hinges on us extending our own intentionality to tools (ibid.), but only in a metaphorical sense.


To exemplify, he gives the example of doors with sensors at some shops, where the door opens automatically as we walk towards it. Generally, we would say that the door ‘knows’ when to open. Of course, we know that the door actually opens because a photoelectric cell detects our approach and sends a signal to the door’s mechanism. The sense in which Searle understands English is not at all the same sense in which the door ‘understands’ the signal from the photoelectric cell, in Searle’s estimation (ibid.). What is significant for Searle is that a misunderstanding arises when the metaphorical sense in which the door or the Hawk-Eye is said to understand is assumed to be the same as a person’s understanding of his or her native language. For Searle, “it is obvious that a computer does not understand the way a human understands” (Overskeid 2005: 611), specifically because a “nonhuman thing or creature would need to possess the ‘causal powers’ of the human brain to understand” (ibid.).
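To make the contrast concrete, consider a minimal sketch (my own illustration in Python, not drawn from Searle or Boden) of such a door controller. The names used here (PhotoelectricCell, DoorMechanism, door_controller) are invented for the purpose of the example; the point is only that the door’s ‘knowing’ when to open is exhausted by a causal mapping from a sensor reading to an action, with nothing resembling Searle’s first-person, semantic sense of understanding.

# A hedged, minimal illustration of the automatic door example.
# Nothing here is from Searle or Boden; the class and method names are invented.

class PhotoelectricCell:
    """Reports whether the light beam across the doorway is interrupted."""
    def __init__(self):
        self.beam_interrupted = False  # set True when someone approaches

    def detects_person(self) -> bool:
        return self.beam_interrupted


class DoorMechanism:
    """Opens or closes the door; it has no states that are 'about' anything."""
    def __init__(self):
        self.is_open = False

    def open(self):
        self.is_open = True

    def close(self):
        self.is_open = False


def door_controller(cell: PhotoelectricCell, door: DoorMechanism) -> None:
    # The whole of the door's "knowing when to open" is this causal rule.
    if cell.detects_person():
        door.open()
    else:
        door.close()


if __name__ == "__main__":
    cell, door = PhotoelectricCell(), DoorMechanism()
    cell.beam_interrupted = True   # someone walks towards the shop
    door_controller(cell, door)
    print(door.is_open)            # True - yet nothing here 'understands' anything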

Jeffrey Whitmer (1983: 194-5) gives a very useful summary of Searle’s position drawn from Searle’s (1982) “The Myth of the Computer” here:

1. Brain processes cause mental phenomena.
2. Mental states are caused by and realized in the structure of the brain. So,
3. Any system that produces mental states must have (causal) powers equivalent to those of the human brain.
4. Digital computer programs by themselves are never sufficient to produce mental states. So,
5. The way the brain produces a mind cannot be by simply instantiating a computer program. So,
6. If you want to build a machine to produce mental states, then it cannot be designed to do so solely in virtue of its instantiating a certain computer program.

In 1984, when presenting his second Reith lecture, titled Beer Cans and Meat Machines, Searle explains in more detail. He says “understanding a language, or, indeed, having mental states at all, involves more than having just formal symbols. It involves having an interpretation, or a meaning attached to those symbols” (1984: n.p., emphasis mine). Searle is happy to admit that we understand the questions directed at us in a language that we are familiar with (ibid.). For example, if I am asked about my favourite colour, or my goals for the year, I understand the question as it is posed in English, and can formulate a fitting response in English. In Searle’s view, I answer the given questions on the basis that they are expressed in symbols whose meanings I know and which are meaningful to me (ibid.).


To understand, for Searle, presupposes intentionality, semantics, or meaning, as well as true or valid knowledge (Overskeid 2005: 610). In the Chinese Room, Searle may be churning out appropriate replies in Chinese, and he may understand English in order to be able to use the rule book, but he still has no way of attaching meaning to any of the elements (squiggles) (ibid.).

Searle’s claim, then, is that computation alone cannot, in principle, give rise to genuine cognition (Preston and Bishop 2002: 19). Recall that to be a digital computer is simply to have the capacity to do computations, where computation is defined as the manipulation of symbols in accordance with purely formal or syntactical rules (ibid.). As such, something

that is only computing cannot be said to have access to or know or understand the ‘content’, the semantic properties (meaning, interpretation) of the symbols it happens to be manipulating. For the computer, as it were, what it manipulates are ‘just formal counters’ (Searle 1982a: 4), not symbols (ibid.).

This distinction is very clearly explained by Overskeid (2005: 598), whose example I adapt to make the same point. Imagine two women – Rachel and Lillian – looking at the chevron patterns at the Great Zimbabwe ruins. Rachel believes the chevron patterns are signs in some unknown language. As for Lillian, she believes those patterns are mere cracks that are not a result of human action. When Rachel begins to wonder what those chevron signs mean, her reaction will be ‘I do not understand them.’ Lillian will not have this problem, since she does not think there is anything to be understood; she views those signs as mere cracks, communicating no kind of meaning or semantics (ibid.). This is what Searle is highlighting in the above quotation – for the computer, like Lillian, there is no understanding to be had, since it is working with formal counters (like Lillian’s cracks) and not symbols.
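The point that a program traffics only in ‘formal counters’ can also be made concrete with a small sketch. The following Python fragment is my own illustration, not anything given by Searle or Boden, and the rule book it encodes is invented: it pairs input strings with output strings purely by pattern, the way Searle-in-the-room pairs squiggles with squoggles. Nothing in the program represents what any of the strings mean, which is precisely the sense in which its operation is syntax without semantics.

# A minimal, hypothetical "rule book": purely formal input-output pairings.
# The program matches character shapes; it stores nothing about their meaning.
RULE_BOOK = {
    "你好吗": "我很好",          # paired by form alone, as far as the program is concerned
    "你叫什么名字": "我叫王小明",
}

def chinese_room(input_symbols: str) -> str:
    """Return the output string the rule book pairs with the input string.

    The lookup is driven entirely by the shape (identity) of the characters;
    no meaning, reference, or 'aboutness' figures anywhere in the procedure.
    """
    return RULE_BOOK.get(input_symbols, "请再说一遍")  # default reply, also unanalysed

if __name__ == "__main__":
    print(chinese_room("你好吗"))  # prints an apt reply, yet nothing here knows what was 'said'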

So Searle is using two senses of the word "understanding": one that he attributes to humans (and other non-human animals), where understanding involves a subjective experience, what he calls intentionality, semantics, or meaning (Overskeid 2005: 604); and a second, minimal sense of the word that we attribute metaphorically to machines like computers, electric doors and goal-line technologies like Hawk-Eye. For Searle, then, the Strong Artificial Intelligence (SAI) thesis is


false. Specifically, he rejects the idea that the appropriately programmed computer really is a mind, in the sense that a computer, "solely by virtue of running the right program" (Chrisley 1995: 1), can literally be said to understand and have other cognitive states (Searle in Boden 1990: 67).

John Preston notes that on Searle's view, "computers therefore cannot be credited with understanding the rules they apparently follow, or the programs those rules compose, or the symbols they manipulate" (Preston and Bishop 2002: 19). As Preston notes, for Searle their "states entirely lack what philosophers call intentionality, 'the feature of certain mental states by which they are directed at or about objects and states of affairs in the world'" (ibid.). For Searle, those who want to rally behind other explanations of understanding are "misunderstanding" (Searle in Boden 1990: 71) his project.

Boden, in my view, seems to combine Searle's two senses of understanding into one by means of her argument that "the instantiation of a computer program, whether by man or by manufactured machine, does involve understanding - at least of the rule-book" (1990: 97). If one looks closely, Boden has conflated what Searle took time to distinguish, viz., human understanding and the metaphorical ascription of understanding to computers.

Searle points out that in the Chinese Room he can manipulate Chinese symbols, but he cannot attach meaning to those symbols (1990: 26, emphasis mine). Similarly, just as Searle cannot come to understand Chinese solely by executing an algorithm, a digital computer manipulating formal symbols cannot be said to understand solely on the basis of running an algorithm (ibid.). For Searle,

just manipulating the symbols is not by itself enough to guarantee cognition, perception, understanding [...] and so forth. And since computers, qua computers, are symbol-manipulating devices, merely running the computer program is not enough to guarantee cognition (ibid.).

Basden argues that we can generally agree that Searle-in-the-room does not have genuine understanding of Chinese and that, by analogy, "a computer running a symbol-manipulating program can never understand the knowledge level content of the program" (Basden 2007: n.p.).

Recall that Searle insists a computer program, by definition, has syntax and no semantics. He states:

[f]ormal symbols by themselves can never be enough for mental contents, because the symbols, by definition, have no meaning (or interpretation, or semantics) except insofar as someone outside the system gives it to them (Searle 1989: n.p.).

One can ask at this point, in defence of Boden's position: is Searle justified in having two ways of understanding "understanding"? As Cole points out, the Chinese room argument raises many issues, and the prospect of settling them remains a pipe dream "until there is consensus about the nature of meaning, its relation to syntax, and the biological basis of consciousness" (2014: n.p.), since significant disagreements about the processes that create meaning, understanding, and consciousness persist (ibid.). The question thus remains as to whether Searle is justified in holding two senses of the concept of understanding. One theorist who argues that Searle is NOT justified in using these two different senses of understanding is Geir Overskeid (2005). Overskeid argues that "understanding probably does not presuppose the causal powers of the human brain, and that computers can have intentionality in Searle's sense of the word" (Overskeid 2005: 595). Searle's position is quite different, as I will show in the next section.

6.2.2.2. Understanding and the Causal Powers of the Human Brain

Searle (1990) asserts that the brain does not merely "instantiate a formal pattern or program (it does that, too) but it also causes mental events by virtue of specific neurobiological processes" (1990: 29). According to him, brains are specific biological organs, and "their specific biochemical properties enable them to cause consciousness and other mental phenomena" (ibid.). From this we can see that Searle's description clearly excludes artefacts from having causal powers, since they lack the biochemical properties that brains have.

It is important to note that Searle distinguishes “between internal causes of the brain, and the impact of the external world” (Whitmer 1983: 201). I adapt an example that Whitmer (1983) gives to explain. We can hallucinate about a giraffe or we can


actually go to the Lion and Safari Park40 and see one. The "effects" (ibid.) of the hallucination and of directly looking at the giraffe in a park are different: the latter involves the external world, while the former may involve other neural stimulation, for example when brain activity is altered by certain drugs. However, the internal mental states encompass precisely the same intentional state (ibid.). Thus Searle's (1980) perspective that

the operation of the brain is causally sufficient for intentionality, and that it is the operation of the brain and not the impact of the outside world that matters for the content of our intentional states, in at least one important sense of 'content' (Searle in Boden 1990: 84),

is well founded. Searle (2002) asserts that, unlike brains, an implemented program "…is insufficient by itself to cause mental states because the program is defined independently of the physics of its implementation" (2002: 54). He is clear that any causal power the machine might have to cause consciousness and intentionality would have to be a consequence of the physical nature of the machine; but a program, qua program, has no physical nature (ibid.). As such, in his view, only that which has the right biological material can cause intentionality.

Returning to Overskeid: he argues two things. First, that understanding does not presuppose the causal powers of the human brain, because "rats, cats or birds" understand (Overskeid 2005: 612). However, as I have mentioned previously, Searle's point is not that the human brain is the deciding factor, but rather that neuroprotein is. So the first part of Overskeid's argument fails. Second, Overskeid claims that understanding does not presuppose language, and that Searle's argument therefore fails (ibid.). However, Searle nowhere claims this, and in acknowledging that non-human animals can have intentionality (as mentioned previously), he is implicitly acknowledging that language is not necessary for understanding. Here again, Overskeid's argument fails. As such, he seems to be misunderstanding Searle. Searle, as pointed out earlier, is not saying that non-human animals do not actually understand, or that language is necessary for understanding,

40 A wilderness park situated in the Hartbeespoort, Magaliesburg and Cradle of Humankind area in the North West province of South Africa.


but rather that understanding depends on neuroprotein, and the robot does not have the intentional states of the relevant type.

6.2.2.3. Syntax and semantics41

Searle's basic premise is that a computer program is:

purely formal in nature: the computation it specifies is purely syntactic and has no intrinsic meaning or semantic content to be understood (1990: 98).

Boden disputes this claim and suggests replacing it with another premise: "if computer programs are not concerned only with syntax, then the English reply may be relevant" (1990: 99). She addresses the new question by going into elaborate detail about the nature of computer programming. According to Cole, computer programs are "formal" in that "they respond only to the physical form of the strings of symbols, not to the meaning of the symbols" (2014: n.p.).
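To make Cole's point concrete, consider the following minimal sketch in Python. It is my own illustrative example (the "rule book" entries and the answer function are hypothetical), not drawn from Searle or Boden: the rules pair input shapes with output shapes, and nothing in the program depends on what, if anything, those shapes mean.

```python
# A purely "formal" symbol manipulator: it responds only to the shape of the
# input string, never to its meaning. The rule entries are hypothetical.
RULE_BOOK = {
    "squiggle-squoggle": "squoggle-squiggle",
    "squiggle-squiggle": "squoggle-squoggle",
}

def answer(symbol_string: str) -> str:
    """Return whatever shape the rule book pairs with the input shape."""
    # No step here interprets the symbols; matching is by form alone.
    return RULE_BOOK.get(symbol_string, "squiggle")

print(answer("squiggle-squoggle"))  # a correct-looking reply, with nothing understood
```

On Searle's view, adding more and better rules to such a table changes nothing of philosophical importance; on Boden's, the fact that the procedure is executed at all already gives it a minimal, causal "semantics".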

I agree with the point that Boden makes regarding the fact that a program does follow some specified procedure (1990: 99). As Basden explains, Boden's claim is that "the semantics of the syntax of the program is that it follows the rules" (2007: n.p.). This is in keeping with Boden's claim that:

[…] a computer program is a program for a computer: When a program is run on suitable hardware, the machine does something as a result (1990: 99).

This engages the notion of the machine doing "something"; that is, the idea that the computer, after being given instructions, obeys them. Such a claim results from our general understanding that a written computer program facilitates a specified activity (ibid.). To explain: when a computer is programmed to execute a specific algorithm or procedure, the activity that results is not a mere formal pattern for Boden (ibid.). Rather, with suitable hardware such a procedural specification can be executed, and results observed as output (ibid.).

However, even if we grant this to Boden, the "semantics" at issue here is minimal compared with the sense in which Searle speaks about understanding, as outlined previously. Searle's position, that formal symbol-manipulation does not offer sufficient grounds to claim semantics (Searle in Boden 1990: 70-1), remains intact. For Searle, "as

41 For a brief definition of these terms, ‘syntax’ and ‘semantics’, please refer to section 4.3. The Chinese room in focus.


long as a program is defined in terms of computational operations on purely formally defined elements" (Searle in Boden 1990: 71), we cannot expect from a machine, when it executes an input and output operation, answers that resemble the kind of answers that could be given by an English-speaking Searle. Searle says

whatever purely formal principles you put into a computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything (ibid.).

Searle claims that his understanding of English is not a result of operating with any formal program, and so comparably, there cannot be any formal principles you put into the computer that will be sufficient for understanding (ibid.).

Basden neatly points out what Boden misses:

[w]hat Searle meant by semantics was the semantics of Chinese characters, the squiggles and squoggles, and not the procedural element of the rule-following program (2007: n.p.).

Further exploring the possibility that the new premise, "computer programs are not concerned only with syntax" (1990: 99), offers, Boden emphasises that, as is generally agreed, a theory of programs and of computation must acknowledge that an essential feature of computer programs is that "they make things happen" (ibid.), unlike in symbolic logic or computational logic. As a result, Boden argues, following Smith (1982), that what happens in computer programming does not only involve syntactic formulae (Boden 2006: 1424), since things happen when a program is executed (ibid.). For her, "computation is a causal concept" (ibid.). Smith (1982), according to Boden,

was thinking primarily not of the program’s electronic implementation, but of causation in the virtual machine. On that view, the expressions of any given programming language aren’t mere empty syntax (ibid.). That is, procedures that result from programming do

refer (sic) to virtual objects and causal processes such as variables (whose values may or may not differ at different times), numbers, procedures, strings, and larger structures containing these (ibid.).

Searle's response, in my view, would be to contest Boden's assertion. He says

it does not matter how well the system can imitate the behaviour of someone who really does understand, nor how complex the symbol manipulations are; you cannot milk semantics out of syntactical processes alone (1997: 12).


He explains that it is the notion of programming that constitutes the core of the misunderstanding of his position (ibid.).

Searle starts by clarifying that he does not deny that a "given physical computer might have consciousness as an 'emergent property'" (ibid.). To avoid the seeming contradiction in his view, Searle restates what he argues against: that solely by having an implemented program we have understanding. This is the claim of SAI: that solely by executing a program, a mind is guaranteed (ibid.).

To encapsulate the view:

the thesis of Strong AI is not that a computer might "give off" or have mental states as emergent properties, but rather that the implemented program, by itself, is constitutive of having a mind. The implemented program, by itself, guarantees mental life (Searle 1997: 14).

As I pointed out earlier, Searle is worried about how the word "understanding" is interpreted and applied in engaging his Chinese room argument. To reiterate, in his thought experiment Searle uses "understanding" in a specific sense: for him, "there are clear cases in which 'understanding' literally applies and clear cases in which it does not apply" (Searle in Boden 1990: 71). Searle is clear when he reminds us that the program is defined purely syntactically, and that syntax by itself is not enough to guarantee the presence of mental, semantic content (1997: 14). Searle asserts that the confusion about his position lies in the type of question being asked. He says:

…the question, ‘Is consciousness a computer program?’ lacks a clear sense. If it asks, ‘Can you assign a computational interpretation to those brain processes which are characteristic of consciousness?’ the answer is: you can assign a computational interpretation to anything. But if the question asks, ‘Is consciousness intrinsically computational?’ the answer is: nothing is intrinsically computational. Computation exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon. This is an obvious point. I should have seen it ten years ago but I did not (Searle 1999: 7). I now turn to an objection to Searle’s position on the syntax and semantics debate that was proposed by the Churchlands in the 1990s.

6.2.2.4. The Churchlands in the Luminous Room

Vincent Mooney, in his 1997 article "Searle's Chinese Room and its Aftermath", claims that Paul and Patricia Churchland (1990) raise a novel objection that directly


disputes Searle's argument, especially his anchoring premise that "syntax by itself is neither constitutive of nor sufficient for semantics" (Searle 1990: 27). The Churchlands complain about what is, in their view, the "question-begging character of Searle's axiom 3" (1990: 34), which contributes to his conclusion that "programs are neither constitutive of nor sufficient for minds" (ibid.).

Churchland and Churchland (1990) base their objection on an analogy with electromagnetic forces and conclude that "syntax can generate semantics" (Mooney 1997: 28). To outline their analogous "refutation" of the electromagnetic theory of light, they use an imagined example. They postulate a man producing electromagnetic waves by waving a bar magnet about in a dark room. Waving the magnet does not illuminate the room, and thus one might conclude that electromagnetic waves "are neither constitutive of nor sufficient for light" (Churchland and Churchland 1990: 35).

Similarly, say Churchland and Churchland, the Chinese room “looks” semantically dark (Mooney 1997: 28). So,

the intuited “semantic darkness” in the Chinese Room no more disconfirms the computational theory of mind than the observed darkness in the Luminous Room disconfirms the electromagnetic theory of light (Hauser n.d.: 21). As Mooney explains, for the Churchlands, the ‘semantic darkness’ in the Chinese room “does not demonstrate that it is so. We may have an uninformed commonsense understanding of semantic and cognitive phenomena” (Mooney 1997: 28). As such, they conclude that Searle's Chinese room argument exploits our failure to understand these phenomena (ibid.) and so insist that “Searle is […] mistaking the limits on his (or the reader's) current imagination for the limits on objective reality” (ibid.).

Searle responds to Churchland and Churchland (1990) by asserting that arguments by analogy are usually quite weak, because it is always difficult to ensure that the two cases are analogous (1990: 30). Searle's claim is that in this case the analogy breaks down. In particular, the Churchlands' analogy fails since, as he asserts, "formal symbols have no physical, causal powers" (ibid.).


Recall, says Searle, that strong AI rests on formal programs. There is a dilemma for the Churchlands' analogy between syntax and electromagnetism: either syntax is defined in terms of purely formal mathematical properties or it is not. If it is, then there is no analogy, because syntax has no physical and no causal powers. If syntax is not purely formal, then there is indeed an analogy, but not one applicable to strong AI (Mooney 1997: 29). It seems, then, that the argument by the Churchlands does not provide any assistance in bolstering a position like that of Boden.

I now turn to the other angle that Boden uses to shore up her English reply in her attempted escape from the Chinese room. As I pointed out earlier, she underlines the fact that computer programs "make things happen". Turning to programming-language, Boden believes it does not only express representations, but also brings out what she calls the "representational activity" of certain machines (Boden 1990: 99). For Boden, "representation is an activity rather than a structure" (ibid.).

To support her view, Boden turns to the computer scientist B.C. Smith (1982), who argues that "programmed representations, […], are inherently active, and that an adequate theory of the semantics of programming-languages would recognize the fact" (1990: 100). Douglas Hofstadter, among others, argued for the same view that mental representations are active (Boden 1990: 99). He expresses a connectionist42 approach. For him,

The brain itself does not "manipulate symbols"; the brain is the medium in which the symbols are floating and in which they trigger each other (Hofstadter 1985: 648).

Boden therefore takes Hofstadter as someone who is more sympathetic to "connectionist" than to "formalist" psychological theories (1990: 99). In her view,

connectionist approaches involve parallel-processing systems broadly reminiscent of the brain, and are well suited to model cerebral representations, symbols, or concepts, as dynamic (1990: 99-100).

However, as Boden admits, Smith worries that notions of intentionality and representation are marred by unclarity within the professional community of computer scientists (ibid.). Consequently, for Smith, the understanding that computer scientists claim regarding these two phenomena is mostly intuitive

42 This view was discussed earlier see page 15.


(ibid.). Therefore, in discussing programming-languages, Smith highlights many inconsistencies in computer science; especially worrisome for him is the claim computer scientists make, which gives "too complete a theoretical separation between a program's control-functions and its nature as a formal-syntactic system" (Boden 1990: 100).

Despite this, Boden weighs in by going into some detail about the "dual-calculus" approach to programming (ibid.). According to her, this approach draws a theoretical distinction between a "declarative (or denotational) representational structure and the procedural language that interprets it when the program is run" (ibid.). In this case, the knowledge-representation and the interpreter are at times written in two distinct formalisms, such as the predicate calculus and LISP. Languages such as LISP (LISt-Processing language) and PROLOG (PROgramming in LOGic), on the other hand, allow facts and procedures to be expressed in formally similar ways (ibid.).

As an illustration of the difference, Boden gives four ways of writing the same representational structure (Boden 1990: 101)43. I focus only on the frame-based representation, which works with a slot-filling44 mechanism. (Analogously, as Boden points out, people who knew that Searle speaks no Portuguese would not give Searle-in-the-room a Portuguese rule-book unless they were prepared to teach him the language first.) (ibid.). For the present purposes, the illustration shows that

a fundamental theory of programs, and of computation, should acknowledge that an essential function of a computer program is to make things happen (Boden 1990: 102).

In my view, "making things happen" suggests an appropriate following of the rule-book (in an "understanding" manner). Recall, however, that Searle, without any understanding of Chinese, is able to carry out the squiggle-squoggle activity.
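To give a rough flavour of a frame-based representation with slot-filling, and of the claim that running a program "makes things happen", here is a minimal sketch of my own in Python; the "restaurant visit" frame, its slots, and the fill_slot procedure are hypothetical illustrations, not Boden's own example.

```python
# A hypothetical frame: named slots, each with a constraint on what may fill it.
restaurant_frame = {
    "diner": {"value": None, "requires": str},
    "dish":  {"value": None, "requires": str},
    "bill":  {"value": None, "requires": float},
}

def fill_slot(frame, slot, value):
    """Fill a slot only if the candidate filler satisfies the slot's constraint.
    (Compare: Searle-in-the-room is not handed a Portuguese rule-book he could not use.)"""
    if not isinstance(value, frame[slot]["requires"]):
        raise TypeError(f"{slot!r} cannot be filled by {value!r}")
    frame[slot]["value"] = value  # executing the procedure makes something happen

fill_slot(restaurant_frame, "diner", "Rachel")
fill_slot(restaurant_frame, "dish", "sadza")
fill_slot(restaurant_frame, "bill", 120.0)
```

The sketch is meant only to show that, when the program is run, the representation is acted upon: constraints are checked and slots are filled. Whether that causal activity amounts to even the minimal "understanding" Boden has in mind is precisely what is at issue between her and Searle.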

43 There are four ways of writing a representational structure: 1) a list-structure, 2) a frame-based representation, 3) a formula of the predicate calculus, and 4) an English sentence (Boden 1990: 100). 44 According to Mihai Surdeanu, "slot-filling" is a task in knowledge bases (KBs); it can be viewed as information extraction, or alternatively as a question-answering task, where the questions are static but the target changes (http://surdeanu.info/kbp/index.php). I do not attempt a detailed exploration of this research area; I limit myself to the definition given here. A more detailed discussion of slot-filling can be found in his article titled "Overview of the TAC2013 Knowledge Base Population Evaluation: English Slot Filling and Temporal Slot Filling".


6.2.2.4.1. Searle and the relevance of the Frame problem.

Larry J. Crockett accused Searle of committing an anachronism, arguing that Searle fails to pay attention to the frame problem in "Minds, Brains, and Programs" (1994: 159). The frame problem, as an abstract epistemological problem, is relevant in discussing how the computer program supposedly "causes" things to happen. Very briefly, the frame problem45 refers to a relatively "narrow, technical problem in 'logic-based' approaches to Artificial Intelligence" (Samuels 2010: 6). Crockett concludes that

Searle's inattention to the frame problem and how it bears on mature conversational abilities in a computer is that Searle doesn't see that the frame problem is pertinent (1994: 159)46. As Peter Morton points out, the frame problem is concerned with

writing an algorithm for a task that will anticipate the results of actions carried out but without having to provide instructions for everything that will not result from carrying out an action (1996: 261).

Searle argues that computer intelligence, in other words understanding, "is not merely difficult to achieve, but impossible because of the very nature of computers" (ibid.). Despite this, I contend that Crockett (1994) is possibly right to question the credibility of Searle's Chinese Room argument on the basis of what he calls the evident insolvability of the frame problem, in his view "something that seriously undercuts the plausibility of Searle's argument" (1994: 147).
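The flavour of the frame problem can be suggested with a small sketch of my own in Python (not drawn from Crockett or Morton; the world description and the action are hypothetical): a program that updates a description of the world after an action must, on the naive approach, say explicitly which facts the action leaves unchanged.

```python
# A hypothetical world description and a naive update rule.
world = {"door": "closed", "light": "off", "cat": "asleep"}

def open_door(state):
    """Effect of the action: the 'door' fact changes."""
    new_state = {"door": "open"}
    # "Frame axioms": every fact the action does NOT change must be carried
    # over explicitly. With many facts and many actions, such axioms multiply,
    # and deciding which of them are even worth considering is the hard part.
    new_state["light"] = state["light"]
    new_state["cat"] = state["cat"]
    return new_state

print(open_door(world))  # {'door': 'open', 'light': 'off', 'cat': 'asleep'}
```

Crockett's worry, recall, is epistemological rather than merely notational: the difficulty lies in knowing which beliefs to revise at all, and he takes this to bear directly on whether a rule-following program could sustain mature conversational abilities.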

6.2.2.5. Programming Language and Representation

Computers are operated using specific language code, referred to as a programming language. Searle believes that the nature of computers, defined as programmed artefacts, makes them unsuitable as candidates for understanding. As he says, "the program is not defined in terms of its powers to cause higher-level

45 The task in which the problem arises was first formulated in McCarthy 1960 (Dennett in Boden 1990: 148). McCarthy and Hayes coined the term in their 1969 paper. Larry Crockett takes the frame problem to be an abstract epistemological problem, discovered by AI thought experimentation. When a cognitive creature, an entity with many beliefs about the world, performs an act, the world changes and many of the creature's beliefs must be revised or updated. How? It cannot be that we perceive and notice all the changes . . . and hence it cannot be that we rely entirely on perceptual input to revise our beliefs. So we must have internal ways of up-dating our beliefs that will fill in the gaps and keep our internal model, the totality of our beliefs, roughly faithful to the world (1994: 72-73). 46 This is actually not quite accurate – Searle indirectly considers the Frame problem in a number of places, including in his article titled, "Is the Brain's Mind a Computer Program?" (1990), and “Twenty years in the Chinese room” (2002).


features of the system, because it has no such powers" (in Preston and Bishop 2002: 56-57). Computers cannot attach meaning since they do not have an intrinsic connection to the world; recall that computers are defined in terms of their syntactical or formal structure (ibid.). The significance of a detailed enquiry into the frame problem thus becomes apparent. In Crockett's view, "Searle maintains that computer programs lack the semantical content that is a necessary ingredient in thinking" (1994: 146). I agree with the point that Crockett advocates: often, conditions that might appear to be necessary to generate some phenomenon turn out to be only sufficient (ibid.). He says,

we have a habit of mistaking a sufficient for a necessary condition, and it is an empirical question whether a nonthinking system of some kind will turn out to be a sufficient condition for such language competency (ibid.).

As stated earlier, Boden (1990) takes it as crucial that, fundamentally, a theory of programs and of computation must acknowledge what is essential for a computer program: it "makes things happen" (Boden 1990: 102). This is unlike symbolic logic, which can be taken as juggling uninterpreted formal calculi, and computational logic, which studies abstract timeless relations in mathematically specified "machines" (ibid.). In her view, computer science does not fit either of these descriptions (ibid.).

Thus, according to Boden, Searle (1980) mistakenly supposes programs are pure syntax. Smith's argument allows her to conclude that the "characterization of computer programs as all syntax and no semantics is mistaken" (ibid.). In her view,

the inherent procedural consequences of any computer program give it a toehold in semantics, where the semantics in question is not denotational, but causal (ibid.).

Consequently, for her, a robot might have causal powers that enable it to refer. But for Searle, a computer has a different type of causal power and will never understand in the same way as a human or non-human animal.

Aaron Sloman (1986a; 1986b) also draws on Smith's (1982) idea in his "discussion of the sense in which programmed instructions and computer symbols must be thought of as having some semantics, howsoever restricted" (ibid.). Boden discusses this as causal semantics. In her view, in a "causal semantics the meaning of a symbol is to


be sought by reference to its causal links with the phenomena" (ibid.). "Phenomena" here refers to things outside ourselves, such as external objects or events that influence an activity; in computers, however, the relevant causal links involve purely internal computational processes (ibid.). This can be compared to what happens to Searle-in-the-room when he uses English words to understand the rule book, or to what happens when a computational program is executed. The point that Boden seeks to highlight is that the program cannot use the symbol "restaurants" to mean restaurants in the way an English speaker does. Despite that, in her view, such a program has its internal symbols and procedures, and it is those that embody some minimal understanding of certain matters (relevant to computers), for example what it is to compare two formal structures (1990: 103).
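Boden's example of a matter about which a program might have some minimal, purely internal "understanding", what it is to compare two formal structures, can be given a concrete shape. The sketch below is my own illustration in Python: the symbols it manipulates have no links to restaurants or to anything else outside the machine; whatever "meaning" they have is exhausted by their causal role in the internal comparison procedure.

```python
# A hypothetical internal procedure: compare two formal structures for shape.
def same_structure(a, b):
    """True if a and b are nested lists of the same shape (atoms' identity ignored)."""
    if isinstance(a, list) and isinstance(b, list):
        return len(a) == len(b) and all(same_structure(x, y) for x, y in zip(a, b))
    return not isinstance(a, list) and not isinstance(b, list)

# The atom "restaurant" plays no different role from the atom "R1": only the
# structure matters to the program, never the worldly reference of the symbols.
print(same_structure(["restaurant", ["menu", "bill"]], ["R1", ["x", "y"]]))  # True
print(same_structure(["restaurant"], ["menu", "bill"]))                      # False
```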

Boden asserts that the ‘understanding’ involved is so minimal that it is not at all relevant in the argument (ibid.). Boden (1988) elaborates,

the single-minded nature of virtually all current computer programs (despite an early exception [Reitman, 1965]) renders them fundamentally unsuited to the representation of psychodynamic matters […] the complexity of human purposes is even greater, because of the enormous representational potential provided by natural language and the variety of cultural influences on our beliefs and desires (1988: 262). Although Searle nowhere directly addresses these worries, Searle’s response would be a reiteration of his point that a program does not have a physical nature. He says:

To the frequently asked question, 'Well, what is the difference between the brain, which after all, functions by a set of microprocesses such as neurons firing at synapses, and the computer with its microprocesses of flip flops "and" gates and "or" gates?', the answer can be given in one word: causation. […] The point I make is not that it is counter-intuitive that computers should be conscious; I have no interest whatever in such intuitions. The point, rather, is that the implemented program is insufficient by itself to guarantee the presence of consciousness (Searle 2002: 54-55).

I have now discussed both Boden's first and second prong, the core of Boden's argument being that Searle's efforts against computational psychology are not based on reliable evidence (1990: 103). She says that computational psychology is not in principle incapable of explaining how meaning attaches to mental processes (ibid.). The core of the disagreement between Searle and Boden seems to lie at the level of causation, and it seems to me to make sense that the Frame problem is not directly


addressed by him, due to the underlying view of the computer he holds, as expressed in the quotation above.

6.2.2.6. Chrisley's Weak Strong AI (WSAI) thesis: In defence of Boden's English reply

My discussion now turns to one of the few articles that directly deal with Boden's argument, Chrisley's (1995) article "Weak Strong AI: An elaboration of the English Reply to the Chinese Room". The article only considers Boden's response to Searle's so-called negative claim, "that purely formalist theories cannot explain mentality" (Boden 1990: 92), and not his "positive claim", namely Searle's view that "intentionality must be biologically grounded" (in Boden 1990: 92).

Chrisley (1995) develops Boden’s (1990) English Reply by discussing further objections that could be attempted against his reading of it. Chrisley seeks to persuade us that Boden’s position in response to Searle’s Chinese Room (CR) argument is a viable position. Recall again that Boden’s English reply points out

that there is understanding in the Chinese Room: the understanding required to recognize the symbols, the understanding of English required to read the rulebook, etc. (Chrisley 1995: 1).

In Boden's defence, Chrisley shows how discerning the English reply was. To begin his defence, Chrisley proposes that Boden's position be summed up as what he labels weak strong AI (WSAI), the claim that "there are cases of understanding that a computer can achieve solely by virtue of that computer running a program" (ibid.). WSAI stands in between strong artificial intelligence (SAI) and weak artificial intelligence (WAI). As discussed previously, SAI is the claim that

the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states (Searle in Boden 1990: 67),

and WAI is the claim that "the principal value of a computer in the study of the mind is that it gives us a very powerful tool" (ibid.). Searle contends that his Chinese Room argument is constructed to show that SAI is impossible. Given the English reply, Chrisley believes that Searle's Chinese room cannot be used to argue against WSAI (1995: 1).


Chrisley discusses eleven possible objections to WSAI. WSAI, as pointed out above, is a claim which is essentially Boden's English response to Searle's Chinese room argument. By answering these objections, Chrisley means to strengthen Boden's English reply against Searle's Chinese room (CR) argument. The objections are as follows: 1) Irrelevance objection, 2) Direct objection, 3) Behavioural objection, 4) Module objection, 5) Overkill objection, 6) Unconscious objection, 7) Conscious objection, 8) Modal objection, 9) objection, 10) Regress objection and 11) Consistency objection. These objections are examples of what could be proposed against WSAI by those who resist Boden's English reply, and answering them is part of defending it as a strong position against Searle's (CR) argument. As I proceed, let me recall Boden's observation of Searle whilst he is in the CR47. According to Chrisley:

Searle does understand while implementing a program in the CR. Although Searle might deny, as before, that there is not this kind or that kind of understanding going on in the CR, he cannot deny that there is some understanding going on (1995: 2).

For my purposes, I do not attempt a detailed discussion of all the above objections, but limit myself to an assessment of two objections to Chrisley's assertion that "WSAI stands" (ibid.), viz., the Overkill objection (OO) and the Modal objection (MO); in my view, Chrisley's replies to these two objections offer the strongest attempt to shore up Boden's argument.

I begin by discussing the Overkill objection (OO) to WSAI, which states:

1) Searle's understanding is the problem here, since Searle and his understanding (of English) are not needed to run a program in the CR. Searle, who understands English, could be replaced by a simple mechanism that everyone would agree does not understand anything at all (Chrisley 1995: 3).

Chrisley's reply to this objection is that it "misunderstands the argument" (ibid.). In his view, this is because the CR argument hinges on Searle executing the program (ibid.). Searle is the obvious locus of computational action in the CR, and he is essential for reporting that understanding is non-existent inside the CR (ibid.). In other words, Searle is the "lifeline" of the CR argument and cannot be

47 Chrisley (1995: 2) says, (We shouldn't call this room the Chinese Room, since the symbols need not have anything to do with Chinese, syntactically or semantically. So let "CR" now stand for Computational Room.)


replaced by something that does not understand, since then there could be no report back as to whether there is or is not understanding. For Chrisley, if this were not the case, then the systems reply to the CR would follow, which would be unacceptable to Searle (ibid.), and so WSAI holds against this objection.

I now turn to the Modal objection (MO).

2) The Modal objection claims that the WSAI argument misunderstands what Searle’s job is in showing that WSAI is false (Chrisley 1995: 4). A proponent of this objection would say that Searle does not need to show that there is no understanding in the CR, but rather only that any understanding is not obtained only as a result of running a programme (ibid.). Notably, this point is similar to another point made earlier in this mini-dissertation. The proponent of the modal objection would claim that that is easy to achieve, simply by claiming that the understanding (of English) that is present in the CR is the result of Searle’s intrinsic properties, and not at all a result of a particular program being executed. A supporter of this objection would say that Searle’s presence is important, since without his presence, there would be no understanding, and with his presence, there would only be understanding of English in Searle. As such, “it is not by virtue of running the program that there is understanding; it is an implementation detail (that Searle is involved) that is responsible for any understanding” (Chrisley 1995: 4).

Chrisley (1995) proposes that to defeat this objection, he needs to show that two conditions have not been shown to be false. The two conditions are “that understanding in the CR could depend on the program being run and that understanding in the CR need not depend on anything (it need not depend on Searle being part of the implementation)” (ibid.).

For the first condition, Chrisley says "it is true that no matter what program is being run, Searle will have some understanding" (1995: 4). However, WSAI does not claim that, no matter what program is run, the CR will result in understanding just on the basis of executing an algorithm (ibid.). Rather, WSAI requires just one case of a program whose execution guarantees understanding (ibid.). Thus, it could be that for every program but one, only Searle is


doing any understanding. This would suggest that “it has not been shown that the presence of understanding does not depend on the program being run” (ibid.).

Searle’s response, I think, would be to say that the issue of understanding,

[...] is an empirical possibility […] in the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements (in Boden 1990:70). To add to this, Searle believes whatever “purely formal principles you put into the computer, they will not be sufficient for understanding” (in Boden 1990: 71). His belief is based on the idea that a human will also be able to follow the formal principles without understanding anything (ibid.).

As pointed out earlier, Searle has an exact meaning in mind when he uses the word understanding in the case of the Chinese room. To repeat, in his view, "there are clear cases in which understanding literally applies and clear cases in which it does not apply" (ibid.). Searle admits that there are cases where metaphor and analogy are applied, for example when we say things like "the door knows" or "the adding machine understands", but these cases do not fit his views about understanding (in Boden 1990: 72). Searle's view is that computer understanding is not just partial or incomplete; it is zero (ibid.).

The second condition offered to defeat the modal objection concerns how understanding in the Chinese room could be shown to depend on Searle's presence (Chrisley 1995: 4). The point Chrisley makes is that the objector would have to show that understanding needs Searle: that when he is not present, there is no understanding to be reported (ibid.). But, as Chrisley observes, the Chinese room is powerless to show this: it only covers cases where Searle is present (ibid.). This is about an implementation detail (the activity that Searle carries out) and not about the running of the program: assuming that only Searle is responsible for understanding and that there can be no understanding by virtue of running a program alone would be to beg the question (ibid.).

Searle's Chinese room argument was meant to show that "a true understanding of understanding cannot be given in terms of formal, purely syntactic programs" (in Chrisley 1995: 4).


What is important to note is that while Chrisley admits that he may agree with Searle's conclusion (that a true understanding of understanding cannot be given in terms of formal syntactic programs), he thinks that the Chinese room does not show this (Chrisley 1995: 5). As a result, Chrisley's article is not focused on the conclusion of the Chinese room argument; rather, his worry is about the argument that leads to the conclusion.

Chrisley's replies to the Overkill and the Modal objections have much strength, but they do not, I think, successfully help Boden's English reply in casting doubt on Searle's position. This is because of Searle's dual understanding of understanding. If we accept this dual sense, then it seems to me that Chrisley's treatment of both the modal and the overkill objections loses some of its force. Admittedly, the Chinese room is problematic in its format, but without Searle in the room, the thought experiment will not function at all. Once Searle is in the room, the dual understanding of understanding is necessary to sustain the experiment.

In this section, I focused on Boden's second prong, investigating what Boden calls Searle's background assumption that programs are pure syntax and so cannot yield semantics. I investigated how her so-called English reply is employed. The English reply to the Chinese room argument argues that there is understanding in the Chinese room: 1) the understanding required to recognize the symbols, and 2) the understanding of English required to read the rulebook, and so on (Chrisley 1995: 1, my emphasis). My focus was to explore how the English Reply clarifies Boden's claim of "some understanding" in the Chinese room (Boden 1990: 97), in light of Searle's background assumption. The work of the Churchlands was engaged in an attempt to shore up Boden's argument, but was shown to be insufficient. My penultimate stop in revisiting the Chinese room was an evaluation of how Boden, without success, sought support from Smith (1982) and Sloman (1986a; 1986b). Boden takes the work of these two as proof that computer programs are NOT all syntax and no semantics. Lastly, I looked at Chrisley's (1995) weak strong AI (WSAI) perspective. Overall, I showed that Searle's position remains defensible.


7. Conclusion

In this mini-dissertation, I evaluated Boden's reply to Searle's famous Chinese Room argument, a reply that has received relatively little attention in the literature to date, and concluded that her attempt to escape from the Chinese room is open to challenge. I first looked at the essential concepts of Functionalism, Artificial Intelligence (AI) and Intelligence, and I also investigated what forms the background to Searle's thought experiment, viz., the Turing Test and computers: their nature and how they function. I then zoomed in on the Chinese Room argument, first pointing out the important distinction that Searle makes between Strong and Weak Artificial Intelligence. I then moved on to focus in detail on the Chinese Room argument itself and six replies that it generated and which were addressed by Searle. Two replies (the System and Robot replies) were given the most attention because of their direct links to Boden's English reply.

I then looked specifically at Boden's critique of what she calls Searle's "positive claim", his view that intentionality must be biologically grounded (Searle in Boden 1990: 92). In giving a detailed discussion of Boden's double-pronged critique of Searle's positive claim, i.e. that (1) Searle's claim that intentionality is biologically grounded depends on bad "biological analogies" (ibid.) and (2) his claim that intentionality is biologically grounded depends on unreliable intuitions (ibid.), I attempted to show that both of these prongs are problematic. Specifically, I showed that Boden's argument against Searle's use of the argument by analogy is shaky by questioning her view on intentionality and photosynthesis. In my view, her claim that we know a lot about photosynthesis and very little about intentionality is suspect, and I argued that Boden's position is a result of her implicit adherence to the view that the physical is amenable to scientific investigation, whereas the mental is not. I then questioned her claim that Searle's argument was based on "unreliable intuitions", showing that he does provide an argument for his position, which I characterised as a kind of reductio.

I then discussed what Boden calls Searle’s “negative claim - that purely formalist theories cannot explain mentality” (ibid.). For this second claim, I discussed the two prongs that Boden devises to probe Searle’s claim. The first prong aimed directly at


the example of the Chinese Room, whilst the second prong attacked the background assumption upon which the Chinese Room depends, namely that "computer programs are pure syntax" (ibid.).

In terms of the first prong, I showed how Boden explains Searle's response to the Robot reply as a claim for victory. That victory claim is firstly based on the understanding that cognition is not solely a matter of formal symbol manipulation but requires a set of causal relations with the outside world (ibid.). Secondly, in Boden's view, Searle's claim for victory is based on pointing out that to add movement capacity to a computational system is not to add intentionality (Boden 1990: 94-95). I have shown that Boden rejects Searle's argument as a rebuttal of the robot reply, since in her view "it draws a false analogy" (1990: 95). For her, computationalists do not ascribe intentionality to the brain and so would not credit it with full-blooded intentionality (Boden 1990: 96). Accordingly, Searle's description of the robot's pseudo-brain as understanding English involves a category-mistake, comparable to treating the brain as the bearer, as opposed to the causal basis, of intelligence (ibid.). I argued that the difference between Boden and Searle here lies in the concept of causation and its role in intentionality. I provided an investigation of Searle's view here, and tested his position against a theorist who argued against Searle, Overskeid, showing how Overskeid's objections do not affect Searle's position.

I then moved on to discuss the second prong, where Boden investigates what she calls Searle’s background assumption that programs are pure syntax. I have tried to show how she employs her English reply to argue that the “…instantiation of a computer program, whether by man or by manufactured machine, does involve understanding - at least of the rule-book” (Boden 1990: 97).

In discussing the syntax and semantics issue, I discussed Searle's and Boden's work on understanding in some detail, highlighting how differences in what each takes the term to mean have resulted in the misunderstanding that has persisted in this highly contested area of philosophical debate. I have shown how Searle, on the one hand, uses the word "understanding" in two distinct senses. I argued that Boden, on the other hand, seems to conflate these two senses of "understanding" postulated by Searle into one, and uses this to show that Searle's position is flawed. I have


contended that raising the issue of understanding makes an important point against Chrisley’s development of Boden’s position into the WSAI thesis. What is significant is that Boden’s argument highlights two very important issues, viz., “understanding” and the “causal powers of the brain,” and these two areas of research are deserving of more attention in the literature.

In sum, in re-visiting the Chinese room by considering Boden's reply, I have argued that Searle's claims, both "positive" and "negative", cannot be viewed as mere hunches (as Boden suggests). I argued that Boden's assessment of Searle's two-pronged critique of computational psychology, in what she calls his positive and negative claims, does not constitute a completely convincing rebuttal of Searle's position.


Bibliography

Basden, A. (2007). "Fresh Light thrown into the Chinese Room". http://www.allofliferedeemed.co.uk/basdenchineseroom.htm. Accessed 05/02/2015.

Berkich, D. (2010). "Minds and Machines." https://philosophy.tamucc.edu/courses/spring-2016/minds-and-machines/introduction?destination=node%2F2704. Accessed 11/02/2015.

Boag, Z., and Searle, J. (2014). Interview with John Robert Searle. Searle: It upsets me when I read the nonsense written by my contemporaries. New Philosopher. January 25, 2014. (www.newphilosopher.com/articles/john- searle-it-upsets-me-when-i-read-the-nonsense-written-by-my- contemporaries/). Accessed 12/12/2015.

Bode, C., and Dietrich, R. (2013). Future Narratives: Theory, Poetics, and Media- Historical Moment. Berlin: Walter de Gruyter.

Boden, M.A. (1977). Artificial Intelligence and Natural Man. Hassocks, Sussex: The Harvester Press.

------(1988). Computer Models of the Mind: Computational Approaches in Theoretical Psychology. Cambridge: Cambridge University Press.

------(1990). The Philosophy of Artificial Intelligence. Oxford: Oxford University Press.

------(2004). The Creative Mind: Myths and Mechanisms. (2nd Ed.). New York: Routledge.

------(2006). Mind as Machine. A History of Cognitive Science. Vol. 1. Oxford: Clarendon Press.

Brentano, F. (1981). Psychology from an Empirical Perspective. Edited by Linda L. McAlister ; translated by Margarete Schättle and Linda L. McAlister. London: Routledge & Kegan Paul.

Chalmers, A.F. (1999). What is this thing called science? Buckingham: Open University Press.


Chalmers, D.J. (2002). Philosophy of Mind: Classical and Contemporary Readings. Oxford: Oxford University Press.

Churchland, P.M., and Churchland, P.S. (1990). "Could a Machine Think?" Scientific American. 32-37.

Chrisley, R.L. (1995). “Weak Strong AI: An elaboration of the English Reply to the Chinese Room”. users.sussex.ac.uk/~ronc/searle.ps Accessed 22/06/ 2016.

Cole, D. (2014). "The Chinese Room Argument", The Stanford Encyclopaedia of Philosophy (summer 2014 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/sum2014/entries/chinese-room/. Accessed 10/02/ 2015.

Cooney, B. (2000). The Place of the Mind. Belmont, CA: Wadsworth.

Copeland, J.B. (2004). The Essential Turing: Seminal writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial life plus the secrets of Enigma. Oxford: Oxford University Press.

------(2002). “The Chinese Room from a Logical Point of View”, in J. Preston and M. Bishop (eds.), Views into the Chinese Room: New Essays on Searle and Artificial intelligence Oxford: Oxford University Press, 104–122.

Costa, P., and Costa, P. (2015). "How many languages are spoken in China?" in Linguese. http://linguese.com/blog/how-many-languages-are-spoken-in-china. Accessed 27/12/2016.

Crane, T. (2003). The Mechanical Mind: The Philosophical Introduction to Minds, Machines and Mental Representations. (2nd Ed.) New York: Routledge.

Crockett, L.J. (1994). The Turing Test and the Frame Problem: AI’s Mistaken Understanding of Intelligence. Norwood, New Jersey: Ablex Publishing Corporation.

Damper, R.I. (2006). “The Logic of Searle’s Chinese Room Argument”. Link.springer.com/article/10.007%2fs11023-006-9031-5#page-1. Accessed 25/03/2015.


Dennett, D.C. (2002). “Cognitive Wheels: The Frame Problem of AI” in M. Boden (ed.) (1990). The Philosophy of Artificial Intelligence Oxford: Oxford University Press. 145-169.

French, C.S. (1996). Computer Science. 5th ed. London: Letts Educational.

Gabbay, D., and Guenthner, F. (1989). Handbook of Philosophical Logic. Vol. IV: Topics in the Philosophy of Language. Netherlands: Springer. 491-512.

Goldin, G.A., & Kaput, J.J. (1996). “A Joint Perspective on the Idea of Representation in Learning and doing Mathematics.” In L. Steffe and P. Nseher (eds.), Theories of Mathematical Learning Mahwah, NJ: Lawrence Erlbaum, 397-430.

Govindjee (n.d.). "Excitation Energy Transfer and Energy Migration: Some Basics and Background". A portion of the text has been modified from Rabinowitch and Govindjee (1969), John Wiley & Sons. http://www.life.illinois.edu/govindjee/biochem494/foerster.htm. Accessed 22/08/2016.

Halpern, M. (2006). “Computer can think.” The New Atlantis. Accessed 06/05/2015.

Haugeland, J. (2002). ‘Syntax, Semantics, Physics’, in J. Preston and M. Bishop (eds.) Views into the Chinese Room: New Essays on Searle and Artificial intelligence (Oxford: Oxford University Press), 379–392.

Haught, J.F. (2010). Making Sense of Evolution: Darwin, God and the Drama of Life. Louisville, KY: Westminster John Knox Press.

Hauser, L. (n.d.). “Searle’s Chinese Room Argument: Annotated Bibliography” http://host.uniroma3.it/progretti/kant/field/chinesebiblio.html: 1-32. Accessed 21/10/2015.

Harnad, S. (1989). “Minds, Machines, and Searle.” Journal of Theoretical and Experimental Artificial Intelligence 1. 5-25.

------(1991). “Other bodies, other minds: a machine incarnation of an old philosophical problem.” Minds and Machines 1: 5-25.


------(2002).‘Minds, Machines, and Searle: What's Right and Wrong about the Chinese Room Argument’, in J. Preston and M. Bishop (eds.) Views into the Chinese Room: New Essays on Searle and Artificial intelligence. Oxford: Oxford University Press: 294–307.

------(2005) “Searle's Chinese Room Argument”. In Encyclopaedia of Philosophy. London: Macmillan.

Hasemer, T., and Domingue, J. (1989). Common LISP Programming for Artificial Intelligence. Kent: Addison-Wesley Publishing Company.

Hofstadter, D. R. (1985). “Waking Up from the Boolean Dream; Or, Subcognition as Computation.” In D. R. Hofstadter, Metamagical Themas: Questing for the Essence of Mind and Pattern. New York: Viking: pp. 631- 65.

Jacob, P. (2014). “Intentionality”, The Stanford Encyclopaedia of Philosophy (Winter 2014 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu.archives/win2014/entries/intentionality/. Accessed 16/02/2016.

------(1999). “State Consciousness Revisited” in Consciousness and Intentionality: Models and Modalities of Attribution. (ed.). Denis Fissette. Dordrecht: Kluwer Academic Publishers: 9-32.

Levin, J. (2013). "Functionalism", The Stanford Encyclopaedia of Philosophy (Fall 2013 Edition), Edward N. Zalta (ed.). Accessed 12/05/2015.

Luger, F.G. (2009). Artificial Intelligence: Structures and Strategies for Complex Problem Solving. 6th ed. Boston, M. A: Pearson Education, Inc.

Maloney, J.C. (1987). “The Right Stuff” Synthese. Vol. 70, No. 3. 349-372.

Marais, A., Sinayskiy, I., Petruccione, F. & van Grondelle, R. (2015). “A Quantum Protective Mechanism in Photosynthesis.” Scientific Reports. 5, 8720.


Miksa, W. (with Mclennan S., William and May) (2011). “Human Computers” http://crgis.ndc.nasa.gov/historic/Human_Computers#references. Accessed 04/04/ 2015.

Minsky, M. (1968). Semantic Information Processing. Cambridge: MIT Press.

Moor, J.H. (ed.) (2003). The Turing test: The Elusive Standard of Artificial intelligence. Dordrecht: Kluwer Academic Publishers.

Mooney, V.J. III (1997). “Searle's Chinese Room and its Aftermath”. Center for the Study of Language and Information. Report No. CSLI-97-202. http://codesign.ece.gatech.edu/papers/papers.html. Accessed 2/09/2016.

Morrow, D.R., and Weston, A. (2011). A Workbook for Arguments: A Complete Course in Critical Thinking. Second Edition. London: Hackett Publishing.

O'Reilly, E.J., and Olaya-Castro, A. (2014). "Non-classicality of the molecular vibrations assisting exciton energy transfer at room temperature." Nature Communications 5: 3012.

Overskeid, G. (2005). “Empirically Understanding Understanding Can Make Problems Go Away: The Case of the Chinese Room.” The Psychological Record: 595-617.

Palmer, S.E. (1977). “Hierarchical Structure in Perceptual Representation”. Cognitive Psychology, Vol. 9: 441-474.

Preston, J., and Bishop, M. (eds.), (2002). Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford: Oxford University Press.

Piccinini, G. (2009). “Computationalism in the Philosophy of Mind”. Philosophy Compass. Blackwell Publishing Ltd.

Robinson, H. (2016). “Dualism”, The Stanford Encyclopaedia of Philosophy (Spring 2016 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/spr2016/entries/dualism/. Accessed 15/02/2016.

Samuels, R. (2010). “Classical computationalism and the many problems of cognitive relevance”. Studies in History and Philosophy of Science. 280-293.

Searle, J.R. (1980). “Minds, Brains and Programs”, in Margaret A. Boden (ed.) (1990), The Philosophy of Artificial Intelligence. Oxford: Oxford University Press: 67-88.

------(1983). Intentionality: An Essay in the Philosophy of Mind. Cambridge: Cambridge University Press.

------(1987). “Turing the Chinese Room”, in T.D. Singh and R. Gomatam (eds.), Synthesis of Science and Religion: Critical Essays and Dialogues: 295-301.

------(1989). “Artificial Intelligence and the Chinese Room: An Exchange [with Elhanan Motzkin]”, New York Review of Books, February 16, 1989. http://www.nybooks.com/articles/1989/02/16/artificial-intelligence-and-the-chinese-room-an-ex/

------(1990). “Is the Brain's Mind a Computer Program?” Scientific American: 26-31.

------(1991). “Consciousness, Unconsciousness, and Intentionality”. Philosophical Issues 1: 45-66.

------(1992). The Rediscovery of the Mind. Cambridge, Mass: MIT Press.

------(1997). The Mystery of Consciousness. New York: A New York Review Book.

------(1999). “The Problem of Consciousness.” http://cogsci.soton.ac.uk/~harnad/Papers/Py104/searle.prob.html. Accessed 20/08/2015.

------(2002). “Twenty-One Years in the Chinese Room”, in J. Preston and M. Bishop (eds.), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford: Oxford University Press: 51-69.

------(2009). “Chinese Room Argument”. Scholarpedia, 4 (8):3100.

Shieber, S.M. (ed.) (2004). The Turing Test: Verbal Behavior as the Hallmark of Intelligence. Cambridge, Mass: MIT Press.

Sloman, A. (1978). The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind. Hassocks: The Harvester Press.

------(1986a). “Reference without Causal Links.” In B. du Boulay and L.J. Steels (eds.), Seventh European Conference on Artificial Intelligence, 369-81. Amsterdam: North-Holland.

Smith, B. (ed.) (2003). John Searle. Cambridge: Cambridge University Press.

Surdeanu, M. (2013). “Overview of the TAC2013 Knowledge Base Population Evaluation: English Slot Filling and Temporal Slot Filling.” Accessed 30/12/2016.

------(2014). “Task Description for English Slot Filling at TAC-KBP 2014”, Version 1.1. http://surdeanu.info/kbp/index.php. Accessed 26/12/2016.

Turing, A. (1950). “Computing Machinery and Intelligence”. Mind, New Series, Vol. 59, No. 236 (Oct. 1950): 433-460.

Vaughn, L. (2006). Writing Philosophy: A Student’s Guide to Writing Philosophy Essays. Oxford: Oxford University Press.

Huemer, W. (2015). “Franz Brentano”, The Stanford Encyclopaedia of Philosophy (Fall 2015 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/fall2015/entries/brentano/. Accessed 01/03/2015.

Walmsley, J. (2012). Mind and Machine. Hampshire: Palgrave Macmillan.

Waskan, J. (2016). “Connectionism”, in The Internet Encyclopaedia of Philosophy, ISSN 2161-0002, http://www.iep.utm.edu/. Accessed 01/07/2016.

Whitmarsh, J., and Govindjee (1999). “The Photosynthetic Process”, in G.S. Singhal, G. Renger, S.K. Sopory, K.D. Irrgang and Govindjee (eds.), Concepts in Photobiology: Photosynthesis and Photomorphogenesis. New Delhi: Narosa Publishers: 11-51.

Whitmer, M.J. (1983). “Intentionality, Artificial Intelligence and the Causal Powers of the Brain”. Auslegung, Vol. X, No. 3: 194-210.
