
The Constructability of Artificial Intelligence (as defined by the Turing Test)

Bruce Edmonds
Centre for Policy Modelling, Manchester Metropolitan University
http://www.cpm.mmu.ac.uk/~bruce

Abstract

The Turing Test, as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity could be wholly designed in an ‘off-line’ mode; rather, a considerable period of training in situ would be required. The argument that, since we can pass the TT and our cognitive processes might be implemented as a TM, a TM that could pass the TT could in theory be built is attacked on the grounds that not all TMs are constructable in a planned way. This observation points towards the importance of developmental processes that include random elements (e.g. evolution), but in these cases it becomes problematic to call the result artificial.

Keywords: Turing Test, Artificial Intelligence, Constructability, Evolution, Society, Culture, Computability, Symbol Grounding

1. Dynamic aspects of the Turing Test

The elegance of the Turing Test comes from the fact that it places no requirement on the mechanisms needed to implement intelligence, only on the ability to fulfil a role. In the language of biology, Turing specified the niche that intelligence must be able to occupy rather than the anatomy of the organism. The role that Turing chose was a social one – whether humans could relate to the candidate in a way sufficiently similar to how they relate to a human intelligence that they could mistake the two. What is unclear from Turing’s 1950 paper is the length of time that was to be given to the test. It is clearly easier to fool people if you only have to interact with them in a single period of interaction.
For example, it might be possible to trick someone into thinking one was an expert on chess if one only met them once at a party, but it is far harder to maintain the pretence if one has to interact with the same person day after day. It is something in the longer-term development of the interaction between people that indicates their mental capabilities more reliably than a single period of interaction. The deeper testing of those abilities comes from the development of the interaction, resulting from the new questions that arise from testing the previous responses against one’s interaction with the rest of the world. The longer the period of interaction lasts and the greater the variety of contexts it can be judged against, the harder the test. To continue the party analogy: having talked about chess, one’s attention might well be triggered by a chess article in the next day’s newspaper which, in turn, might lead to more questioning of one’s acquaintance.

The ability of entities to participate in a cognitive ‘arms race’, where two or more entities try to ‘out-think’ each other, seems to be an important part of intelligence. If we set a trap for a certain animal in exactly the same place and in the same manner day after day, and that animal keeps getting caught in it, then this can be taken as evidence of a lack of intelligence. On the other hand, if one has to keep innovating one’s traps and trapping techniques in order to catch the animal, then one would usually attribute to it some intelligence (e.g. a low cunning).

For the above reasons I will adopt a reading of the Turing Test such that a candidate must pass muster over a reasonable period of time, punctuated by interaction with the rest of the world. To make this interpretation clear I will call this the “long-term Turing Test” (LTTT). The reason for doing this is merely to emphasise the interactive and developmental social aspects that are present in the test.
I am emphasising that the TT, as presented in Turing’s paper, is not merely a task that is widely accepted as requiring intelligence, so that a successful performance by an entity can cut short philosophical debate as to its adequacy. Rather, it requires the candidate entity to participate in the reflective and developmental aspects of human social intelligence, so that an imputation of its intelligence mirrors our imputation of each other’s intelligence. That the LTTT is a very difficult test to pass is obvious (we might ourselves fail it during periods of illness or distraction), but the source of its difficulty is not so obvious. In addition to the difficulty of implementing problem-solving, inductive, deductive and linguistic abilities, one also has to impart to a candidate a lot of background and contextual information about being human, including: a credible past history, social conventions, a believable culture and even commonality in the architecture of the self. A lot of this information is not deducible from general principles but is specific to our species and our societies.

I wish to argue that it is far from certain that an artificial intelligence (at least as validated by the LTTT) could be deliberately constructed by us as the result of an intended plan. There are two main arguments against this position that I wish to deal with. Firstly, there is the contention that a strong interpretation of the Church-Turing Hypothesis (CTH) as applied to physical processes would imply that it is theoretically possible that we could be implemented as a Turing Machine (TM), and hence could be imitated sufficiently well to pass the TT. I will deal with this in section 2. Secondly, there is the suggestion that we could implement a TM with basic learning processes and let it learn all the rest of the required knowledge and abilities. I will argue that such an entity would no longer be artificial in the section after (section 3).
I will then conclude, in section 4, with a plea to reconsider the social roots of intelligence.

2. The Constructability of TMs

Many others have argued against the validity of the CTH when interpreted onto physical processes. I will not do this – my position is that there are reasons to suppose that any attempt to disprove the physical CTH is doomed (Edmonds, 1996). What I will do is argue against the inevitability of being able to construct arbitrary TMs in a deliberate manner. To be precise, what I claim is that, whatever our procedure of TM construction is, there will be some TMs that we can’t construct or, alternatively, that any effective procedure for TM construction will be incomplete.

The argument to show this is quite simple; it derives from the fact that the definition of a TM is not constructive – it is enough that a TM could exist, there is no requirement that it be constructable. This can be demonstrated by considering a version of Turing’s ‘halting problem’ (Turing, 1936). In this new version the general problem is parameterised by a number, n, to give the limited halting problem: the problem of deciding whether a TM of length[1] less than n, on an input of length less than n, will terminate (call the TM that decides this TM(n)). The definition of the limited halting problem ensures that for any particular n it is fully decidable (since it is a finite function {1, …, n} × {1, …, n} → {0, 1}, which could be implemented as a simple look-up table). However, there is no general and effective method of finding the TM(n) that corresponds to a given n. Thus whatever method (even with clever recursion, meta-level processing, thousands of special cases, combinations of different techniques, etc.) we have for constructing TMs from specifications, there will be an n for which we cannot construct TM(n), even though TM(n) is itself computable.

[1] This ‘length’ is the base-2 logarithm of the TM index in a suitable enumeration of machines.
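The finiteness point above can be made concrete with a small sketch (mine, not from the paper, and with hypothetical names): once the answers are known, TM(n) is nothing more than a finite look-up table. The `halts` argument below stands in for exactly the knowledge that no general algorithm can supply for every n.

```python
def make_limited_halting_table(n, halts):
    """Build the look-up table for TM(n): for every machine index m and
    input x below the bound, record whether m halts on x.

    `halts` is a placeholder for the (non-effective) knowledge needed to
    fill the table. The table always *exists* for each fixed n, making
    TM(n) trivially computable; the paper's point is that no effective
    procedure can produce the correct table for every n."""
    table = {}
    for m in range(1, 2 ** n):        # machines of 'length' < n
        for x in range(1, 2 ** n):    # inputs of 'length' < n
            table[(m, x)] = halts(m, x)
    return table


def tm_n(table, m, x):
    """Once the finite table exists, deciding limited halting is a lookup."""
    return table[(m, x)]
```

The gap the argument exploits sits entirely in obtaining a correct `halts` for arbitrary n, not in running the resulting table.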
If this were not the case, we would be able to use this method to solve the full halting problem: take the maximum of the lengths of the TM and its input, find the corresponding TM(n), and then run it for the answer. A more complete formal proof may be found in the appendix.

What this shows is that any deterministic method of program construction will have some limitations. What it does not rule out is that some method in combination with input from a random ‘oracle’ might succeed where the deterministic method failed. The above arguments then no longer hold: one can easily construct a program which randomly chooses a TM out of all the possibilities, with a probability decreasing exponentially in its length (using some suitable encoding into, say, binary), and this program could pick any TM. What one has lost in this transition is, of course, the assurance that the resulting TM accords with one’s desire (WYGIWYS – what you get is what you specified). When one introduces random elements into the construction process one (almost always) has to check that the results conform to one’s specification. However, the TT (even the LTTT) is well suited to this purpose, because it is a post-hoc test: it specifies nothing about the construction process. One can therefore imagine fixing some of the structure of an entity by design but developing the rest in situ as the result of learning or evolutionary processes, with feedback in terms of the level of success at the test. Such a methodology points more towards the constructivist approaches of Drescher (1991), Riegler (1992) and Vaario (1994) than towards more traditional ‘foundationalist’ approaches in AI.
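The random-choice construction mentioned above can be sketched as follows (an illustration of mine, under the assumption that finite binary strings encode TM indices, with the encoding itself left abstract as in the paper): stop with probability 1/2 at each step, otherwise emit another random bit. Every finite string, and hence every TM, then has positive probability, falling off exponentially in length.

```python
import random


def sample_tm_encoding(rng=random):
    """Sample a binary encoding of a TM so that every finite string --
    and hence every TM under the assumed encoding -- has positive
    probability, decreasing exponentially with the encoding's length.

    A string of length L is produced with probability 2**-(2*L + 1):
    the loop continues L times (factor 2**-L per continuation, 2**-1
    to stop) and each emitted bit is uniform (factor 2**-L)."""
    bits = []
    while rng.random() < 0.5:   # continue with probability 1/2
        bits.append(rng.choice("01"))
    return "".join(bits)
```

As the paper notes, such a sampler can in principle produce any TM, but offers no assurance that what it produces matches any specification; that check must be made post hoc, which is where a test like the TT fits.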