An Interview with John Randolph Lucas by Barbara Giolito
Etica & Politica / Ethics & Politics, 2003, 1
http://www.units.it/~dipfilo/etica_e_politica/2003_3/16_monographica.htm

Barbara Giolito: What are the philosophical premises that played a significant role at the origin of your argument? In particular, what weight did the Liar Paradox have?

John Randolph Lucas: I formulated an argument somewhat analogous to the Liar Paradox when I was arguing against another boy at school. He was putting forward an extreme form of materialism, and I argued that his action in arguing for it showed that he believed in the power of reason, and belied his contention that only material causes were effective.

B. G.: Since it is not possible to recognize the truth of Gödelian sentences, because it is not possible to prove the consistency of a consistent system, is it correct, in your opinion, to maintain that we cannot say the difference between the human mind and a computer lies in our ability to "see" a truth that computers cannot see, but at most that we can suppose this truth?

J. R. L.: But it is possible to recognise the truth of Gödelian sentences. We can and do recognise it. All that Gödel's Second Theorem shows is that we cannot give a "formal" proof of it. I discuss at length reasons for holding our minds to be basically consistent. So it is not correct to maintain that we cannot say that the difference between the human mind and a computer is in our ability to "see" a truth that computers cannot see, but, at the most, that we can suppose this truth.

B. G.: The hypothesis of human consistency might be no more likely than the idea of using faculties such as creativity and fantasy as points of difference between human beings and machines. What would you reply to such an argument? In your opinion, does your argument have a different force compared with arguments that appeal to human properties such as creativity and fantasy?

J. R. L.: I think that creativity etc. are important differences between human minds and machines. But this point is not conceded by hard-line mechanists. My argument is aimed at them. It does not adduce all the most important differences between human minds and machines, but concentrates on one incontrovertible one, which even a mechanist cannot push aside.

B. G.: If, for every machine M, there can exist a stronger machine able to recognize the Gödelian sentence of M, could the human mind not be a machine able to recognize the Gödelian sentences of weaker machines but not its own Gödelian sentence, which would be provable by a stronger machine?

J. R. L.: A human mind can in principle recognise its own Gödelian sentence.

B. G.: Given your view that your Gödelian argument is not a direct proof that the mind is not a machine but a dialectical argument, a schema of disproof for any particular version of mechanism, what do you think about attempts to use Gödel's theorem as a proof against Artificial Intelligence (for instance, the one made by Professor Roger Penrose)?

J. R. L.: Penrose's argument is very similar to mine. In Turn over the Page I wrote: "Roger Penrose uses not Gödel's theorem itself but one of its corollaries, Turing's theorem, which he applies to the whole world-wide activity of all mathematicians put together, and claims that their creative activity cannot be completely accounted for by any algorithm, any set of rigid rules that a Turing machine could be programmed to follow.
I used Gödel's theorem itself, and considered only individuals, reasonably numerate (able to follow and understand Gödel's theorem) but not professional mathematicians. I did not give a direct argument, but rather a schema of disproof, whereby any claim that a particular individual could be adequately represented by a Turing machine could be refuted. My version was, designedly, much less formal than the others, partly because I was addressing a not-very-numerate audience, but chiefly because I was not giving a direct disproof, but rather a schema which needed to be adapted to refute the particular case being propounded by the other side."

B. G.: What do you think about the hopes, placed by Professor Roger Penrose and by connectionists, of saving the A.I. project by means of quantum or connectionist computers? Is it possible to dismiss these questions by saying that these kinds of computer have nothing to do with mechanism because they are dynamical systems? In any case, do you think that these projects can be useful, from a practical point of view, in studying the human mind?

J. R. L.: They may be, in so far as they are not mechanist.

B. G.: What do you think about the idea that not the whole mind but the greater part of it is a machine?

J. R. L.: Some parts may be, but it is difficult to divide the mind into parts.

B. G.: Given the Mechanists' view that only formal proofs are valid and your view that your argument is not a proof but a schema of refutation, could Mechanists not reply to your argument by maintaining that it is too idealised?

J. R. L.: It is not that idealised. If the mechanist puts forward a specific claim that a mind is a particular machine, I can show him to be wrong.

B. G.: Gödel himself maintained, in the 25th Josiah Willard Gibbs Lecture (26th December 1951, Providence), that the incompleteness theorem on its own, without other philosophical premises, could not lead to the impossibility of producing a machine able to prove the mathematical theorems provable by human minds; what do you think about this?

J. R. L.: In Turn over the Page, again, I wrote: "Gödel argues for a disjunction: an Either/Or, with the strong suggestion that the second disjunct is untenable, and hence by Modus Tollendo Ponens that the first disjunct must be true. So the following disjunctive conclusion is inevitable: Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable diophantine problems of the type specified […]. It is clear that Gödel thought the second disjunct false, so that by Modus Tollendo Ponens he was implicitly denying that any Turing machine could emulate the powers of the human mind."

Etica & Politica / Ethics & Politics, 2003, 1
http://www.units.it/dipfilo/etic_e_politica/2003_1/2_monographica.htm

Minds, Machines and Gödel (*)

J.R. Lucas
Fellow of Merton College, Oxford
Fellow of the British Academy

Gödel's theorem seems to me to prove that Mechanism is false, that is, that minds cannot be explained as machines. So also has it seemed to many other people: almost every mathematical logician I have put the matter to has confessed to similar thoughts, but has felt reluctant to commit himself definitely until he could see the whole argument set out, with all objections fully stated and properly met. (1) This I attempt to do.
Gödel's theorem states that in any consistent system which is strong enough to produce simple arithmetic there are formulae which cannot be proved-in-the-system, but which we can see to be true. Essentially, we consider the formula which says, in effect, "This formula is unprovable-in-the-system". If this formula were provable-in-the-system, we should have a contradiction: for if it were provable-in-the-system, then it would not be unprovable-in-the-system, so that "This formula is unprovable-in-the-system" would be false: equally, if it were provable-in-the-system, then it would not be false, but would be true, since in any consistent system nothing false can be proved-in-the-system, but only truths. So the formula "This formula is unprovable-in-the-system" is not provable-in-the-system, but unprovable-in-the-system. Further, if the formula "This formula is unprovable-in-the-system" is unprovable-in-the-system, then it is true that that formula is unprovable-in-the-system, that is, "This formula is unprovable-in-the-system" is true.

The foregoing argument is very fiddling, and difficult to grasp fully: it is helpful to put the argument the other way round, consider the possibility that "This formula is unprovable-in-the-system" might be false, show that that is impossible, and thus that the formula is true; whence it follows that it is unprovable. Even so, the argument remains persistently unconvincing: we feel that there must be a catch in it somewhere. The whole labour of Gödel's theorem is to show that there is no catch anywhere, and that the result can be established by the most rigorous deduction; it holds for all formal systems which are (i) consistent, (ii) adequate for simple arithmetic---i.e., contain the natural numbers and the operations of addition and multiplication---and it shows that they are incomplete---i.e., contain unprovable, though perfectly meaningful, formulae, some of which, moreover, we, standing outside the system, can see to be true.

Gödel's theorem must apply to cybernetical machines, because it is of the essence of being a machine, that it should be a concrete instantiation of a formal system. It follows that given any machine which is consistent and capable of doing simple arithmetic, there is a formula which it is incapable of producing as being true---i.e., the formula is unprovable-in-the-system---but which we can see to be true.
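The argument of the preceding paragraphs can be set out schematically. What follows is an editorial sketch in modern notation, not part of Lucas's text: S stands for any formal system adequate for simple arithmetic, Prov_S for its provability predicate, and G_S for the Gödelian formula obtained by diagonalisation so that it "says of itself" that it is unprovable-in-the-system. Like Lucas's informal presentation, the sketch assumes that S proves only truths (soundness), which is stronger than the bare consistency that Gödel's theorem strictly requires.

% Editorial sketch, not part of Lucas's text; S, Prov_S and G_S are notation introduced here.
% Assumption (following the informal presentation above): S proves only truths.
\begin{align*}
\text{(fixed point)}\quad & G_S \;\leftrightarrow\; \neg\mathrm{Prov}_S(\ulcorner G_S \urcorner)\\
\text{(suppose provable)}\quad & S \vdash G_S \;\Rightarrow\; \mathrm{Prov}_S(\ulcorner G_S \urcorner)\ \text{is true} \;\Rightarrow\; G_S\ \text{is false},\\
& \text{contradicting the assumption that } S \text{ proves only truths;}\\
\text{(hence)}\quad & S \nvdash G_S,\ \text{i.e.}\ \neg\mathrm{Prov}_S(\ulcorner G_S \urcorner)\ \text{is true, i.e.}\ G_S\ \text{is true though unprovable-in-}S.
\end{align*}

Applied to a machine, S is the formal system of which the machine is a concrete instantiation: the machine cannot produce G_S as true, while we, standing outside the system, can see that it is true.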