The Church-Turing Thesis: Consensus and Opposition

Martin Davis
Mathematics Dept., University of California, Berkeley, CA 94720, USA
[email protected]

Many years ago, I wrote [7]:

    It is truly remarkable (Gödel ... speaks of a kind of miracle) that it has
    proved possible to give a precise mathematical characterization of the
    class of processes that can be carried out by purely mechanical means. It
    is in fact the possibility of such a characterization that underlies the
    ubiquitous applicability of digital computers. In addition it has made it
    possible to prove the algorithmic unsolvability of important problems, has
    provided a key tool in mathematical logic, has made available an array of
    fundamental models in theoretical computer science, and has been the basis
    of a rich new branch of mathematics.

A few years later I wrote [8]:

    The subject ... is Alan Turing's discovery of the universal (or
    all-purpose) digital computer as a mathematical abstraction. ... We will
    try to show how this very abstract work helped to lead Turing and John von
    Neumann to the modern concept of the electronic computer.

In the 1980s when those words were written, the notion that the work by the
logicians Church, Post, and Turing had a significant relationship with the
coming of the modern computer was by no means generally accepted. What was
innovative about the novel vacuum tube computers being built in the late 1940s
was still generally thought to be captured in the phrase "the stored program
concept". It was much easier to think of this revolutionary paradigm shift in
terms of the use of a piece of hardware than to credit Turing's abstract
pencil-and-paper "universal" machines as playing the key role.

However, by the late 1990s a consensus had developed that the Church-Turing
Thesis is indeed the basis of modern computing practice. Statements could be
found characterizing it as a natural law, without proper care to distinguish
between the infinitary nature of the work of the logicians and the necessarily
finite character of physical computers. Even the weekly news magazine Time, in
its celebration of the outstanding thinkers of the twentieth century,
proclaimed in its March 29, 1999 issue:

    ... the fact remains that everyone who taps at a keyboard, opening a
    spreadsheet or a word-processing program, is working on an incarnation of
    a Turing machine.

    Virtually all computers today, from $10 million supercomputers to the tiny
    chips that power cell phones and Furbies, have one thing in common: they
    are all "von Neumann machines," variations on the basic computer
    architecture that John von Neumann, building on the work of Alan Turing,
    laid out in the 1940s.

Despite all of this, computer scientists have had to struggle with the
all-too-evident fact that, from a practical point of view, Turing computability
does not suffice. Von Neumann's awareness from the very beginning of not only
the significance of Turing universality but also of the crucial need for
attention in computer design to limitations of space and time comes out
clearly in the report [3]:

    It is easy to see by formal-logical methods that there exist codes that
    are in abstracto adequate to control and cause the execution of any
    sequence of operations which are individually available in the machine and
    which are, in their entirety, conceivable by the problem planner.
    The really decisive considerations from the present point of view, in
    selecting a code, are of a more practical nature: simplicity of the
    equipment demanded by the code, and the clarity of its application to the
    actually important problems together with the speed of its handling those
    problems.

Steve Cook's ground-breaking work of 1971 establishing the NP-completeness of
the satisfiability problem, and the independent discovery of the same
phenomenon by Leonid Levin, opened a Pandora's box of NP-complete problems for
which no generally feasible algorithms are known, and for which, it is
believed, none exist. With these problems Turing computability doesn't help
because, in each case, the number of steps required by the best algorithms
available grows exponentially with the length of the input, making their use
in practice problematical. How strange that despite this clear evidence that
computability alone does not suffice for practical purposes, a movement has
developed under the banner of "hypercomputation" proposing the practicality of
computing the non-computable. In a related direction, it has been proposed
that in our very skulls resides the ability to transcend the computable. It is
in this context that this talk will survey the history of and evidence for
Turing computability as a theoretical and practical upper limit to what can be
computed.

1 The Birth of Computability Theory

This is a fascinating story of how various researchers approaching from
different directions all arrived at the same destination. Emil Post, working
in isolation and battling his bipolar demons, arrived at his notion of normal
set already in the 1920s. After Alonzo Church's ambitious logical system was
proved inconsistent by his students Kleene and Rosser, he saw how to extract
from it a consistent subsystem, the λ-calculus. From this, Church and Kleene
arrived at their notion of λ-definability, which Church daringly proposed as a
precise equivalent of the intuitive notion of calculability. Kurt Gödel, in
lectures in 1934, suggested that this same intuitive notion would be captured
by permitting functions to be specified by recursive definitions of the most
general sort, and even suggested one way this could be realized. Finally, Alan
Turing in England, knowing none of this, came up with his own formulation of
computability in terms of abstract machines limited to the most elemental
operations but permitted unlimited space. Remarkably, all these notions turned
out to be equivalent.¹

¹ I tell the story in some detail in my [7] and provide additional references.
My account is not entirely fair to Church; for a better account of his
contribution see [21].

2 Computability and Computers

I quote from what I have written elsewhere [9]:

    ... there is no doubt that, from the beginning, the logicians developing
    the theoretical foundations of computing were thinking also in terms of
    physical mechanism.

Thus, as early as 1937, Alonzo Church, reviewing Turing's classic paper,
wrote [4]:

    [Turing] proposes as a criterion that an infinite sequence of digits 0 and
    1 be 'computable' that it shall be possible to devise a computing machine,
    occupying a finite space and with working parts of finite size, which will
    write down the sequence to any desired number of terms if allowed to run
    for a sufficiently long time. As a matter of convenience, certain further
    restrictions are imposed on the character of the machine, but these are of
    such a nature as obviously to cause no loss of generality ...
Turing himself, speaking to the London Mathematical Society in 1947, said [24]:

    Some years ago I was researching what now may be described as an
    investigation of the theoretical possibilities and limitations of digital
    computing machines. I considered a type of machine which had a central
    mechanism, and an infinite memory which was contained on an infinite tape.
    This type of machine seemed to be sufficiently general. One of my
    conclusions was that the idea of 'rule of thumb' process and 'machine
    process' were synonymous.

Referring to the machine he had designed for the British National Physical
Laboratory, Turing went on to say:

    Machines such as the ACE (Automatic Computing Engine) may be regarded as
    practical versions of this same type of machine.

Of course one should not forget that the infinite memory of Turing's model
cannot be realized in the physical world we inhabit. It is certainly
impressive to observe the enormous increase in storage capability in readily
available computers over the years, making more and more of the promise of
universality enjoyed by Turing's abstract devices available to all of us. But
nevertheless it all remains finite. Elsewhere I've emphasized the paradigm
shift in our understanding of computation already implicit in Turing's
theoretical work [11, 12]:

    Before Turing the ... supposition was that ... the three categories,
    machine, program, and data, were entirely separate entities. The machine
    was a physical object ... hardware. The program was the plan for doing a
    computation ... The data was the numerical input. Turing's universal
    machine showed that the distinctness of these three categories is an
    illusion. A Turing machine is initially envisioned as a machine ...,
    hardware. But its code ... functions as a program, detailing the
    instructions to the universal machine ... Finally, the universal machine
    in its step-by-step actions sees the ... machine code as just more data to
    be worked on. This fluidity ... is fundamental to contemporary computer
    practice. A program ... is data to the ... compiler.

One can see this interplay manifested in the recent quite non-theoretical book
[14], for example in pp. 6–11.

3 Trial and Error Computability as Hypercomputation

Consider a computation which produces output from time to time and which is
guaranteed to eventually produce the correct desired output, but with no bound
on the time required for this to occur. However, once the correct output has
been produced, any subsequent output will simply repeat this correct result.
Someone who wishes to know the correct answer would have no way to know at any
given time whether the latest output is the correct output. This situation was
analyzed by E.M. Gold and by Hilary Putnam [15, 19]. If the computation is to
determine whether or not a natural number n given as input belongs to some set
S, then it turns out that the sets for which such "trial and error" computation
is available are exactly those in the Δ⁰₂ class in the arithmetic hierarchy.
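To make the trial-and-error paradigm concrete, here is a minimal sketch; it is
not taken from the sources cited above, and the example set and all names in it
are chosen purely for illustration. The procedure emits a guess at every stage
and is allowed to change its mind; it is guaranteed only that the guesses are
eventually correct, and an observer watching the output stream can never be
certain that the latest guess is final. The illustrative target set, the
numbers whose Collatz trajectory reaches 1, is merely semi-decidable and so
lies within Δ⁰₂; the general Δ⁰₂ case additionally allows finitely many mind
changes in both directions.

    def collatz_step(n: int) -> int:
        # One step of the Collatz map.
        return n // 2 if n % 2 == 0 else 3 * n + 1

    def trial_and_error_guesses(n: int, stages: int):
        """Yield the guess made at each stage t = 0, 1, ..., stages - 1.

        Each guess answers: does the Collatz trajectory of n reach 1?
        The guess may switch from False to True; once True it stays True,
        but nothing in the output marks the switch as final.
        """
        current = n
        reached_one = False
        for _ in range(stages):
            if current == 1:
                reached_one = True
            yield reached_one            # the current, possibly wrong, guess
            if not reached_one:
                current = collatz_step(current)

    if __name__ == "__main__":
        # For n = 27 the guesses stay False for over a hundred stages and then
        # stabilize at True; the stream itself never announces stabilization.
        guesses = list(trial_and_error_guesses(27, 150))
        print(guesses.count(False), "negative guesses, then",
              guesses.count(True), "positive ones")

Running the sketch on n = 27 shows a long run of negative guesses before the
output stabilizes at the correct positive answer, illustrating why someone
reading the outputs can never be sure that the latest one will not be revised.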