
J. Intell. Syst. 21 (2012), 325–330
DOI 10.1515/jisys-2012-0017
© de Gruyter 2012

Something Old, Something New, Something Borrowed, Something Blue
Part 1: Alan Turing, Hypercomputation, Adam Smith and Next Generation Intelligent Systems

Mark Dougherty

Abstract. In this article intelligent systems are placed in the context of accelerated Turing machines. Although such machines are not currently a reality, the very real gains in computing power made over previous decades require us to continually re-evaluate the potential of intelligent systems. The economic theories of Adam Smith provide us with a useful insight into this question.

Keywords. Intelligent Systems, Accelerated Turing Machine, Specialization of Labor.

2010 Mathematics Subject Classification. Primary 01-02; secondary 01A60, 01A67.

1 Introduction

The editor-in-chief of this journal has asked me to write a series of shorter articles in which I try to illuminate various challenges in intelligent systems. Given space constraints they cannot be regarded as comprehensive reviews; they are intended to provide an overview and to be a point of departure for discussion and debate. The title of the series implies that I will look at both very old and rather new concepts, and I will not be afraid to borrow from other subjects. Nor will I be afraid to incorporate radical “blue skies” ideas or hypotheses.

By challenges I mean aspects of our subject which cannot easily be tackled by either theoretical means or through empirical enquiry. That may be because the questions raised are not researchable, or because no evidence has yet been observed. Or it may be that work is underway but that no firm conclusions have yet been reached by the research community. Such questions are often deep and intimidating, and this very fact often leads them to be dismissed out of hand. My view is that this is a mistake. We should allow ourselves time to pause and reflect occasionally on the really major challenges we face. As Alan Turing once famously remarked [10]:

    We can only see a short distance ahead, but we can see plenty there that needs to be done.

The trouble is that we never seem to find the time to lift our heads and try to look a little further.

2 The Accelerated Turing Machine

Let me start with some philosophical speculation which dates back to an era predating the development of the digital computer. The formal ideas rest upon a concept known as temporal patterning, whereby the time taken for a machine to perform individual operations does not stay constant over time. Weyl [11] describes such a machine that can perform:

    An infinite sequence of distinct acts of decision within a finite time; say, by supplying the first result after 1/2 minute, the second after another 1/4 minute, the third 1/8 minute later than the second, etc. In this way it would be possible ... to achieve a traversal of all natural numbers and thereby a sure yes-or-no decision regarding any existential question about natural numbers.

The reader will immediately recognize Zeno’s paradox at work here. Bertrand Russell seems to have mentioned a similar concept, explicitly mentioning Zeno, in a lecture held in Boston in 1914. Blake [1] independently wrote about the same idea, which is therefore known as Russell–Blake–Weyl temporal patterning.
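The arithmetic behind Weyl’s schedule is simply a geometric series; stating the sum explicitly makes clear why infinitely many acts of decision fit into a finite time:

\[
\sum_{n=1}^{\infty} \frac{1}{2^{n}} \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; 1 \text{ minute},
\]

so the machine has supplied every one of its infinitely many results by the end of the first minute, even though no individual operation takes zero time.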
We can now understand the conceptual basis of the so-called accelerated Turing machine. The concept was introduced by Copeland in 1998 [2, 3] and explicitly marries temporal patterning with the Turing machine [9]: an accelerated Turing machine is a Turing machine in which each operation takes half the time of the previous one. The existence of such a machine would clearly have major implications for complexity theory and for which classes of problem are computationally feasible.

The emergence of what is now more generally known as hypercomputation (computational models which go beyond the power of the Church–Turing thesis) may seem far-fetched, but there is a lively theoretical debate over the possible existence of such machines [4]. Some of these models touch upon AI paradigms. For example, it has been proved that an analogue recurrent artificial neural network can perform hypercomputation, as can a digital equivalent with infinite-precision weights [7].

3 A Quasi-Accelerated Turing Machine

Putting the existence, or otherwise, of an accelerated Turing machine to one side and looking for a moment at the history of computing, we can make a striking observation. For the last five decades there has been an exponential increase in computing power and a corresponding reduction in the time taken for a computational operation. I am of course referring to Moore’s Law [6], which states that the number of transistors in a microprocessor will double every 18 months. This law, first hypothesized in 1965, is still valid today and seems set to hold for at least a further decade.

Naturally even this astonishing increase in computing power is not the same as a genuine accelerated Turing machine; we would need to be assured that exponential growth could continue indefinitely. Unfortunately issues such as quantum effects and heat dissipation will eventually set a stop (at least for conventional silicon chips), and we are back in the realm of more speculative computing paradigms if an accelerated Turing machine is to become reality.

Yet we have already gone through some 30 doublings. By any stretch of the imagination this is a lot, and it is not unreasonable to claim that we have at least a quasi-accelerated Turing machine in our hands. Yet somehow intelligent systems, and computer systems in general, do not seem to have lived up to the weight of expectation which such a vast increase in power offers. To put it simply, in terms that readers of this journal should empathize with: intelligent systems are still not that intelligent. There are of course many explanations for this, but one issue which I feel is critical requires borrowing some concepts from another discipline, in this case economics.
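To see just how large “30 doublings” is, the following sketch is a minimal back-of-the-envelope check; the `doublings` helper and the 1965–2012 endpoints are my own illustrative assumptions, not a calculation from the article:

```python
# Back-of-the-envelope check on the "30 doublings" figure, assuming a
# constant doubling period of 18 months from Moore's 1965 paper onwards.

def doublings(start_year: int, end_year: int, period_months: int = 18) -> float:
    """Number of doubling periods of length period_months between two years."""
    return (end_year - start_year) * 12 / period_months

n = doublings(1965, 2012)   # doubling periods elapsed by the time of writing
factor = 2 ** n             # implied growth factor in transistor count

print(f"{n:.0f} doublings -> growth factor of about {factor:.2e}")
# prints: 31 doublings -> growth factor of about 2.71e+09
```

Under these assumptions the implied growth is roughly a billion-fold, consistent with the figure of about 30 doublings quoted above.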
4 The Wealth of Nations

In his groundbreaking treatise The Wealth of Nations [8], Adam Smith put forward the concept of specialization of labor. The idea is that in a complex economy nobody can hope to master all possible trades and occupations; it is better for different people to specialize in different jobs, each developing a skill set uniquely suited to their chosen occupation. He further put forward the radical idea that it was worth the cost of educating the working class: the increase in production would more than outweigh the cost. Adam Smith’s ideas are at the very heart of the modern industrial revolution. Production has undergone the same exponential trajectory as computers have under Moore’s Law. Are these two concepts related?

When we put Adam Smith’s arguments into the context of a software system, what can we conclude? That most (if not all) software should be specialized. That it should have the ability to adapt and learn. In short, that intelligent systems should be at the very heart of software engineering. This is good news for proponents of intelligent systems (which I imagine the majority of readers of this journal are), but somehow we are failing in many situations to deliver. How “smart” is a smartphone really? I use one of these devices every day, almost every waking hour, but as yet I have experienced or observed little in the way of adaptation, self-learning and so on. Perhaps new initiatives such as the “Strong AI” project of Kimera Systems [5] will change this situation; only time will tell.

When software is written, much of it is not specialized at all. It is put together largely from recycled or standard library components. One consequence of this is “bloatware”: problems are solved by brute force, and not at all efficiently in terms of either space or time. A good example of this is the near-exponential growth in the size required to store a typical operating system, or indeed to do just about anything. Why does a not very “smart” smartphone come with 32 GB of memory at a time when I can store my data in a cloud service? The facile answer often given to that question is that “memory is cheap these days”. That is true, but so are many other products, yet we see no need to over-provision ourselves with them.

Although one could argue that modern production systems revolve largely around standardization (for example, one brand of sneakers can be produced by the same machines as another brand of sneakers), this is to fall into the trap of confusing production with output. In an ideal world production systems should be agile and adaptable, even if the products produced for the market are relatively standardized.

From the perspective of the human software engineer, society does indeed provide educational opportunities; that is, society is investing in the production system for the writing of software. These opportunities may or may not be free at the point of delivery, but even in fee-paying situations students are willing to pay. Thus presumably they feel that the necessary investment in skills is worth the price. Furthermore, most universities recognize the fact that many skills and much knowledge become outdated in a fast-developing world. Thus there is considerable emphasis on providing an educational experience which gives students a more general skill: the ability to adapt to change and to acquire (and interpret) new information and knowledge as required.