J. Intell. Syst. 21 (2012), 325–330 DOI 10.1515/jisys-2012-0017 © de Gruyter 2012

Something Old, Something New, Something Borrowed, Something Blue. Part 1: Hypercomputation, Adam Smith and Next Generation Intelligent Systems

Mark Dougherty

Abstract. In this article intelligent systems are placed in the context of accelerated Turing machines. Although such machines are not currently a reality, the very real gains in computing power made over previous decades require us to continually reevaluate the potential of intelligent systems. The economic theories of Adam Smith provide us with a useful insight into this question.

Keywords. Intelligent Systems, Accelerated Turing Machine, Specialization of Labor.

2010 Mathematics Subject Classification. Primary 01-02, secondary 01A60, 01A67.

1 Introduction

The editor-in-chief of this journal has asked me to write a series of shorter articles in which I try to illuminate various challenges in intelligent systems. Given space constraints they cannot be regarded as comprehensive reviews; they are intended to provide an overview and to be a point of departure for discussion and debate. The title of the series implies that I will look at both very old and rather new concepts, and I will not be afraid to borrow from other subjects. Nor will I be afraid to incorporate radical “blue skies” ideas or hypotheses. By challenges I mean aspects of our subject which cannot easily be tackled by either theoretical means or through empirical enquiry. That may be because the questions raised are not researchable, or because no evidence has yet been observed. Or it may be that work is underway but that no firm conclusions have yet been reached by the research community. Such questions are often deep and intimidating, and this very fact often leads them to be dismissed out of hand. My view is that this is a mistake. We should allow ourselves time to pause and reflect occasionally on the really major challenges we face. As Alan Turing once famously remarked [10]:

We can only see a short distance ahead, but we can see plenty there that needs to be done. The trouble with that is that we never seem to find the time to lift our heads and try to look a little further.

2 The Accelerated Turing Machine

Let me start with some philosophical speculation dating back to an era which predates the development of the digital computer. The formal ideas rest upon a concept known as temporal patterning, whereby the time taken for a machine to perform individual operations does not stay constant over time. Weyl [11] describes such a machine that can perform:

An infinite sequence of distinct acts of decision within a finite time; say, by supplying the first result after 1/2 minute, the second after another 1/4 minute, the third 1/8 minute later than the second, etc. In this way it would be possible . . . to achieve a traversal of all natural numbers and thereby a sure yes-or-no decision regarding any existential question about natural numbers.

The reader will immediately recognize Zeno’s paradox at work here. Bertrand Russell seems to have mentioned a similar concept, explicitly mentioning Zeno, in a lecture held in Boston in 1914. Blake [1] independently wrote about the same idea, which is therefore known as Russell–Blake–Weyl temporal patterning. We can now understand the conceptual basis of the so-called accelerated Turing machine. This concept, introduced by Copeland in 1998 [2, 3], explicitly marries temporal patterning with the Turing machine [9] to produce the accelerated Turing machine: a Turing machine in which each operation takes half the time of the previous one. The existence of such a machine would clearly have major implications for complexity theory and for which classes of problem are computationally feasible. The emergence of what is now more generally known as hypercomputation (computational models which go beyond the power of the Church–Turing thesis) may seem far-fetched, but there is a lively theoretical debate over the possible existence of such machines [4]. Some of these models touch upon AI paradigms.
For example, it has been proved that an analogue recurrent artificial neural network can perform hypercomputation, as can a digital equivalent with infinite precision of weights [7].
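To make Weyl’s construction concrete, the total time taken is a convergent geometric series (the worked sum below is my own illustration, not taken from the sources cited above):

```latex
\sum_{n=1}^{\infty} \frac{1}{2^{n}}\ \text{minutes}
  \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots
  \;=\; 1\ \text{minute}.
```

Thus infinitely many acts of decision complete within a single minute of external time, even though no individual act takes zero time.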

3 A Quasi-Accelerated Turing Machine

Putting the existence, or otherwise, of an accelerated Turing machine to one side and looking for a moment at the history of computing, we can make a striking observation. For the last five decades there has been an exponential increase in computing power and a corresponding reduction in the time taken for an operation. I am of course referring to Moore’s law [6], which states that the number of transistors in an integrated circuit will double every 18 months. This law, first hypothesized in 1965, is still valid today and seems set to hold for at least a further decade. Naturally even this astonishing increase in computing power is not the same as a genuine accelerated Turing machine; we would need to be assured that exponential growth could continue indefinitely. Unfortunately issues such as quantum effects and heat dissipation will eventually call a halt (at least for conventional silicon chips), and we are back in the realm of more speculative computing paradigms if an accelerated Turing machine is to become reality. Yet we have already gone through some 30 doublings. By any stretch of the imagination this is a lot, and it is not unreasonable to claim that we have at least a quasi-accelerated Turing machine in our hands. Yet somehow intelligent systems, and computer systems in general, do not seem to have lived up to the weight of expectation which such a vast increase in power creates. To put it simply, in terms that readers of this journal should empathize with: intelligent systems are still not that intelligent. There are of course many explanations for this, but one issue which I feel is critical requires borrowing some concepts from another discipline, in this case economics.
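The arithmetic behind the claim of roughly 30 doublings is easy to check. The following sketch is my own illustration (the function name and the fixed 18-month cadence are assumptions, not anything from the sources); it computes the growth factor implied by doubling every 18 months between 1965 and the time of writing:

```python
def moore_growth_factor(start_year: int, end_year: int,
                        doubling_months: int = 18) -> float:
    """Growth factor implied by one doubling every `doubling_months` months."""
    months = (end_year - start_year) * 12
    doublings = months / doubling_months
    return 2.0 ** doublings

# 1965 to 2012 at an 18-month cadence: about 31 doublings,
# a growth factor on the order of 10^9.
doublings = (2012 - 1965) * 12 / 18
print(round(doublings, 1))
print(f"{moore_growth_factor(1965, 2012):.2e}")
```

A factor of around a billion in under five decades is what justifies calling today’s hardware a “quasi-accelerated” machine, even though the cadence assumed here is only a stylized reading of Moore’s law.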

4 The Wealth of Nations

In his groundbreaking treatise The Wealth of Nations [8], Adam Smith put forward the concept of specialization of labor. The idea is that in a complex economy nobody can hope to master all possible trades and occupations. It is better for different people to specialize in different jobs, each developing a skill set uniquely suited to their chosen occupation. He further put forward the radical idea that it was worth the cost of educating the working class – the increase in production would more than outweigh the cost. Adam Smith’s ideas are at the very heart of the modern industrial revolution. Production has undergone the same exponential trajectory as computers have under Moore’s law. Are these two concepts related?

When we put Adam Smith’s arguments into the context of a software system, what can we conclude? That most (if not all) software should be specialized. That it should have the ability to adapt and learn. In short, that intelligent systems should be at the very heart of software engineering. This is good news for proponents of intelligent systems (which I imagine most readers of this journal are), but somehow we are failing in many situations to deliver. How “smart” is a smartphone really? I use one of these devices every day, almost every waking hour, but as yet I have experienced or observed little in the way of adaptation, self-learning and so on. Perhaps new initiatives such as the “Strong AI” project of Kimera Systems [5] will change this situation; only time will tell.

When software is written, much of it is not specialized at all. It is put together largely from recycled or standard library components. One consequence of this is “bloatware” – problems are solved by brute force and not at all efficiently, either in terms of space or time. A good example of this is the near-exponential growth in the space required to store a typical operating system, or indeed to do just about anything.
Why does a not very “smart” smartphone come with 32 GB of memory at a time when I can store my data in a cloud service? The facile answer often given to that question is that “memory is cheap these days”. That is true, but so are many other products, yet we see no need to over-provision ourselves with them. Although one could argue that modern production systems revolve largely around standardization (for example, one brand of sneakers can be produced by the same machines as another brand of sneakers), this is to fall into the trap of confusing production with output. In an ideal world production systems should be agile and adaptable, even if the products produced for the market are relatively standardized.

From the perspective of the human software engineer, society does indeed provide educational opportunities, i.e. society is investing in the production system for the writing of software. These opportunities may or may not be free at the point of delivery, but even in fee-paying situations students are willing to pay. Thus presumably they feel that the necessary investment in skills is worth the price. Furthermore, most universities recognize the fact that many skills and much knowledge become outdated in a fast-developing world. Thus there is considerable emphasis on providing an educational experience which gives students a more general skill – the ability to adapt to change and to acquire (and interpret) new information and knowledge as required.

Intelligent systems offer a way to harness more of what learning and adaptation can provide, provided we are willing to rethink some of our reasons for developing them. For example, there should be more emphasis on producing intelligent system modules which can replace the “dumb” modules of today.
Intelligent systems should be within the grasp of all software developers if they are to have much impact on our daily computing experience and make their contribution to actually making use of the “quasi-accelerated Turing machine” which, as I indicated above, we have available to us. Naturally this implies deep thinking about how many intelligent modules can co-exist as a system, but the word “system” clearly implies a number of actors at work.

5 Conclusion

Haven’t I just sunk my own ship? If all software developers can build intelligent systems, does that not imply less specialization of labor in our profession? On the contrary, the work of intelligent system experts should be concentrated at the next level up. It is our job to build the necessary tools, the necessary production system for AI. Such a vision is not new; it was the main aim of the Japanese 5th Generation Computing Project. That project is seen largely as a failure, but let us reflect on the fact that Adam Smith’s ideas were initially very popular, later derided, but eventually widely accepted. Although the accelerated Turing machine may remain in the realm of speculation, we have a major challenge in front of us even to program the “quasi-accelerated” machines we already have. Intelligent systems can and must be at the heart of future developments. However, my belief is that to succeed we have to invest more in producing intellectual capital for the future and spend less effort on solving problems which can be solved in a short timescale.

Bibliography

[1] R. M. Blake, The paradox of temporal process, J. Philos. 23 (1926), 645–654.
[2] B. J. Copeland, Even Turing machines can compute uncomputable functions, in: Unconventional Models of Computation, Springer, Singapore (1998), 150–164.
[3] B. J. Copeland, Accelerating Turing machines, Minds and Machines 12 (2002), 281–300.
[4] M. Davis, Why there is no such discipline as hypercomputation, Appl. Math. Comput. 178 (2006), 4–7.
[5] Kimera Systems, http://kickstarter.kimerasystems.com
[6] G. E. Moore, Cramming more components onto integrated circuits, Electronics 38 (1965), 114–117.
[7] P. Rodrigues, J. F. Costa and H. T. Siegelmann, Verifying properties of neural networks, in: Proceedings of the 6th International Work-Conference on Artificial and Natural Neural Networks, Connectionist Models of Neurons, Learning Processes and Artificial Intelligence, Part I (2001), 158–165.
[8] A. Smith, An Inquiry into the Nature and Causes of the Wealth of Nations, Edwin Cannan (Ed.), 5th ed., Methuen & Co., London, 1904. First published 1776.

[9] A. M. Turing, On computable numbers, with an application to the Entscheidungsproblem, Proc. London Math. Soc. Ser. 2 42 (1937), 230–265.
[10] A. M. Turing, Computing machinery and intelligence, Mind 59 (1950), 433–460.
[11] H. Weyl, Philosophy of Mathematics and Natural Science, Princeton University Press, Princeton, 1949. Translation of the original German publication of 1927.

Received November 15, 2012.

Author information Mark Dougherty, Department of Computer Engineering, School of Technology and Business Studies, Dalarna University, 79188 Falun, Sweden. E-mail: [email protected]