The Singularity May Never Be Near
Toby Walsh

Abstract: The technological singularity, often simply called the singularity, is the hypothesis that at some point in the future we will invent machines that can recursively self-improve, and that this will be a tipping point resulting in runaway technological growth. I examine half a dozen arguments against this idea.

There is both much optimism and pessimism around artificial intelligence (AI) today. The optimists, on the one hand, are investing millions of dollars, and in some cases even billions of dollars, into AI. The pessimists, on the other hand, predict that AI will end many things: jobs, warfare, and even the human race. Both the optimists and the pessimists often appeal to the idea of a technological singularity, a point in time where machine intelligence starts to run away and a new, more intelligent "species" starts to inhabit the earth. If the optimists are right, this will be a moment that fundamentally changes our economy and our society. If the pessimists are right, this will be a moment that also fundamentally changes our economy and our society. It is therefore very worthwhile spending some time deciding if either of them might be right.

Copyright © 2017, Association for the Advancement of Artificial Intelligence. All rights reserved. ISSN 0738-4602

The History of the Singularity

The idea of a technological singularity can be traced back to a number of different thinkers. Following John von Neumann's death in 1957, Stanislaw Ulam wrote:

    One conversation [with John von Neumann] centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue. (Ulam 1958)

I. J. Good made a more specific prediction in 1965, calling it an "intelligence explosion" rather than a "singularity":

    Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. (Good 1965)

Many credit the technological singularity to the computer scientist and science fiction author Vernor Vinge, who predicted:

    Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. (Vinge 1993)

More recently, the idea of a technological singularity has been popularized by Ray Kurzweil (2006), as well as others. Based on current trends, Kurzweil predicts the technological singularity will happen around 2045. For the purposes of this article, I suppose that the technological singularity is the point in time at which we build a machine of sufficient intelligence that it is able to redesign itself to improve its intelligence, and at which its intelligence starts to grow exponentially fast, quickly exceeding human intelligence by orders of magnitude.

I start with two mathematical quibbles. The first quibble is that the technological singularity is not a mathematical singularity. The function 1/(1 − t) has a mathematical singularity at t = 1: it demonstrates hyperbolic growth, and as t approaches 1 its value and its derivative grow without bound. Many proponents of a technological singularity argue only for exponential growth. For example, the function 2^t demonstrates exponential growth. Such an exponential function approaches infinity more slowly, and has a finite derivative that is always well defined. The second quibble is that the idea of exponential growth in intelligence depends entirely on the scale used to measure intelligence. If we measure intelligence in logspace, exponential growth is merely linear. I will not tackle head on here what we mean by measuring the intelligence of machines (or of humans). I will simply suppose there is such a property as intelligence, that it can be measured and compared, and that the technological singularity is when this measure increases exponentially fast on an appropriate and reasonable scale.

The possibility of a technological singularity has driven several commentators to issue dire predictions about the possible impact of artificial intelligence on the human race. For instance, in December 2014, Stephen Hawking told the BBC:

    The development of full artificial intelligence could spell the end of the human race.… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.1

Several other well-known figures, including Bill Gates, Elon Musk, and Steve Wozniak, have subsequently issued similar warnings. Nick Bostrom has predicted a technological singularity, and argued that it poses an existential threat to the human race (Bostrom 2014). In this article, I will explore arguments as to why a technological singularity might not be close.

Some Arguments Against the Singularity

The idea of a technological singularity has received more debate outside the mainstream AI community than within it. In part, this may be because many of the proponents of such an event have come from outside this community. The technological singularity has also become associated with some somewhat challenging ideas like life extension and transhumanism. This is unfortunate, as it has distracted debate from a fundamental and important issue: will we be able to develop machines that at some point are able to improve their intelligence exponentially fast, and that quickly far exceed our own human intelligence? This might not seem a particularly wild idea. The field of computing has profited considerably from exponential trends. Moore's law has predicted with reasonable accuracy that the number of transistors on an integrated circuit (and hence the amount of memory in a chip) has doubled every two years since 1975. And Koomey's law has accurately predicted that the number of computations per joule of energy dissipated has doubled every 19 months since the 1950s. Is it unreasonable to suppose AI will also at some point witness exponential growth?

The thesis put forward here is that there are several strong arguments against the possibility of a technological singularity. Let me be precise. I am not predicting that AI will fail to achieve superhuman intelligence. Like many of my colleagues working in AI, I predict we are just 30 or 40 years away from this event. However, I am suggesting that there will not be the runaway exponential growth predicted by some. I will put forward multiple arguments for why a technological singularity is improbable. These are not the only arguments against a technological singularity.
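The two mathematical quibbles above can be made concrete with a short numerical sketch (my illustration, not the article's): 1/(1 − t) blows up at a finite time, 2^t stays finite at every point, and taking logarithms turns exponential growth into a straight line.

```python
import math

def hyperbolic(t: float) -> float:
    """1 / (1 - t): has a true mathematical singularity at t = 1."""
    return 1.0 / (1.0 - t)

def exponential(t: float) -> float:
    """2 ** t: grows quickly, but is finite and smooth for every finite t."""
    return 2.0 ** t

# Hyperbolic growth explodes as t approaches 1 from below ...
for t in (0.9, 0.99, 0.999):
    print(f"t = {t}: 1/(1 - t) = {hyperbolic(t):.0f}")

# ... while 2^t at those same points is still below 2.
print(f"2^0.999 = {exponential(0.999):.3f}")

# Second quibble: measured in logspace, exponential growth is merely
# linear, since log2(2^t) = t.
for t in (10, 20, 40):
    assert math.isclose(math.log2(exponential(t)), t)
```

The contrast is the point of the first quibble: an exponential curve never reaches a point in time "beyond which" anything happens, whereas a hyperbolic curve does.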
We can, for instance, also inherit all the arguments raised against artificial intelligence itself. Hence there are also the nine common objections considered by Alan Turing in his seminal Mind paper (Turing 1963), like machines not being conscious, or not being creative. I focus here, though, on arguments that go to the idea of an exponential runaway in intelligence.

The Fast Thinking Dog Argument

One of the arguments put forward by proponents of the technological singularity is that silicon has a significant speed advantage over our brain's wetware, and this advantage doubles every two years or so according to Moore's law. Unfortunately, speed alone does not bring increased intelligence. To adapt an idea from Vernor Vinge (1993), a faster thinking dog is still unlikely to play chess. Steven Pinker put this argument eloquently:

    There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles — all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. (Pinker 2008)

Intelligence is much more than thinking faster or longer about a problem than someone else.

The Anthropocentric Argument

If there is one thing that we should have learned from the history of science, it is that we are not as special as we would like to believe. Copernicus taught us that the universe did not revolve around the earth. Darwin taught us that we were little different from the apes. And artificial intelligence will likely teach us that human intelligence is itself nothing special. There is no reason, therefore, to suppose that human intelligence is some special tipping point that, once passed, allows for rapid increases in intelligence. Of course, this doesn't preclude there being some level of intelligence that is a tipping point.

One argument put forward by proponents of a technological singularity is that human intelligence is indeed a special point to pass, because we are unique in being able to build artefacts that amplify our intellectual abilities. We are the only creatures on the planet with sufficient intelligence to design new intelligence, and this new intelligence will not be limited by the slow process of reproduction and evolution. However, this sort of argument supposes its conclusion. It assumes that human intelligence is enough to design an artificial intelligence that is sufficiently intelligent to be the starting point for a technological singularity. In other words, it assumes we have enough intelligence to initiate the technological singularity, the very conclusion we are trying to draw. We may or may not have enough intelligence to be able to design such artificial intelligence.
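The doubling claims that recur in this article — Moore's law, Koomey's law, and the silicon speed advantage behind the fast thinking dog argument — all compound in the same way: a quantity that doubles every T years grows by a factor of 2^(t/T) after t years. A minimal sketch of this arithmetic, using the doubling periods quoted in the text as round-number inputs:

```python
def growth_factor(years: float, doubling_period_years: float) -> float:
    """Growth factor after `years`, for a quantity that doubles
    every `doubling_period_years`."""
    return 2.0 ** (years / doubling_period_years)

# Moore's law: transistor counts doubling every 2 years means
# ten doublings in 20 years, a factor of 2^10 = 1024.
print(growth_factor(20, 2))

# Koomey's law: computations per joule doubling every 19 months
# (about 1.58 years) over a decade.
print(growth_factor(10, 19 / 12))
```

Note what this arithmetic does and does not establish: it quantifies how fast the dog thinks, not whether it can play chess.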