The Path to More General Artificial Intelligence
The Path to More General Artificial Intelligence

by Ted Goertzel
Rutgers University at Camden, NJ
[email protected]

Journal of Experimental & Theoretical Artificial Intelligence, 2014, pp. 1-12. DOI: 10.1080/0952813X.2014.895106.

Abstract

A small-N comparative analysis of six different areas of applied artificial intelligence suggests that the next period of development will require a merging of narrow-AI and strong-AI approaches. This will be necessary as programmers seek to move beyond developing narrowly defined tools to developing software agents capable of acting independently in complex environments. The present stage of artificial intelligence development is propitious for this because of the exponential increases in computer power and in available data streams over the last twenty-five years, and because of better understanding of the complex logic of intelligence. The applied areas chosen for examination were: heart pacemakers, socialist economic planning, computer-based trading, self-driving automobiles, surveillance and sousveillance, and artificial intelligence in medicine.

Keywords

Future of AI, history of AI, cybernetic socialism, AI in medicine, surveillance, self-driving motorcars, computer-based trading

1. Introduction

Two quite different approaches have developed in the history of artificial intelligence. The first, known as "strong-AI" or "Artificial General Intelligence," seeks to engineer human-level general intelligence based on theoretical models (Wikipedia, 2013). The second, sometimes known as "narrow-AI" or "applied AI," develops software to solve limited practical problems. Strong-AI has had a mercurial history, experiencing "several cycles of hype, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later" (Wikipedia, 2013). Narrow or applied AI became dominant in the field because it succeeded in solving useful practical problems and because it was easier to demonstrate in academic papers even without a practical application. Today, the remarkable success of narrow-AI is widely known. A lead article in The New York Times (Markoff, 2012) reported that "using an artificial intelligence technique inspired by theories about how the brain recognizes patterns, technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs."

Public interest in and support for strong-AI have lagged because optimistic predictions often failed and because progress is difficult to measure or demonstrate. This paper argues, however, that the next stage of development of AI, over the next decade and more likely over the next twenty-five years, will be increasingly dependent on contributions from strong-AI. This hypothesis arises from an empirical study of the history of AI in several practical domains using what Abbott (2004) calls the small-N comparative method. This method examines a small number of varied cases in moderate detail, drawing on descriptive case study accounts. The cases are selected to illustrate different processes, risks and benefits. The method draws on the qualitative insights of specialists who have studied each of the cases in depth, and over historically significant periods of time. The small-N comparisons help to identify general patterns and to suggest priorities for further research. Readers who want more depth on each case are encouraged to pursue links to the original case research.
The six cases chosen for examination here are intentionally quite different. The first is the attempt to use what was then known as "cybernetic" technology to create a viable socialist economic system. This took place during the heyday of optimism about strong-AI and failed, in both the Soviet Union and Chile, for technical and sociological reasons. The second case is computer-based financial trading, a decisively capitalist enterprise and one that continues to be actively and profitably pursued although its effectiveness is still controversial. These are followed by three cases where narrow-AI has had impressive success: surveillance for security purposes, self-driving automobiles and heart pacemakers. Finally, we examine the case of artificial intelligence in medicine, a very well-funded and promising enterprise that has encountered unanticipated difficulties.

2. Cybernetic Socialism

Norbert Wiener's classic 1948 book Cybernetics was enthusiastically received by an influential segment of forward-looking Soviet scientists and engineers. Cybernetics as presented by Wiener was an intriguing set of analogies between humans and self-regulating machines. It posited that there were general principles that could be applied to the modeling and control of biological, mechanical and social processes. At first, the Soviet establishment rejected cybernetics as "reactionary pseudo-science," but things loosened up after the death of Stalin in 1953, and cybernetic enthusiasts succeeded in establishing a Council on Cybernetics within the Academy of Sciences. In 1959, its director, the engineer Aksel Ivanovich Berg, made very strong promises for the new field, claiming that cybernetics was nothing less than "the development of methods and tools for controlling the entire national economy, individual technological processes, and various forms of economic activity to ensure the optimal regime of government" (Gerovitch, 2002: 254). In the early 1960s cybernetics (often renamed upravlenie, a Russian word which means "optimization of government") was included in the Communist Party program as "science in the service of communism" (Gerovitch, 2002: 4). This was appealing because, as the Soviet economy became more complex, it became more and more difficult for central planners to manage and control. Soviet philosophers, who had initially opposed cybernetics as anti-Marxist, decided that it was actually firmly rooted in Marxist principles. Politically it was opposed by traditional economists and administrators who feared being displaced, and by reformers who saw the cybernetic movement as an obstacle to the market reforms they believed Russia really needed.

If cybernetics had really had the technical capability to solve the problems of Soviet central planning in the 1960s, its proponents could probably have overcome the political resistance. The movement failed because the computer scientists and economists charged with implementing it concluded that "it was impossible to centralize all economic decision making in Moscow: the mathematical optimization of a large-scale system was simply not feasible" (Gerovitch, 2002: 273). The data needed were not available and could not readily be obtained, and the computers would not have been ready to handle them if they were.

A second attempt to apply cybernetics to building a form of socialism took place in Chile during the Allende government, from 1970 to 1973 (Medina, 2011).
Allende's goal was to build a democratic socialist system as a "third way" between western capitalism and Soviet communism. One of the young visionaries in his administration, Fernando Flores, was familiar with the writings of the British cybernetician Stafford Beer and invited him to consult with the Allende government. Beer was a successful business consultant who specialized in helping companies computerize their information and decision-making systems. He was ideologically sympathetic to Allende's project and responded enthusiastically to Flores' invitation.

Beer was determined not to repeat the mistakes of the Soviet experience. He fully understood the technical limitations in Chile at the time. There were approximately fifty computers in the country as a whole, many of them out of date, compared with approximately 48,000 in the United States at the time. There was no network linking them together. The government did, however, have a number of as yet unused telex machines in a warehouse, so the team set up telex links to key industries so they could send up-to-date information on production, supplies and other variables. Beer then designed a modernistic control room where decision-makers could sit in swivel chairs and push buttons on the armrests to display economic information on charts on the wall. Following the cybernetic vision, this wall was decorated with a chart showing how this network of communications was analogous to that in the human brain. But there was no attempt to replace human brains. The project simply sought to collect time-sensitive information and present it in a way that human managers and workers could use to make better-informed decisions. Beer emphasized that information should be available at all levels of the system, so that workers could participate with managers in making decisions.

This modest task was difficult enough. Managers were busy running their factories and dealing with multiple crises, and weren't always able to get the data typed into the telex promptly. Then it took time for the data to be punched onto cards and fed into the computers. President Allende had many more urgent matters on his agenda, but he gave his imprimatur to the project, which was progressing reasonably well until it ran into a public relations problem. The project was never a secret, but neither was it publicized. Just when the team was getting ready to make a public announcement, the British newspaper The Observer (January 7, 1973) published a story claiming that "the first computer system designed to control an entire economy has been secretly brought