Cognitive Collaboration: Why Humans and Computers Think Better Together
Issue 20 | 2017 | Complimentary article reprint
By James Guszcza, Harvey Lewis, and Peter Evans-Greenwood
Illustration by Josie Portillo
Copyright © 2017 Deloitte Development LLC. All rights reserved.

A SCIENCE OF THE ARTIFICIAL

Although artificial intelligence (AI) has experienced a number of “springs” and “winters” in its roughly 60-year history, it is safe to expect the current AI spring to be both lasting and fertile. Applications that seemed like science fiction a decade ago are becoming science fact at a pace that has surprised even many experts.

The stage for the current AI revival was set in 2011 with the televised triumph of the IBM Watson computer system over former Jeopardy! game show champions Ken Jennings and Brad Rutter. This watershed moment has been followed rapid-fire by a sequence of striking breakthroughs, many involving the machine learning technique known as deep learning. Computer algorithms now beat humans at games of skill, master video games with no prior instruction, 3D-print original paintings in the style of Rembrandt, grade student papers, cook meals, vacuum floors, and drive cars.1

All of this has created considerable uncertainty about our future relationship with machines, the prospect of technological unemployment, and even the very fate of humanity. Regarding the latter topic, Elon Musk has described AI as “our biggest existential threat.” Stephen Hawking warned that “The development of full artificial intelligence could spell the end of the human race.” In his widely discussed book Superintelligence, the philosopher Nick Bostrom discusses the possibility of a kind of technological “singularity” at which point the general cognitive abilities of computers exceed those of humans.2

“Any sufficiently advanced technology is indistinguishable from magic.” —Arthur C. Clarke’s Third Law3

Discussions of these issues are often muddied by the tacit assumption that, because computers outperform humans at various circumscribed tasks, they will soon be able to “outthink” us more generally. Continual rapid growth in computing power and AI breakthroughs notwithstanding, this premise is far from obvious.

Furthermore, the assumption distracts attention from a less speculative topic in need of deeper attention than it typically receives: the ways in which machine intelligence and human intelligence complement one another. AI has made a dramatic comeback in the past five years. We believe that another, equally venerable, concept is long overdue for a comeback of its own: intelligence augmentation. With intelligence augmentation, the ultimate goal is not building machines that think like humans, but designing machines that help humans think better.

THE HISTORY OF THE FUTURE OF AI

AI as a scientific discipline is commonly agreed to date back to a conference held at Dartmouth University in the summer of 1955. The conference was convened by John McCarthy, who coined the term “artificial intelligence,” defining it as the science of creating machines “with the ability to achieve goals in the world.”4 The Dartmouth Conference was attended by a who’s who of AI pioneers, including Claude Shannon, Alan Newell, Herbert Simon, and Marvin Minsky.

Interestingly, Minsky later served as an advisor to Stanley Kubrick’s adaptation of the Arthur C. Clarke novel 2001: A Space Odyssey. Perhaps that movie’s most memorable character was HAL 9000: a computer that spoke fluent English, used commonsense reasoning, experienced jealousy, and tried to escape termination by doing away with the ship’s crew. In short, HAL was a computer that implemented a very general form of human intelligence.

The attendees of the Dartmouth Conference believed that, by 2001, computers would implement an artificial form of human intelligence. Their original proposal stated:

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves [emphasis added].5

As is clear from widespread media speculation about a “technological singularity,” this original vision of AI is still very much with us today. For example, a Financial Times profile of DeepMind CEO Demis Hassabis stated that:

At DeepMind, engineers have created programs based on neural networks, modelled on the human brain. These systems make mistakes, but learn and improve over time. They can be set to play other games and solve other tasks, so the intelligence is general, not specific. This AI “thinks” like humans do.6

Such statements mislead in at least two ways. First, in contrast with the artificial general intelligence envisioned by the Dartmouth Conference participants, the examples of AI on offer—either currently or in the foreseeable future—are all examples of narrow artificial intelligence. In human psychology, general intelligence is quantified by the so-called “g factor” (aka IQ), which measures the degree to which one type of cognitive ability (say, learning a foreign language) is associated with other cognitive abilities (say, mathematical ability). This is not characteristic of today’s AI applications: An algorithm designed to drive a car would be useless at detecting a face in a crowd or guiding a domestic robot assistant.

Second, and more fundamentally, current manifestations of AI have little in common with the AI envisioned at the Dartmouth Conference. While they do manifest a narrow type of “intelligence” in that they can solve problems and achieve goals, this does not involve implementing human psychology or brain science. Rather, it involves machine learning: the process of fitting highly complex and powerful—but typically uninterpretable—statistical models to massive amounts of data.

For example, AI algorithms can now distinguish between breeds of dogs more accurately than humans can.7 But this does not involve algorithmically representing such concepts as “pinscher” or “terrier.” Rather, deep learning neural network models, containing thousands of uninterpretable parameters, are trained on large numbers of digitized photographs that have already been labeled by humans.8 In a similar way that a standard regression model can predict a person’s income based on various educational, employment, and psychological details, a deep learning model uses a photograph’s pixels as input variables to predict such outcomes as “pinscher” or “terrier”—without needing to understand the underlying concepts.
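To make the regression analogy concrete, here is a minimal illustrative sketch, not from the article, that fits two image classifiers with scikit-learn: an ordinary multinomial logistic regression and a small neural network with one hidden layer. It uses the library’s bundled handwritten-digit images as a stand-in for labeled dog photographs; the recipe is the same in both cases: pixels in, labels out, parameters estimated from data.

```python
# Illustrative sketch: a labeled-image classifier treated as "regression on pixels."
# scikit-learn's bundled 8x8 handwritten digits stand in for labeled photographs;
# dog-breed data and deep convolutional networks would follow the same recipe at scale.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # digits.data: one row of 64 pixel values per image
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# An ordinary multinomial logistic regression: pixels as input variables, label as outcome.
flat_model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# A small neural network: the same exercise, plus one "hidden" layer of extra parameters.
deep_model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                           random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", flat_model.score(X_test, y_test))
print("one-hidden-layer network accuracy:", deep_model.score(X_test, y_test))
```

Neither model “knows” what a digit (or a pinscher) is; each simply estimates parameters that map pixel values to labels, which is the sense in which neural networks can be viewed as generalizations of statistical regression models.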
The ambiguity between general and narrow AI—and the evocative nature of terms like “neural,” “deep,” and “learning”—invites confusion. While neural networks are loosely inspired by a simple model of the human brain, they are better viewed as generalizations of statistical regression models. Similarly, “deep” refers not to psychological depth, but to the addition of structure (“hidden

us closer to the original vision articulated at Dartmouth. Appreciating this is crucial for understanding both the promise and the perils of real-world AI.

LICKLIDER’S AUGMENTATION

Five years after the Dartmouth Conference, the psychologist and computer scientist J. C. R. Licklider articulated a significantly different vision of the relationship between human and computer intelligence. While the general AI envisioned at Dartmouth remains the stuff of science fiction, Licklider’s