Smarter Together: Why Artificial Intelligence Needs Human-Centered Design
COMPLIMENTARY ARTICLE REPRINT — ISSUE 22, JANUARY 2018

Smarter together: Why artificial intelligence needs human-centered design
By James Guszcza
ILLUSTRATION BY BARRY DOWNARD

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee, and its network of member firms, each of which is a legally separate and independent entity. Please see http://www.deloitte.com/about for a detailed description of the legal structure of Deloitte Touche Tohmatsu Limited and its member firms. Please see http://www.deloitte.com/us/about for a detailed description of the legal structure of the US member firms of Deloitte Touche Tohmatsu Limited and their respective subsidiaries. Certain services may not be available to attest clients under the rules and regulations of public accounting. For information on the Deloitte US Firms’ privacy practices, see the US Privacy Notice on Deloitte.com. Copyright © 2018. Deloitte Development LLC. All rights reserved.

“Seekers after the glitter of intelligence are misguided in trying to cast it in the base metal of computing.” — Terry Winograd1

Artificial intelligence (AI) has emerged as a signature issue of our time, set to reshape business and society. The excitement is warranted, but so are concerns. At a business level, large “big data” and AI projects often fail to deliver. Many of the culprits are familiar and persistent: forcing technological square pegs into strategic round holes, overestimating the sufficiency of available data or underestimating the difficulty of wrangling it into usable shape, and taking insufficient steps to ensure that algorithmic outputs result in the desired business outcomes. At a societal level, headlines are dominated by the issue of technological unemployment. Yet it is becoming increasingly clear that AI algorithms embedded in ubiquitous digital technology can encode societal biases, spread conspiracies and promulgate fake news, amplify echo chambers of public opinion, hijack our attention, and even impair our mental well-being.2

Effectively addressing such issues requires a realistic conception of AI, which is too often hyped as emerging “artificial minds” on an exponential path to generally out-thinking humans.3 In reality, today’s AI applications result from the same classes of algorithms that have been under development for decades, but implemented on considerably more powerful computers and trained on larger data sets. They are “smart” in narrow senses, not in the general way humans are smart. In functional terms, it is better to view them not as “thinking machines,” but as cognitive prostheses that can help humans think better.4

In other words, AI algorithms are “mind tools,” not artificial minds. This implies that successful applications of AI hinge on more than big data and powerful algorithms. Human-centered design is also crucial. AI applications must reflect realistic conceptions of user needs and human psychology. Paraphrasing the user-centered design pioneer Don Norman, AI needs to “accept human behavior the way it is, not the way we would wish it to be.”5

This essay explores the idea that smart technologies are unlikely to engender smart outcomes unless they are designed to promote smart adoption on the part of human end users. Many of us have experienced the seemingly paradoxical effect of adding a highly intelligent individual to a team, only to witness the team’s effectiveness—its “collective IQ”—diminish. Analogously, “smart” AI technology can inadvertently result in “artificial stupidity” if poorly designed, implemented, or adapted to the human social context. Human, organizational, and societal factors are crucial.

An AI framework

It is common to identify AI with machines that think like humans or simulate aspects of the human brain (for a discussion of these potentially misleading starting points, see the sidebar, “The past and present meanings of ‘AI’,” on page 43). Perhaps even more common is the identification of AI with various machine learning techniques. It is true that machine learning applied to big data enables powerful AI applications ranging from self-driving cars to speech-enabled personal assistants. But not all forms of AI involve machine learning being applied to big data. It is better to start with a functional definition of AI. “Any program can be considered AI if it does something that we would normally think of as intelligent in humans,” writes the computer scientist Kris Hammond. “How the program does it is not the issue, just that it is able to do it at all. That is, it is AI if it is smart, but it doesn’t have to be smart like us.”6

Under this expansive definition, the computer automation of routine, explicitly defined “robotic process” tasks, such as cashing checks and pre-populating HR forms, counts as AI. So does the insightful application of data science products, such as using a predictive decision tree algorithm to triage emergency room patients. In each case, an algorithm performs a task previously done only by humans. Yet it is obvious that neither case involves mimicking human intelligence, nor applying machine learning to massive data sets.

Starting with Hammond’s definition, it is useful to adopt a framework that distinguishes between AI for automation and AI for human augmentation.

AI for automation

AI is now capable of automating tasks associated with both explicit and tacit human knowledge. The former is “textbook” knowledge that can be documented in manuals and rulebooks. It is increasingly practical to capture such knowledge in computer code to achieve robotic process automation (RPA): building software “robots” that perform boring, repetitive, error-prone, or time-consuming tasks, such as processing changes of address, insurance claims, hospital bills, or human resources forms. Because RPA enjoys both low risk and high economic return, it is often a natural starting point for organizations wishing to achieve efficiencies and cost savings through AI. Ideally, it can also free up valuable human time for more complex, meaningful, or customer-facing tasks.

Tacit knowledge might naively seem impervious to AI automation: It is automatic, intuitive “know-how” that is learned by doing, not purely through study or rule-following. Most human knowledge is tacit knowledge: a nurse intuiting that a child has the flu, a firefighter with a gut feel that a burning building is about to collapse, or a data scientist intuiting that a variable reflects a suspicious proxy relationship. Yet the ability of AI applications to automate tasks associated with human tacit knowledge is rapidly progressing. Examples include facial recognition, sensing emotions, driving cars, interpreting spoken language, reading text, writing reports, grading student papers, and even setting people up on dates. In many cases, newer forms of AI can perform such tasks more accurately than humans.

The uncanny quality of such applications makes it tempting to conclude that computers are implementing—or rapidly approaching—a kind of human intelligence in the sense that they “understand” what they are doing. That’s an illusion. Algorithms “demonstrate human-like tacit knowledge” only in the weak sense that they are constructed or trained using data that encodes the tacit knowledge of a large number of humans working behind the scenes. The term “human-in-the-loop machine learning” is often used to connote this process.7 While big data and machine learning enable the creation of algorithms that can capture and transmit meaning, this is very different from understanding or originating meaning.

Given that automation eliminates the need for human involvement, why should autonomous AI systems require human-centered design? There are several reasons:

Goal-relevance. Data science products and AI applications are most valuable when insightfully designed to satisfy the needs of human end users. For example, typing “area of Poland” into the search engine Bing returns the literal answer (120,728 square miles) along with the note: “About equal to the size of Nevada.” The numeric answer is the more accurate, but the intuitive answer will often be more useful.8 This exemplifies the broader point that “optimal” from the perspective of computer algorithms is not necessarily the same as “optimal” from the perspective of end-user goals or psychology.

Handoff. Many AI systems can run on “autopilot” much of the time, but require human intervention in exceptional or ambiguous situations that require common sense or contextual understanding. Human-centered design is needed to ensure that this “handoff” from computer to human happens when it should, and that it goes smoothly when it does happen. Here’s an admittedly low-stakes personal example of how AI can give rise to “artificial stupidity” if the handoff doesn’t go well. I recently hailed a cab for a trip that required only common sense and a tiny amount of local knowledge—driving down a single major boulevard.

example is Tay, a chatbot designed to learn about the world through conversations with its users. The chatbot had to be switched off within 24 hours after pranksters trained it to utter racist, sexist, and fascist statements.10 Other examples of algorithms reflecting and amplifying undesirable societal biases are by now ubiquitous. For such reasons, there is an increasing call for chatbot and search-engine design to optimize not only for speed and algorithmic