
WHEN YOUR PARTNER ISN’T A HUMAN

PEOPLE AREN’T THE ONLY ONES WHO WILL NEED TO BE INCLUDED IN THE WORKPLACE OF THE FUTURE. CYBORGS AND OTHER FORMS OF MANMADE MECHANICAL CREATIONS WILL HAVE A PLACE WITH US.

STORY BY LAWRENCE M. FISHER ILLUSTRATIONS BY RUSSELL COBB

IN THE PROJECT TEAMS SO COMMON IN THE MODERN ENTERPRISE, PARTICIPANTS ENGAGE WITH CO-WORKERS FROM DIVERSE CULTURES, FIND COMMON CAUSE AND PURSUE SHARED GOALS. AS ROBOTS AND ARTIFICIAL INTELLIGENCE SYSTEMS ENTER THE WORKPLACE, EMPLOYEES MAY NEED TO LEARN TO COLLABORATE WITH NON-HUMAN COLLEAGUES.

The common wisdom is that artificial intelligence has overpromised and underdelivered, and it is surely one of the most-hyped technological developments of all time. Your first robotic colleague will probably not resemble the paranoid HAL of Stanley Kubrick’s “2001: A Space Odyssey,” or the sultry-voiced Samantha of Spike Jonze’s 2013 film, “Her,” but it could be something more akin to an artificial administrative assistant. That was the goal of CALO, or the “Cognitive Assistant that Learns and Organizes,” an SRI International project funded by the Defense Advanced Research Projects Agency. CALO ran from 2003 to 2008 and produced many spinoffs, most notably the Siri intelligent software assistant now included in Apple iOS.

Siri is not especially human-like, but successors currently in development will be much more so. As the interface with these devices moves from command-driven to conversational, our relationships with them will inevitably become more interactive, even intimate. Even if the devices are not truly sentient, or conscious, research shows that we will experience them as if they were. Firms now provide diversity training for working with different races, gender identities and nationalities. Will we need comparable workplace policies for human-robotic interaction?

Illustrator Russell Cobb exhibits and lectures on art and design throughout Europe. His appreciation for things mechanical (and futuristic) is apparent in these illustrations for Briefings.

Any good article about robots should mention Isaac Asimov’s three fundamental Laws of Robotics, and this is as good a place as any: One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm. Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Asimov first postulated the rules in a 1942 short story, “Runaround,” which happened to be set in the year 2015, surely a sign that the time has come to at least think about fundamental rules of human-robotic interaction.

What will it be like to work side by side with robots? Actually, many of us already do every day; we’re just not always aware of it. When you run the spelling and grammar check on a document you produced with Microsoft Word, an artificial intelligence is doing the job of copy editor. If you drive one of the new Volvos equipped with IntelliSafe, the car will hit the brakes or steer its way out of danger far faster than a human being can respond. You may be holding the wheel, but at critical moments, the robot is driving. Google’s self-driving cars take over driving entirely — no need to hold the wheel at all.

And if you use one of those clever scheduling apps that pluck dates, times and phone numbers from your e-mails to set up meetings and articulate goals and objectives, congratulations, you already have an artificial intelligence on your project team. It may lack a face, a voice and a personality, at least for now, but it is already performing important tasks that once required a human co-worker.

All of the systems just described are the products of fairly mainstream artificial intelligence programming. The reason they work now when they didn’t work in past decades is primarily the explosive growth in processing power available at an ever-lower cost. When your laptop or smartphone has more processing power than the Apollo mission, amazing things become possible. That doesn’t give systems feelings, or give them the power to learn the way the human brain does, but it does make them very capable machines.

While conventional AI has been about programming ever-more powerful computers to tackle ever-more complex problems, an emerging technology called artificial general intelligence, or AGI, is about developing systems with less ingrained knowledge, but with the capacity to learn by observation. You may well need to mind your manners around these machines.

“Whether you’re polite to your software will actually make a difference in the future,” said Pei Wang, a professor of computer science at Temple University. “But that assumes that the system will actually learn. It will need to have general intelligence, the ability to evaluate someone’s performance, some form of emotion.”

Even without emotion, which even AGI advocates say is some distance away, learning computers may respond more productively when they are treated better. Even if the robot does not care about your rude behavior, your human co-workers will feel uncomfortable, which is not conducive to team solidarity. Owing to something computer scientists call the ELIZA effect, people tend to unconsciously assume computer behaviors are analogous to human behaviors, even with quite unsophisticated systems, like the ATM that says “thank you.” Teammates may assume you have hurt your artificial admin’s feelings, even if she has none.

Can Machines Think? Can They Feel?

Shall we treat the robots in our midst as our masters, our slaves or our partners? It’s more a question for Hegel or Nietzsche than the technologists, but it’s worth considering. The sociopathic superintelligences of science-fiction doomsday narratives are easy enough to dismiss; your laptop is not going to develop autocratic tendencies anytime soon. We could program robots to be our complacent slaves, but history shows that slavery dehumanizes the master along with the slave, hardly a happy prospect. As artificial intelligence becomes more humanoid, a kind of partnership seems the most likely outcome. Technology always seems to outrace policy, so why not consider diversity programs for non-human colleagues now?

Not so fast, say some AI experts. “Of course, people will experience robots as in some ways human, but I feel that applying concepts from human diversity will do a disservice to both,” said Terry Winograd, a professor of computer science at Stanford University and co-director of the Stanford Human-Computer Interaction Group. “The point of diversity training is learning to treat other people with the same respect as those of your type. I just don’t believe that’s applicable to computers.”

Winograd said he does see the need for rules and procedures for working with artificial intelligences whose outputs have a direct impact on human life, such as a program that interprets financial and personal information to determine who qualifies for a home loan. “How you relate to machines that are making decisions in spheres that have a human outcome is a huge policy problem that needs to be worked on,” he said. “The problem is the AI can’t tell you what the criteria are. They don’t have any introspection at all. They run the numbers, get an answer.”

Winograd said that an AI’s response to rudeness will simply be the one it is programmed to have, and it has no “feelings” to hurt. Robots are valuable assets, so they will require protection from abuse, but there will be a wide range of human-robotic interactions, and which ones will be considered appropriate will depend on circumstance. He sees a parallel with animal rights, in which some activists see any human use of animals as abusive, while other interest groups, such as laboratory scientists, apply something like a cost/benefit analysis to how non-human animals are treated. But neither rises to the level of a human-human interaction, he said.

Yoav Shoham, another Stanford computer scientist, teaches a freshman class called “Can Computers Think? Can They Feel?” At the beginning and end of the course, he polls students on those questions, noting the evolution in their answers. Along the way, they also explore questions regarding computers and free will, creativity, even consciousness, and their responses inevitably shift from predominantly noes to more yesses. Shoham readily concedes that he doesn’t know the right answers, but adds that at least he knows that he doesn’t know. He says his Socratic goal is to make the students doubt their automatic responses, and to at least start to question some of their biases. But he doesn’t see a need to regulate human-robotic relations.

“I don’t think that having Asimov rules for ethical treatment of robots is something that’s needed now, any more than we need rules for our GPS or our smart watch,” Shoham said. “It’s well documented that we tend to anthropomorphize objects. I think there’s a reason to be polite in communication with software, but not for that reason. I will occasionally cuss at my computer, and my wife is very upset by it. She doesn’t like me to use that language, because I’ll get used to it, and it will reflect on me. What if my kids hear me? We have social norms to use language in a certain way, and breaking it in one context will have spillover effects in different ways.”

The Robot Told Me You Were Coming

William Gibson, the great science-fiction novelist, once told The Economist, “The future is already here—it’s just not evenly distributed.” Gibson has a knack for apt aphorisms, and that one hits the mark in Redmond, Wash., just outside Seattle. If you go there to visit Eric Horvitz, director of Microsoft Research, you will be greeted by an affable robot, which gives directions and uses casual language, like “No problem.” Outside Horvitz’s office you will meet Monica, his virtual admin, an attractive redheaded avatar with a British accent. “I was expecting you,” she says. “The robot told me you were coming.” She might say that Horvitz is not in now, but she can schedule a meeting.

While you won’t mistake the greeter or Monica for a human, they are personal and personable to a degree not commonly seen in robotics. After all, most robots in the modern enterprise are faceless mechanical muscle, performing one rote task day and night. They may be ruthlessly efficient, but there is not even a suggestion of personality or consciousness. The greeter and Monica are intentionally human-ish, providing a vision of a near future populated in part by well-informed, if not actually smart, artificial co-workers.

Horvitz has captured many hours of video showing that people usually speak to Monica in a polite way, even apologizing for misunderstood words and saying “thank you, nice to meet you,” when they leave her. He says he thanks her, too, without thinking about it.

“We show the system working and the courtesy people have to it, then I walk up and the system recognizes me,” Horvitz said. “The system always smiles when it sees me, and I tell folks that I enjoy the fact that it smiles only at me. These natural courtesies you extend to a system are pretty interesting. At the highest level, if we have systems that are anthropomorphic, like the kind, attractive British assistant at my door, and this system is working with multiple people, there are subtleties and nuances that make it an appropriate social actor in multiparty situations. The courtesy with which you treat the agent is the same you apply to other people.”

While Monica cannot “think,” and she really doesn’t “know” anything, the system has been programmed with detailed data about Horvitz’s schedule and priorities and has adequate analytical capability to make informed decisions about whom to grant time with him, how soon and how much. Add to that a set of responsive facial expressions, natural language and instant interactive response, and it is easy to believe there is an intelligence at work. Horvitz and his team are creating a code base—software—that enables many forms of complex, layered interaction between machines and humans. The system has situational awareness, meaning that it can take into account the physical space, people’s comings and goings, their gestures and facial expressions, and the give and take of conversation between individuals. Horvitz said the goal is to build systems that can coordinate and collaborate with people in a fluid, natural manner. “I believe that systems will get so good, when you call up to work with a human entity on the phone, instead of someone saying to you ‘this call may be recorded for quality assurance purposes,’ it will say, ‘I have to by law tell you that I am not a person.’ When will that happen, when will that be important, is an interesting question to ask.”

Microsoft Research is not a place for dreamers. The intent is to create technology for future products, and elements of Horvitz’s work can already be encountered in Cortana, Microsoft’s new virtual assistant for smartphones, which is named for the curvaceous AI heroine of the video game series Halo. Cortana is not as “nice” or knowledgeable as Monica, but she can set reminders, recognize a natural voice without the user having to input a predefined series of commands, and answer questions using information from Bing, Microsoft’s search engine. That sounds a lot like Siri, and it is, but at least Microsoft’s engineers have given Cortana a sense of humor. The question, “Who’s your daddy?” gets this response: “Technically speaking, that’d be Bill Gates. No big deal.”

No Known Commercial Application, Yet…

While a visit to Redmond provides a vision of the near future, a journey to Reykjavík, Iceland, offers a glimpse of what’s coming in a bit more distant tomorrow. A robotic agent, built by an international team led by researchers at Reykjavik University, is pushing the boundaries of artificial intelligence by automatically learning socio-communicative skills. The recently completed project, dubbed HUMANOBS, is not programmed in the conventional sense, but learns by observing and imitating humans in social situations.

“We essentially ditched all of engineering and computer science methodology wholesale, and approached it more from psychology and biology,” said Kristin Thórisson, founding director of the Icelandic Institute for Intelligent Machines. “The starting point for those domains is really nature rather than mathematics. We set out a bunch of goals for the project that we thought we would achieve some of; we achieved all of them. The goal was to come up with an independent general learner that could be programmed by very high level goals.”

In an ominous development for a still-working journalist, the HUMANOBS system’s first achievement was to learn how to conduct an interview, simply by watching 20 hours of two humans in a mock-TV interview. Thórisson said that after just two or three minutes of observation, the system starts to understand what’s going on, how to structure and conduct such an interview, and has generalized some of the main principles of human communication to a point that it can be asked to take over the interview, either in the role of the interviewer or interviewee, and will continue to interact with the other person.

Thórisson said current workplace policies would suffice for interaction with learning systems, at least in the near term. “We have the etiquette for how we talk to our co-workers, and there’s a lot of legal precedent,” he said. “This technology is going to stir up that pot a little bit, but I think it’s going to be very similar. If you don’t want your co-workers to know something, don’t tell your digital assistant. But if you think a bit further into the future, when the machines become harder to distinguish from humans, when you’re at the point where you would feel a deep sense of loss if it got erased, like when you lose a pet or a loved one, then you might see something very different. But by then we’ll see so many different things that I don’t think this will be our main concern.”

Machines of Loving Grace

Prognostications of emotional robots make some people profoundly uncomfortable, and it’s not hard to see why. The science-fiction treatment of robots, which is the only one most of us know, has reliably veered between super-intelligent machines that will enslave humanity and seductive droids that will offer a pleasant but empty alternative to human intimacy. In her book “Alone Together,” MIT psychologist Sherry Turkle frets about “sociable robots, which promise relationships where we will be in control, even if that means not being in control at all.”

Will we have to worry about sexual harassment of AIs in the workplace? Perhaps. The author was astonished to be hit on while playing World of Warcraft using a female avatar. When the would-be suitor’s advances were rebuffed, he—presumably it was a he—became abusive. Keep in mind that World of Warcraft avatars are two-dimensional and can only speak in text. An attractive artificial administrative assistant is bound to receive a certain amount of amorous attention. Is that good or bad? Who knows?

But best to be ready for it, sooner rather than later. “As these systems become conversational, our relationships with them will inevitably trend toward intimacy,” said John Markoff, author of “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots,” which will be published in August. People will rapidly accept closer relationships with robots, he said. “Today we are right where society was when ATMs were first introduced, when people refused to use them because they preferred to interact with a human teller. Almost overnight that preference reversed.”

Pei Wang, the Temple University researcher, is developing systems similar to HUMANOBS, and he believes such machines will learn to differentiate between more and less pleasant human interactions. An AGI, or artificial general intelligence, he said, differs from an AI in that it initially knows nothing, but like a human baby, it is constantly learning. Like the baby, it is naïve, but it rapidly begins to evaluate the reliability of the many sources bombarding it with information. Those sources it deems more reliable will have greater influence, and in time, get better responses.

“In AGI, more and more people believe that emotion is a necessary aspect of high-level intelligence,” Wang said. “It’s nothing fancy. It will have a different attitude to other people or systems because of their relationship to it. If someone it likes makes a request, they will get more attention than others. I don’t believe future AI will have emotions exactly like us, love and hate. But the basic motivation behind the emotion will be very simple: if you are polite, it will be polite. If you are mean, it will be mean.”

When might we expect such systems? “If you force me to make a number, I will say something like 10 years,” said Wang. “In the beginning, of course, it will be very simple. Five years is not enough, but I don’t think we need 20. Because the basic principle is not magic. It’s already understandable.”

So if you’re one of those people tempted to say “you’re welcome” when the ATM says “thank you,” don’t worry. Your manners are just a little ahead of your time.