Briefings 23: Respect Your Bot
WHEN YOUR PARTNER ISN'T A HUMAN

People aren't the only ones who will need to be included in the workplace of the future. Cyborgs and other forms of man-made mechanical creations will have a place with us.

Story by Lawrence M. Fisher
Illustrations by Russell Cobb

In the project teams so common in the modern enterprise, participants engage with co-workers from diverse cultures, find common cause and pursue shared goals. As robots and artificial intelligence systems enter the workplace, employees may need to learn to collaborate with non-human colleagues.

The common wisdom is that artificial intelligence has overpromised and underdelivered, and it is surely one of the most-hyped technological developments of all time. Your first robotic colleague will probably not resemble the paranoid HAL of Stanley Kubrick's "2001: A Space Odyssey," or the sultry-voiced Samantha of Spike Jonze's 2013 film, "Her," but it could be something more akin to an artificial administrative assistant. That was the goal of CALO, or the "Cognitive Assistant that Learns and Organizes," an SRI International project funded by the Defense Advanced Research Projects Agency. CALO ran from 2003 to 2008 and produced many spinoffs, most notably the Siri intelligent software assistant now included in Apple iOS.

Siri is not especially human-like, but successors currently in development will be much more so. As the interface with these devices moves from command-driven to conversational, our relationships with them will inevitably become more interactive, even intimate. Even if the devices are not truly sentient, or conscious, research shows that we will experience them as if they were. Firms now provide diversity training for working with different races, gender identities and nationalities. Will we need comparable workplace policies for human-robotic interaction?

Illustrator Russell Cobb exhibits and lectures on art and design throughout Europe.
His appreciation for things mechanical (and futuristic) is apparent in these illustrations for Briefings.

Any good article about robots should mention Isaac Asimov's three fundamental Rules of Robotics, and this is as good a place as any: One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm. Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Asimov first postulated the rules in a 1941 short story, "Runaround," which happened to be set in the year 2015, surely a sign that the time has come to at least think about fundamental rules of human-robotic interaction.

What will it be like to work side by side with robots? Actually, many of us already do every day; we're just not always aware of it. When you run the spelling and grammar check on a document you produced with Microsoft Word, an artificial intelligence is doing the job of copy editor. If you drive one of the new Volvos equipped with IntelliSafe, the car will hit the brakes or steer its way out of danger far faster than a human being can respond. You may be holding the wheel, but at critical moments, the robot is driving. Google's self-driving cars take over driving entirely; no need to hold the wheel at all. And if you use one of those clever scheduling apps that pluck dates, times and phone numbers from your e-mails to set up meetings and articulate goals and objectives, congratulations: you already have an artificial intelligence on your project team. It may lack a face, a voice and a personality, at least for now, but it is already performing important tasks that once required a human co-worker.

All of the systems just described are the products of fairly mainstream artificial intelligence programming. The reason they work now when they didn't work in past decades is primarily the explosive growth in processing power available at an ever-lower cost. When your laptop or smartphone has more processing power than the Apollo mission, amazing things become possible. That doesn't give systems feelings, or give them the power to learn the way the human brain does, but it does make them very capable machines.

While conventional AI has been about programming ever-more powerful computers to tackle ever-more complex problems, an emerging technology called artificial general intelligence, or AGI, is about developing systems with less ingrained knowledge, but with the capacity to learn by observation. You may well need to mind your manners around these machines.

"Whether you're polite to your software will actually make a difference in the future," said Pei Wang, a professor of computer science at Temple University. "But that assumes that the system will actually learn. It will need to have general intelligence, the ability to evaluate someone's performance, some form of emotion."

Even without emotion, which even AGI advocates say is some distance away, learning computers may respond more productively when they are treated better. Even if the robot does not care about your rude behavior, your human co-workers will feel uncomfortable, which is not conducive to team solidarity. Owing to something computer scientists call the ELIZA effect, people tend to unconsciously assume computer behaviors are analogous to human behaviors, even with quite unsophisticated systems, like the ATM that says "thank you." Teammates may assume you have hurt your artificial admin's feelings, even if she has none.

CAN MACHINES THINK? CAN THEY FEEL?

Shall we treat the robots in our midst as our masters, our slaves or our partners? It's more a question for Hegel or Nietzsche than the technologists, but it's worth considering. The sociopathic superintelligences of science-fiction doomsday narratives are easy enough to dismiss; your laptop is not going to develop autocratic tendencies anytime soon. We could program robots to be our complacent slaves, but history shows that slavery dehumanizes the master along with the slave, hardly a happy prospect. As artificial intelligence becomes more humanoid, a kind of partnership seems the most likely outcome. Technology always seems to outrace policy, so why not consider diversity programs for non-human colleagues now?

"[...] human diversity will do a disservice to both," said Terry Winograd, a professor of computer science at Stanford University and co-director of the Stanford Human-Computer Interaction Group. "The point of diversity training is learning to treat other people with the same respect as those of your type. I just don't believe that's applicable to computers."

Winograd said he does see the need for rules and procedures for working with artificial intelligences whose outputs have a direct impact on human life, such as a program that interprets financial and personal information to determine who qualifies for a home loan. "How you relate to machines that are making decisions in spheres that have a human outcome is a huge policy problem that needs to be worked on," he said. "The problem is the AI can't tell you what the criteria are. They don't have any introspection at all. They run the numbers, get an answer."

Winograd said that an AI's response to rudeness will simply be the one it is programmed to have, and it has no "feelings" to hurt. Robots are valuable assets, so they will require protection from abuse, but there will be a wide range of human-robotic interactions, and which ones will be considered appropriate will depend on circumstance. He sees a parallel with animal rights, in which some activists see any human use of animals as [...] treated. But neither rises to the level of a human-human interaction, he said.

Yoav Shoham, another Stanford computer scientist, teaches a freshman class called "Can Computers Think? Can They Feel?" At the beginning and end of the course, he polls students on those questions, noting the evolution in their answers. Along the way, they also explore questions regarding computers and free will, creativity, even consciousness, and their responses inevitably shift from predominantly noes to more yesses. Shoham readily concedes that he doesn't know the right answers, but adds that at least he knows that he doesn't know. He says his Socratic goal is to make the students doubt their automatic responses, and to at least start to question some of their biases. But he doesn't see a need to regulate human-robotic relations.

"I don't think that having Asimov rules for ethical treatment of robots is something that's needed now, any more than we need rules for our GPS or our smart watch," Shoham said. "It's well documented that we tend to anthropomorphize objects. I think there's a reason to be polite in communication with software, but not for that reason. I will occasionally cuss at my computer, and my wife is very upset by it. She doesn't like me to use that language, because I'll get used to it, and it will reflect on me."