The end of humanity: will artificial intelligence free us, enslave us — or exterminate us?

The Berkeley professor Stuart Russell tells Danny Fortson why we are at a dangerous crossroads in our development of AI

Scenes from Stuart Russell’s dystopian film Slaughterbots, in which armed microdrones use facial recognition to identify their targets

Danny Fortson
Saturday October 26 2019, 11.01pm GMT, The Sunday Times

Stuart Russell has a rule. “I won’t do an interview until you agree not to put a Terminator on it,” says the renowned British computer scientist, sitting in a spare room at his home in Berkeley, California. “The media is very fond of putting a Terminator on anything to do with artificial intelligence.”

The request is a tad ironic. Russell, after all, was the man behind Slaughterbots, a dystopian short film he released in 2017 with the Future of Life Institute. It depicts swarms of autonomous mini-drones — small enough to fit in the palm of your hand and armed with a lethal explosive charge — hunting down student protesters, congressmen, anyone really, and exploding in their faces. It wasn’t exactly Arnold Schwarzenegger blowing people away — but he would have been proud.

Autonomous weapons are, Russell says breezily, “much more dangerous than nuclear weapons”. And they are possible today. The Swiss defence department built its very own “slaughterbot” after it saw the film, Russell says, just to see if it could. “The fact that you can launch them by the million, even if there’s only two guys in a truck, that’s a real problem, because it’s a weapon of mass destruction. I think most humans would agree that we shouldn’t make machines that can decide to kill people.”

The 57-year-old from Portsmouth does this a lot: deliver an alarming warning about the existential threat posed by artificial intelligence (AI), but through a placid smile. “We have to face the fact that we are planning to make entities that are far more powerful than humans,” he says. “How do we ensure that they never, ever have power over us?” I almost expect him to offer a cup of tea to wash down the sense of imminent doom.

There is no shortage of AI doom-mongers. Elon Musk claims we are “summoning the demon”. Stephen Hawking famously warned that AI could “spell the end of the human race”. Seemingly every month, a new report predicts mass unemployment and social unrest as machines replace humans. The bad news? Russell, essentially, agrees with all of it.

This is disconcerting because he quite literally wrote the book on the technology. His textbook, Artificial Intelligence: A Modern Approach, is the most widely used in the industry. Since he authored it in 1994 with Peter Norvig, Google’s director of research, it has been used to train millions of students in more than 1,000 universities. Now? The University of California, Berkeley professor is penning a new edition where he admits that they “got it all wrong”. He adds: “We’re sort of in a bus and the bus is going fast, and no one has any plans to stop.” Where’s the bus going? “Off the cliff.”

The good news, though, is that we can turn the bus around. All it entails is a fundamental overhaul, not only of how this frighteningly powerful technology is conceived and engineered, but also of how we, as a corpus of nearly 8bn people, organise, value and educate ourselves. From Russell’s vantage point, we have come to a crossroads. In one direction lies “a golden age of humanity” where we are freed from drudgery by machines. The other direction is, well, darker.
In his new book, called Human Compatible, Russell sums it up with what he calls “the gorilla problem”. Apes, our genetic progenitors, were eventually superseded. And now? “Their species has essentially no future beyond that which we deign to allow,” Russell says. “We do not want to be in a similar situation vis-à-vis super-intelligent machines.” Quite.

Russell came to California in the 1980s to get a PhD after Oxford, and never left. He is an insider but with an outsider’s perspective. Talk to most computer scientists and they scoff at the idea that has him so worried: artificial general intelligence, or AGI.

It is an important distinction. Most of the AI out in the world today involves what is known as “machine learning”. These are algorithms that crunch through inconceivably large volumes of data, draw out patterns, then use those patterns to make predictions. Unlike previous AI booms (and busts), dramatic reductions in the cost of data storage coupled with leaps in processing capability mean that algorithms finally have enough horsepower and raw data to train on. The result is a blossoming of suddenly competent tools that are also sometimes wildly powerful. They are, however, usually designed for very defined, limited tasks.

Take, for example, a contest organised by several American universities last year between five experienced lawyers and an AI designed to read contracts. The goal was to see who was better at picking out the loopholes. It was not a great day for Homo sapiens. The AI was not only more accurate — it found 94% of the offending passages, the humans uncovered 85% — but it was faster. A lot faster. While the lawyers needed an average of 92 minutes to complete the task, the AI did it in 26 seconds.

Ask that algorithm to do literally anything else, however, and it is utterly powerless. Such “tool AI”, Russell says, “couldn’t plan its way out of a paper bag”. This is why the industry, at least outwardly, is rather blasé about the threat, or even the possibility, of general intelligence. A Google executive confided recently that, for years, the search giant’s auto-complete AI would turn every one of his emails to chief executive Sundar Pichai from “Dear Sundar” to “Dear Sugar”. It made for some awkward conversations.

There are still many breakthroughs, Russell admits, that are needed to take AI beyond narrow jobs to create truly super-intelligent machines that can handle any task you throw at them. And the possibility that a technology so powerful would ever come to fruition just seems, well, bonkers. Scott Phoenix, founder of the Silicon Valley AI start-up Vicarious, explains what it might look like when (if?) it arrives: “Imagine a person who has a photographic memory and has read every document that any human has ever written. They can think for 60,000 years for every second that passes. If you have a brain like that, questions that were previously out of reach for our small minds — about the nature of the universe, how to build a fusion reactor, how to build a teleporter — are suddenly in reach.”

Fantastical, you might think. But the same was once said of nuclear fission, Russell points out. The day after Lord Rutherford dismissed it as “moonshine” in 1933, another physicist, Leo Szilard, worked out how to do it. Twelve years later, Hiroshima and Nagasaki were levelled by atom bombs.

So, how long do we have before the age of superintelligent machines? Russell reckons they will arrive “in my kid’s lifetime”. In other words, the next 70 or 80 years.
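To make the “tool AI” idea from the passage above concrete, here is a minimal sketch of how such a single-purpose system is typically built: a text classifier trained on labelled examples of one narrow task and nothing else. The clauses, labels and risk framing below are invented for illustration; this is not the system or the data from the contract-review contest described earlier.

```python
# Illustrative only: a toy "tool AI" in the spirit of the contract-review
# contest. The training phrases and labels are made up for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: a clause is labelled 1 if it hides a "loophole".
clauses = [
    "either party may terminate this agreement without notice",
    "the supplier accepts unlimited liability for all damages",
    "fees are payable within thirty days of invoice",
    "this agreement renews automatically unless cancelled in writing",
    "the parties will meet quarterly to review performance",
    "the licensor may change the terms at any time without consent",
]
labels = [1, 1, 0, 1, 0, 1]

# Fit a single-purpose classifier: contract text in, risk flag out.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

# It can score new clauses in milliseconds, but ask it to do anything outside
# this one task and it has no answer at all - Russell's point about "tool AI".
new_clause = ["the customer waives all rights to terminate for convenience"]
print(model.predict(new_clause), model.predict_proba(new_clause))
```

The point of the sketch is the shape of the system, not its accuracy: everything it "knows" is pattern statistics for one narrowly defined prediction task.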
He’ll be back: Stuart Russell despairs at the overuse of Terminator imagery (David Vintiner for The Sunday Times Magazine/Rex)

That is not an invitation to relax. For one, Russell admits, he is probably wrong. Trying to predict technological leaps is a mug’s game. And neither will it be a “big bang” event, where one day we wake up and HAL 9000 is running the world. Rather, the rise of the machines will happen gradually, through a steady drumbeat of advances.

He points to the example of Yann LeCun, Facebook’s chief AI scientist. In the 1990s, while he was at AT&T Labs, LeCun began developing a system to recognise handwriting. He cracked it. But only after he solved three other “holy grail problems” in the process: speech recognition, object recognition and machine translation.

This is why, Russell argues, AI denialists are themselves in denial. Few people may actually be working on general intelligence per se, but their siloed advances are all dropped into the same soup. “People talk about tool AI as if, ‘Oh, it’s completely safe, there’s nothing harmful about a Go program or something that recognises, you know, tumours in x-rays.’ They say they don’t have any connection with general-purpose AI. That’s completely false,” he says. “Google was founded to achieve human-level AI — the search engine is just how they get funds to finance the long-term goal.”

Which is why we must start working — now — not just on how we overhaul AI, but society itself. We’ll cover the former first.

The way algorithms work today — the way Russell has taught thousands of students to do it — is simple. Specify a clear, limited objective, and the machine figures out the optimal way to achieve it. It turns out this is a very bad way to build AI.

Consider social media. The content-selection algorithms at Facebook, YouTube, Twitter and the rest populate your feed with posts they think you’ll find interesting, but their ultimate goal is something else entirely: revenue maximisation. It turns out the best way to do that is to get you to click on advertisements, and the best way to do that is to disproportionately promote incendiary content that runs alongside them. “These simple machine-learning algorithms don’t understand anything about human psychology, but they are super-powerful because they interact with you for hours and hours a day, they interact with billions of people and, because they have so much contact with you, they can manipulate you,” Russell says.
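Russell’s point about fixed objectives can be made concrete with a toy simulation. The sketch below is not how any real platform’s ranking system works; the post names, click rates and the simple epsilon-greedy learning rule are all assumptions made for illustration. It shows a system given a single objective, expected clicks, steadily steering a feed toward whatever content serves that objective best, with no notion of side effects.

```python
# Illustrative only: the posts, click rates and learning rule are invented.
# The sketch shows the pattern Russell criticises: the machine is handed one
# fixed objective (clicks) and optimises it, blind to everything else.
import random

# Assumed click-through rates; the incendiary items happen to be the most
# clickable, which is the only property the objective can "see".
catalogue = {
    "calm explainer":      0.02,
    "cute animal video":   0.05,
    "outrage-bait thread": 0.12,
    "conspiracy clip":     0.15,
}

def simulate(rounds=10_000, epsilon=0.05, seed=0):
    """Epsilon-greedy content selection that maximises expected clicks."""
    rng = random.Random(seed)
    estimates = {post: 0.0 for post in catalogue}   # learned click estimates
    shown = {post: 0 for post in catalogue}
    for _ in range(rounds):
        if rng.random() < epsilon:                  # occasionally explore
            post = rng.choice(list(catalogue))
        else:                                       # otherwise pick max clicks
            post = max(estimates, key=estimates.get)
        clicked = rng.random() < catalogue[post]    # simulated user (assumed)
        shown[post] += 1
        # Running average of observed clicks for this post.
        estimates[post] += (clicked - estimates[post]) / shown[post]
    return shown

if __name__ == "__main__":
    # The click objective is met, but the feed tends to end up dominated by
    # the most provocative items, because that is all "success" was defined
    # to mean.
    print(simulate())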