Making AI More Human
MACHINE LEARNING

Artificial intelligence has staged a revival by starting to incorporate what we know about how children learn

By Alison Gopnik
Scientific American, June 2017

Alison Gopnik is a professor of psychology and an affiliate professor of philosophy at the University of California, Berkeley. Her research focuses on how young children learn about the world around them.

IN BRIEF
How do young children know what they know? That question has long preoccupied philosophers and psychologists—and now computer scientists.
Specialists in artificial intelligence are studying the mental reasoning powers of preschoolers to develop ways to teach machines about the world.
Two rival machine-learning strategies—both halting attempts to mimic what children do naturally—have begun to transform AI as a discipline.

I have often wondered how young human beings can possibly learn so much so quickly. Philosophers, going all the way back to Plato, have wondered, too, but they've never found a satisfying answer. My five-year-old grandson, Augie, has learned about plants, animals and clocks, not to mention dinosaurs and spaceships. He also can figure out what other people want and how they think and feel. He can use that knowledge to classify what he sees and hears and make new predictions. He recently proclaimed, for example, that the newly discovered species of titanosaur on display at the American Museum of Natural History in New York City is a plant eater, so that means it really isn't that scary.

Yet all that reaches Augie from his environment is a stream of photons hitting his retina and disturbances of air contacting his eardrums. The neural computer that sits behind his blue eyes manages somehow to start with that limited information from his senses and to end up making predictions about plant-eating titanosaurs. One lingering question is whether electronic computers can do the same.

During the past 15 years or so computer scientists and psychologists have been trying to find an answer. Children acquire a great deal of knowledge with little input from teachers or parents. Despite enormous strides in machine intelligence, even the most powerful computers still cannot learn as well as a five-year-old does.

Figuring out how the child brain actually functions—and then creating a digital version that will work as effectively—will challenge computer scientists for decades to come. But in the meantime, they are beginning to develop artificial intelligence that incorporates some of what we know about how humans learn.

THIS WAY UP

After an initial burst of enthusiasm in the 1950s and 1960s, the quest for AI languished for decades. In the past few years, though, there have been striking advances, especially in the field of machine learning, and AI has become one of the hottest developments in technology. Many utopian or apocalyptic predictions have emerged about what those advances mean. They have, quite literally, been taken to presage either immortality or the end of the world, and a lot has been written about both these possibilities.

I suspect that developments in AI lead to such strong feelings because of our deep-seated fear of the almost human. The idea that creatures might bridge the gap between the human and the artificial has always been deeply disturbing, from the medieval golem to Frankenstein's monster to Ava, the sexy robot fatale in the movie Ex Machina.

But do computers really learn as well as humans? How much of the heated rhetoric points to revolutionary change, and how much is just hype? The details of how computers learn to recognize, say, a cat, a spoken word or a Japanese character can be hard to follow. But on closer inspection, the basic ideas behind machine learning are not as baffling as they first seem.

One approach tries to solve the problem by starting with the stream of photons and air vibrations that Augie, and all of us, receives—and that reaches the computer as pixels of a digital image and sound samples of an audio recording. It then tries to extract a series of patterns in the digital data that can detect and identify whole objects in the surrounding world. This so-called bottom-up approach has roots in the ideas of philosophers such as David Hume and John Stuart Mill and psychologists such as Ivan Pavlov and B. F. Skinner, among others.

In the 1980s scientists figured out a compelling and ingenious way to apply bottom-up methods to let computers hunt for meaningful patterns in data. "Connectionist," or "neural network," systems take inspiration from the way that neurons convert light patterns at your retina into representations of the world around you. A neural network does something similar. It uses interconnected processing elements, akin to biological cells, to transform pixels at one layer of the network into increasingly abstract representations—a nose or an entire face—as data are crunched at progressively higher layers.
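To make that layered computation concrete, here is a minimal sketch in Python. The layer sizes, the random weights and the "A versus not A" labels are invented for illustration; a real connectionist system would learn its weights from many labeled examples rather than use random ones.

```python
# A toy forward pass through a small, layered network: raw pixels are
# transformed, layer by layer, into more abstract summaries and finally
# into a guess about what the image contains. Weights are random here,
# standing in for what training would normally produce.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A toy 8x8 "image" flattened into 64 raw pixel values.
pixels = rng.random(64)

# Layer 1: pixels -> 16 low-level features (edges, strokes).
w1 = rng.normal(scale=0.1, size=(64, 16))
# Layer 2: low-level features -> 4 higher-level parts.
w2 = rng.normal(scale=0.1, size=(16, 4))
# Layer 3: parts -> 2 class scores ("A" vs. "not A").
w3 = rng.normal(scale=0.1, size=(4, 2))

hidden1 = relu(pixels @ w1)   # increasingly abstract...
hidden2 = relu(hidden1 @ w2)  # ...representations
scores = hidden2 @ w3         # the network's final guess

print("class scores:", scores)
```

Each matrix multiplication plays the role of a layer of interconnected units; stacking more of them, and learning the weights from data, is essentially what makes the learning "deep."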
Neural-network ideas have gone through a recent revival because of new techniques called deep learning—technology now being commercialized by Google, Facebook and other tech giants. The ever increasing power of computers—the exponential increase in computing capability that is captured by what is known as Moore's law—also has a part in the new success of these systems. So does the development of enormously large data sets. With better processing capabilities and more data to crunch, connectionist systems can learn far more effectively than we might have once thought.

Over the years the AI community has seesawed between favoring these kinds of bottom-up solutions to machine learning and alternative top-down approaches. Top-down approaches leverage what a system already knows to help it learn something new. Plato, as well as so-called rationalist philosophers such as René Descartes, believed in a top-down approach to learning—and it played a big role in early AI. In the 2000s such methods also experienced their own rebirth in the form of probabilistic, or Bayesian, modeling.

OPPOSING STRATEGIES
Two Paths to AI's Resurgence

Problems the average five-year-old solves readily can stump even the most powerful computers. AI has made a spirited comeback in recent years by teaching computers to learn about the world somewhat like a child does. The machine recognizes the letter "A" either from raw sensory information—a bottom-up approach—or by making a guess from preexisting knowledge—a top-down approach.

Bottom Up (Deep Learning): Examples of the letter A teach a computer to distinguish patterns of light and dark pixels for various versions of the letter. Then, when the machine receives a new input, it assesses whether the pixels match the configuration from the training set, confirming the letter is, in fact, an A. Deep learning is a more complex version of this approach. The system is trained with raw data (that is, pixels), and its outputs include classifying new input examples ("pixel by pixel, this character resembles the training data set; therefore, it is an A") and generating new examples.

Top Down (Bayesian Methods): A single example of the letter A suffices to recognize similar examples when using Bayesian methods. The machine builds a model of the letter from its own internal library of "parts," assembling a figure made up of an acute angle joined by a crossbar, an A that can then be used to identify slightly different versions of the letter or to modify it in various ways. The system is primed with one example of a new concept, enough to support a range of output tasks, including parsing an object into its parts and generating new concepts.

Source: "Human-Level Concept Learning through Probabilistic Program Induction," by Brenden M. Lake, Ruslan Salakhutdinov and Joshua B. Tenenbaum, in Science, Vol. 350; December 11, 2015

Like scientists, top-down systems start out by formulating abstract and wide-ranging hypotheses about the world. The systems then make predictions about what the data should look like if those hypotheses are correct. Also like scientists, the systems then revise their hypotheses, depending on the outcome of those predictions.
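That loop can be written out in a few lines. The sketch below is a toy Bayesian update, with hypotheses, "parts" and probabilities invented for illustration, loosely echoing the letter-A example in the box above; it is not the probabilistic program induction of Lake and his colleagues.

```python
# A toy top-down learner: start with prior beliefs in a few hypotheses,
# let each hypothesis predict what the data should look like, then use
# Bayes' rule to revise the beliefs after seeing the data.
import math

# Prior belief in two hypotheses about a hand-drawn character.
prior = {"it is an A": 0.5, "it is an H": 0.5}

# Each hypothesis predicts how likely certain parts are to appear
# (made-up numbers; parts are treated as independent for simplicity).
likelihood = {
    "it is an A": {"acute angle": 0.9, "crossbar": 0.8},
    "it is an H": {"acute angle": 0.1, "crossbar": 0.9},
}

observed_parts = ["acute angle", "crossbar"]

# Bayes' rule: posterior belief is proportional to the prior times the
# probability each hypothesis assigns to the observed data.
unnormalized = {
    h: prior[h] * math.prod(likelihood[h][p] for p in observed_parts)
    for h in prior
}
total = sum(unnormalized.values())
posterior = {h: v / total for h, v in unnormalized.items()}

print(posterior)  # belief shifts sharply toward "it is an A"
```

Running it moves most of the belief onto "it is an A" after a single observation, which is the flavor of reasoning that top-down systems scale up to far richer hypotheses about the world.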
NIGERIA, VIAGRA AND SPAM

Bottom-up methods are perhaps the most readily understood, so let's consider them first. Imagine that you are trying to get your computer to separate important messages from the spam that arrives in your in-box. You might notice that spam tends to have certain distinguishing characteristics: a long list of recipient addressees, an originating address in Nigeria or Bulgaria, references to $1-million prizes or perhaps mention of Viagra. But perfectly useful messages might look the same. You don't want to miss the announcement that you have earned a promotion or an academic award.

If you compare enough examples of spam against other types of e-mails, you might notice that only the spam tends to have qualities that combine in certain telltale ways—Nigeria, for instance, plus a promise of a $1-million prize together spell trouble.
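That kind of comparison is easy to caricature in code. The sketch below uses a handful of invented messages and hand-picked features, not real data or any particular spam filter, and simply counts which feature combinations turn up in labeled spam but not in legitimate mail.

```python
# A toy bottom-up spam check: count how often pairs of telltale features
# co-occur in labeled spam versus legitimate mail ("ham"), then score a
# new message by how many of its feature pairs look spam-like.
from collections import Counter
from itertools import combinations

labeled = [
    ({"nigeria", "million-dollar prize"}, "spam"),
    ({"nigeria", "viagra"}, "spam"),
    ({"million-dollar prize", "long recipient list"}, "spam"),
    ({"award", "promotion"}, "ham"),
    ({"award", "million-dollar prize"}, "ham"),  # a real prize announcement
    ({"promotion", "long recipient list"}, "ham"),
]

pair_counts = {"spam": Counter(), "ham": Counter()}
for features, label in labeled:
    for pair in combinations(sorted(features), 2):
        pair_counts[label][pair] += 1

def spam_score(features):
    """Fraction of the message's feature pairs seen more often in spam."""
    pairs = list(combinations(sorted(features), 2))
    if not pairs:
        return 0.0
    spammy = sum(
        pair_counts["spam"][p] > pair_counts["ham"][p] for p in pairs
    )
    return spammy / len(pairs)

new_message = {"nigeria", "million-dollar prize", "long recipient list"}
print(spam_score(new_message))  # 0.67: two of its three pairs were seen only in spam
```

A real bottom-up filter would extract the features and their telltale combinations automatically from millions of messages rather than rely on a few hand-chosen tags, but the underlying move is the same: let the statistics of many examples reveal which patterns matter.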