COMPUTATIONAL MODELS OF HUMAN INTELLIGENCE
CMHI Report Number 1
15 December 2018

The Genesis Enterprise: Taking Artificial Intelligence to another Level via a Computational Account of Human Story Understanding

Patrick Henry Winston and Dylan Holmes
Computer Science and Artificial Intelligence Laboratory
Center for Brains, Minds, and Machines

Abstract

We propose to develop computational accounts of human intelligence and to take intelligent systems to another level using those computational accounts. To develop computational accounts of human intelligence, we believe we must develop biologically plausible models of human story understanding, and then use those models to implement story-understanding systems that embody computational imperatives. We illustrate our approach by describing the development of the Genesis Story Understanding System and by explaining how Genesis goes about understanding short stories, of up to 100 sentences, expressed in English. The stories include, for example, summaries of plays, fairy tales, international conflicts, and Native American creation myths. Genesis answers questions, interprets with controllable allegiances and cultural biases, notes personality traits, anticipates trouble, measures conceptual similarity, aligns stories, reasons analogically, summarizes, tells persuasively, composes new stories, and performs story-grounded hypothetical reasoning. We explain how we ensure that work on Genesis is scientifically grounded; we identify representative questions to be answered by our Brain and Cognitive Science colleagues; and we note why story understanding has much to offer not only to Artificial Intelligence but also to fields such as business, economics, education, humanities, law, neuroscience, medicine, and politics.

Keywords: computational models of human intelligence; story understanding; computational imperatives; inference reflexes; concept recognition; Genesis story-understanding system.
Copyright © 2018 Patrick Henry Winston and Dylan Holmes—All Rights Reserved
1 Vision

Most of today's AI research focuses on statistical mechanisms associated with the field of Machine Learning. Those statistical mechanisms exhibit various kinds of perceptual intelligence, often impressively, but shed little light on aspects of intelligence that are uniquely human. We believe that tomorrow's AI will focus on an understanding of our uniquely human intelligence emerging from discoveries on par with the discoveries of Copernicus about our universe, Darwin about our evolution, and Watson and Crick about our biology. Those cognitive mechanisms will take to another level applications aimed at reasoning, planning, control, and cooperation.

Tomorrow's AI applications will astonish the world because they will think and explain themselves, just as we humans think and explain.

To develop an account of our uniquely human intelligence, we believe we must answer two key questions: first, what computational competences are uniquely human; and second, how do the uniquely human competences support and benefit from the computational competences we share with other animals.

Our answer to the uniquely-human question is that we became the symbolic species, and that becoming symbolic made it possible to become the story-understanding species. Our answer to the support-and-benefit question is that our symbolic competence, and the story-understanding competence that it enables, could not have evolved without myriad elements already in place.

Our position on the uniquely-human question—that we are symbolic story understanders—takes us out of the mainstream, even among those who study story understanding. As explained in section 7.2, we believe what should be studied is not how problem solving enables story understanding, but rather how story understanding provides the substrate for problem solving and everything enabled by problem solving.
As explained in section 6.2, we believe that no neural net can have humanlike intelligence without the equivalent of a symbolic abstraction layer.

Because our position is controversial, we devote the rest of this section to explaining in more detail how we decided to focus our research on story understanding. We start by explaining what we mean when we say we are symbolic, how being symbolic enables story understanding, how we approach our study of story understanding, and how developing an account of human story understanding causes us to focus on behaviors of interest, rather than on statistical measures appropriate to application evaluation. Our work on the support-and-benefit question constitutes another story that is not yet ready to be told.

1.1 Merge is the sine qua non of being symbolic

We begin by noting recent work by Robert Berwick and Noam Chomsky. In Why Only Us (2016), they argue that only humans have what they call merge, an operation that combines two expressions to make a larger expression without disturbing the two merged expressions. Berwick and Chomsky emphasize that having merge is an incremental evolutionary step, one that requires only the completion of an anatomical loop that is almost complete in other primates. While completing that loop is not a big change anatomically, we hypothesize that it enables a giant increase in intelligence, because merge gives us, and only us, an inner language with which we build complex, highly nested symbolic descriptions of classes, properties, relations, actions, and events.

When we write that we are symbolic, we mean that we have a merge-enabled inner language. Using our inner language, we can record, for example, that a hawk is a kind of bird, that hawks are fast, that a particular hawk is above a field, that the hawk is hunting, that a squirrel appears, and that John thinks the hawk will try to catch the squirrel.
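The essential character of merge, as described above, can be caricatured computationally as a non-destructive pairing operation: each application builds a larger expression while leaving its operands intact, and repeated application yields unboundedly deep nesting. The sketch below is our own illustration, under that reading; the function names and the tuple encoding are ours, not Berwick and Chomsky's formalism and not Genesis code.

```python
def merge(x, y):
    """Combine two expressions into a larger expression without
    disturbing either original. Tuples are immutable, so the merged
    operands are shared, never modified."""
    return (x, y)

# Build a nested symbolic description from atomic symbols,
# roughly: "John thinks the hawk will try to catch the squirrel."
catch = merge("catch", "squirrel")
attempt = merge("try", catch)
belief = merge("John-thinks", merge("hawk", attempt))

def depth(expr):
    """Nesting depth of an expression; atomic symbols have depth 0."""
    if not isinstance(expr, tuple):
        return 0
    return 1 + max(depth(e) for e in expr)

# The originals remain undisturbed after further merging...
assert catch == ("catch", "squirrel")
# ...while the composite grows arbitrarily deep.
print(depth(belief))  # prints 4
```

The point of the toy encoding is only that a single combining operation, applied recursively, suffices to produce the complex, highly nested descriptions the text describes.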
Other animals seem to have internal representations of some aspects of the outside world, but they seem incapable of constructing complex, highly nested symbolic descriptions. Recent reexamination of work with chimpanzees, for example, shows that chimpanzees do not have humanlike compositional abilities. Charles Yang, in a seminal study of child and chimpanzee corpora, noted that young children generate novel combinations of words very freely, but Nim Chimpsky, the famous chimpanzee who was exposed to American Sign Language, never provided evidence, via signing, that suggested he had a merge-enabled compositional capability (2013). Evidently, chimpanzees have some ability to understand the names of things and to memorize sign sequences, but their signing gives no indication that they have a merge-enabled inner language of complex, highly nested symbolic descriptions.

1.2 Being symbolic is the sine qua non of story understanding

We believe that our merge-enabled inner language, which seems to have emerged only about 80,000 years ago (Tattersall, 1998, 2010, 2012), made possible another distinguishing competence: we connect the complex, highly nested symbolic descriptions with various sorts of constraints, including, for example, causal, means-ends, enablement, and time constraints. With such constraints, we form even more complex and highly nested symbolic descriptions.

An inner story: A collection of complex, highly nested symbolic descriptions of properties, relations, actions, and events, usefully connected with constraints such as causal, enablement, and time constraints.

Note that we exclude what others would include. We have no doubt that rats and other animals remember useful sequences, and we have no objection to calling those sequences