Evaluating Commonsense Reasoning in Neural Machine Translation

16 pages · PDF, 1020 KB

The Box is in the Pen: Evaluating Commonsense Reasoning in Neural Machine Translation

Jie He†∗, Tao Wang‡∗, Deyi Xiong†, and Qun Liu§
† College of Intelligence and Computing, Tianjin University, Tianjin, China
‡ School of Computer Science and Technology, Soochow University, Suzhou, China
§ Huawei Noah's Ark Lab, Hong Kong, China
∗ Equal contributions.

Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3662–3672, November 16–20, 2020. © 2020 Association for Computational Linguistics.

Abstract

Does neural machine translation yield translations that are congenial with common sense? In this paper, we present a test suite to evaluate the commonsense reasoning capability of neural machine translation. The test suite consists of three test sets, covering lexical and contextless/contextual syntactic ambiguity that requires commonsense knowledge to resolve. We manually create 1,200 triples, each of which contains a source sentence and two contrastive translations, involving 7 different common sense types. Language models pretrained on large-scale corpora, such as BERT and GPT-2, achieve a commonsense reasoning accuracy of lower than 72% on target translations of this test suite. We conduct extensive experiments on the test suite to evaluate commonsense reasoning in neural machine translation and investigate factors that have an impact on this capability. Our experiments and analyses demonstrate that neural machine translation performs poorly on commonsense reasoning of the three ambiguity types, in terms of both reasoning accuracy (≤ 60.1%) and reasoning consistency (≤ 31%). We will release our test suite as a machine translation commonsense reasoning testbed to promote future work in this direction.

1 Introduction

Sixty years ago, the pioneering machine translation researcher and linguist Bar-Hillel published his well-known argument on the non-feasibility of general-purpose fully automatic high-quality machine translation (FAHQT), due to the inevitable requirement of world knowledge for inferring correct translations of ambiguous words or linguistic structures (Bar-Hillel, 1960a). The example that Bar-Hillel uses as evidence for the need of commonsense knowledge in machine translation is "The box is in the pen", where machine translation is expected to perform reasoning on the relative sizes of "box" and "pen". Bar-Hillel also doubts that a machine, even equipped with extra-linguistic knowledge, would be able to reason with such knowledge spontaneously as human translators do (Bar-Hillel, 1960a; Macklovitch, 1995).

Modern natural language processing (NLP) has made tremendous progress, not only in building abundant resources to develop linguistic insights, but also in plenty of methodological practices. On the one hand, machine translation has been substantially advanced with large-scale parallel data and statistical models. Recent results even suggest that the quality of machine-generated translations is approaching that of professional human translators (Wu et al., 2016; Hassan et al., 2018). On the other hand, a wide variety of efforts have been conducted to examine the commonsense reasoning capability of neural models in natural language understanding, to establish commonsense reasoning challenges, or to enhance neural models in commonsense reasoning (Zhang et al., 2018; Talmor et al., 2018; Huang et al., 2019; Sap et al., 2019b).

Comparing Bar-Hillel's doubts with this recent progress on machine translation and commonsense reasoning, it is natural to ask: have we solved the machine translation impasse related to commonsense reasoning? In particular, are current neural machine translation models able to learn common sense, and if so, how much do they learn? Does neural machine translation acquire sufficient commonsense knowledge, and strong enough commonsense reasoning ability, to generate human-level high-quality translations? Methodological discussion of the feasibility of FAHQT given the recent progress is far beyond the scope of this work. Instead, we focus on empirically analyzing the capability of state-of-the-art neural machine translation models in using extra-linguistic commonsense knowledge to resolve ambiguity at different linguistic levels and select correct translations after disambiguation.

In order to achieve this goal, we manually build a machine translation commonsense reasoning test suite on Chinese-to-English translation with three types of commonsense-related ambiguities: lexical ambiguity, and contextless and contextual syntactic ambiguity (see Section 3.1 for more details). Examples are shown in Figure 1. With this test suite, we thoroughly evaluate the commonsense reasoning ability of state-of-the-art neural machine translation models, e.g., LSTM- and Transformer-based NMT (Bahdanau et al., 2015; Vaswani et al., 2017). We also analyze the commonsense reasoning capability with respect to commonsense knowledge types, sentence length, reasoning consistency, and the size of training data.

Figure 1: Examples of lexical ambiguity (1), contextless syntactic ambiguity (2), and contextual syntactic ambiguity (3). For each example, the first English translation is correct and the second is incorrect (bold vs. underlined in the original figure).
(1) 这个 人 戴 的 表 走 了 3 分钟 。 The watch worn by this person went for 3 minutes. / The watch worn by this person walked for 3 minutes.
(2) 吃 了 游客 的 鳄鱼 。 The crocodile who ate the tourist. / Ate the tourist's crocodile.
(3) 当 地震 袭击 中国 时 ， 援助 的 是 中国 。 When the earthquake hit China, China received aid. / When the earthquake hit China, China provided aid.

To the best of our knowledge, this is the first work to understand and measure the commonsense reasoning capability of neural machine translation. The contributions of this paper can be summarized as follows:

• We build a test suite to examine the ability of neural machine translation in commonsense reasoning, which provides a benchmark testbed for tracking progress in this direction. (The test suite will be publicly available at https://github.com/tjunlp-lab/CommonMT.)
• Based on our experiments and analyses on evaluating commonsense reasoning in NMT, we find that: 1) commonsense reasoning related to lexical ambiguity and contextual syntactic ambiguity is more difficult than that related to contextless syntactic ambiguity; and 2) although the commonsense reasoning accuracy is higher than 50%, the reasoning consistency rate is far lower than 50% (random guess).

2 Related Work

We briefly review recent efforts related to commonsense reasoning in NLP, and refer readers to Storks et al. (2019) for a thorough survey of this area.

Commonsense Datasets. According to Gunning (2018), commonsense knowledge normally consists of a general theory of how the physical world works and a basic understanding of human motives and behaviors. In recent years, a wide variety of datasets on these two kinds of commonsense knowledge have been proposed. Sap et al. (2019b) introduce Social IQA, containing 38k multiple-choice questions for probing commonsense reasoning about emotional and social situations in people's daily life. Similarly, Event2Mind and ATOMIC (Rashkin et al., 2018; Sap et al., 2019a) focus on inferred knowledge in if-then form to reason about people's daily-life behavior. Among datasets on physical common sense, PIQA (Bisk et al., 2020) contains 21K QA pairs on commonsense phenomena in the physical world. SWAG and HellaSwag (Zellers et al., 2018, 2019) are datasets on commonsense NLI, where materials from video subtitles and wikiHow articles are used to construct cloze tests. Bhagavatula et al. (2019) propose a dataset for abductive reasoning about events. The well-known Winograd Schema Challenge (WSC) test set (Levesque et al., 2012; Sakaguchi et al., 2020) focuses on commonsense problems posed as coreference resolution. Different from these monolingual datasets, we provide a bilingual commonsense test suite for machine translation.

Commonsense Reasoning in NLP. In addition to commonsense datasets, commonsense knowledge has recently been explored in a range of NLP tasks. Just to name a few, Trinh and Le (2018), He et al. (2019), and Klein and Nabi (2019) use language models trained on huge text corpora to perform inference on the WSC dataset. Ding et al. (2019) use commonsense knowledge in ATOMIC (Sap et al., 2019a) and Event2Mind (Rashkin et al., 2018) for downstream tasks such as script event prediction. Bi et al. (2019) exploit external commonsense knowledge from ConceptNet (Speer et al., 2016) in machine reading comprehension.

Commonsense Reasoning Evaluation. With pre-trained language models such as BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019) being widely used in various NLP tasks, studies have been performed to examine the commonsense reasoning capability of pre-trained neural language models. Wang et al. (2019) and Zhou et al. (2020) propose to measure the success rate of pre-trained language models in commonsense inference by calculating LM probabilities over pairs of sentences that differ only in commonsense concepts. Feldman et al. (2019) further explore unsupervised methods to generate commonsense knowledge using the world knowledge of pre-trained language models. Our commonsense reasoning evaluation resonates with these evaluation efforts.

… or sentence reasonability judgment (i.e., "He put a turkey into the fridge" vs. "He put an elephant into the fridge") (Wang et al., 2019), where commonsense reasoning normally happens in one language, commonsense reasoning in NMT can be done either in the encoding of the source language (i.e., encoding reasonable source representations) or in the decoding of the target language (i.e., producing reasonable target outputs). As it is difficult to detect whether reasonable senses are identified and encoded in the encoder, we check target outputs from the decoder to test the commonsense reasoning capability of NMT. This is the first rule that we follow in designing the test suite. In the second rule for building the test suite, we manually create source sentences with ambiguity …
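The contrastive evaluation the paper describes (1,200 triples, each pairing an ambiguous source sentence with a commonsense-plausible and a commonsense-implausible translation, reported as reasoning accuracy and reasoning consistency) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' released code: the `Triple` layout, the `score` callback (in practice an NMT model's or a pretrained LM's length-normalized log-probability, in the spirit of Wang et al., 2019), and the pairing used for consistency are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Triple:
    source: str       # ambiguous (Chinese) source sentence
    correct: str      # translation consistent with common sense
    contrastive: str  # translation that violates common sense

# score(source, candidate) -> float; higher means the model prefers the candidate
Score = Callable[[str, str], float]

def prefers_correct(t: Triple, score: Score) -> bool:
    """The model 'reasons correctly' on a triple if it assigns the
    commonsense-plausible translation the higher score."""
    return score(t.source, t.correct) > score(t.source, t.contrastive)

def reasoning_accuracy(triples: List[Triple], score: Score) -> float:
    """Fraction of triples on which the plausible translation wins."""
    return sum(prefers_correct(t, score) for t in triples) / len(triples)

def reasoning_consistency(pairs: List[Tuple[Triple, Triple]], score: Score) -> float:
    """One plausible reading of consistency: for a sentence and its
    contrastive counterpart (two triples probing the same ambiguity),
    the model must pick the plausible side of both."""
    both = sum(prefers_correct(a, score) and prefers_correct(b, score)
               for a, b in pairs)
    return both / len(pairs)
```

Any system that can score a (source, candidate translation) pair, from an LSTM NMT model to GPT-2 scoring the target side alone, can be plugged in as `score` without changing the metrics.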
Recommended publications
  • Commonsense Reasoning for Natural Language Processing
    Introductory Tutorial: Commonsense Reasoning for Natural Language Processing. Maarten Sap (1), Vered Shwartz (1,2), Antoine Bosselut (1,2), Yejin Choi (1,2), Dan Roth (3). (1) Paul G. Allen School of Computer Science & Engineering, Seattle, WA, USA; (2) Allen Institute for Artificial Intelligence, Seattle, WA, USA; (3) Department of Computer and Information Science, University of Pennsylvania. {msap, vereds, antoineb, yejin}@cs.washington.edu, [email protected]

    1 Introduction. Commonsense knowledge, such as knowing that "bumping into people annoys them" or "rain makes the road slippery", helps humans navigate everyday situations seamlessly (Apperly, 2010). Yet, endowing machines with such human-like commonsense reasoning capabilities has remained an elusive goal of artificial intelligence research for decades (Gunning, 2018). Commonsense knowledge and reasoning have received renewed attention from the natural language processing (NLP) community in recent years, yielding multiple exploratory research directions into automated commonsense understanding … settings, such as social interactions (Sap et al., 2019b; Rashkin et al., 2018a) and physical situations (Zellers et al., 2019; Talmor et al., 2019). We hope that in the future, machines develop the kind of intelligence required to, for example, properly assist humans in everyday situations (e.g., a chatbot that anticipates the needs of an elderly person; Pollack, 2005). Current methods, however, are still not powerful or robust enough to be deployed in open-domain production settings, despite the clear improvements provided by large-scale pretrained language models. This shortcoming is partially due to inadequacy in acquiring, understanding and reasoning about commonsense knowledge, topics which remain understudied …
  • Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence (Review Article)
    review articles. DOI:10.1145/2701413. AI has seen great advances of many kinds recently, but there is one critical area where progress has been extremely slow: ordinary commonsense. By Ernest Davis and Gary Marcus.

    Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence

    Who is taller, Prince William or his baby son Prince George? Can you make a salad out of a polyester shirt? If you stick a pin into a carrot, does it make a hole in the carrot or in the pin? These types of questions may seem silly, but many intelligent tasks, such as understanding texts, computer vision, planning, and scientific reasoning, require the same kinds of real-world knowledge and reasoning abilities. For instance, if you see a six-foot-tall person holding a two-foot-tall person in his arms, and you are told they are father and son, you do not have to ask which is which. If you need to make a salad for dinner and are out of lettuce, you do not waste time considering improvising by taking a shirt out of the closet and cutting …

    Key insights:
    • To achieve human-level performance in domains such as natural language processing, vision, and robotics, basic knowledge of the commonsense world (time, space, physical interactions, people, and so on) will be necessary.
    • Although a few forms of commonsense reasoning, such as taxonomic reasoning and temporal reasoning, are well understood, progress has been slow.
    • Extant techniques for implementing commonsense include logical analysis, handcrafting large knowledge bases, Web mining, and crowdsourcing. Each of these is valuable, but none by itself is a full solution.
    • Intelligent machines need not replicate human cognition directly, but a better understanding of human commonsense might be a good place to start.
  • Futures of Artificial Intelligence through Technology Readiness Levels
    Futures of Artificial Intelligence through Technology Readiness Levels. Fernando Martínez-Plumed (1,2), Emilia Gómez (1), José Hernández-Orallo (2). Abstract: Artificial Intelligence (AI) offers the potential to transform our lives in radical ways. However, the main unanswered questions about this foreseen transformation are its depth, breadth and timelines. To answer them, not only do we lack the tools to determine what achievements will be attained in the near future, but we do not even know what various technologies in present-day AI are capable of. Many so-called breakthroughs in AI are associated with highly-cited research papers or good performance in some particular benchmarks. However, research breakthroughs do not directly translate into a technology that is ready to use in real-world environments. In this paper, we present a novel exemplar-based methodology to categorise and assess several AI technologies, by mapping them onto Technology Readiness Levels (TRL) (representing their depth in maturity and availability). We first interpret the nine TRLs in the context of AI, and identify several categories in AI to which they can be assigned. We then introduce a generality dimension, which represents increasing layers of breadth of the technology. These two dimensions lead to the new readiness-vs-generality charts, which show that higher TRLs are achievable for low-generality technologies, focusing on narrow or specific abilities, while high TRLs are still out of reach for more general capabilities. We include numerous examples of AI technologies in a variety of fields, and show their readiness-vs-generality charts, serving as exemplars. Finally, we show how the timelines of several AI technology exemplars at different generality layers can help forecast some short-term and mid-term trends for AI.
  • Commonsense Knowledge in Wikidata
    Commonsense Knowledge in Wikidata. Filip Ilievski (1), Pedro Szekely (1), and Daniel Schwabe (2). (1) Information Sciences Institute, University of Southern California; (2) Dept. of Informatics, Pontifícia Universidade Católica do Rio de Janeiro.

    Abstract. Wikidata and Wikipedia have been proven useful for reasoning in natural language applications, like question answering or entity linking. Yet, no existing work has studied the potential of Wikidata for commonsense reasoning. This paper investigates whether Wikidata contains commonsense knowledge which is complementary to existing commonsense sources. Starting from a definition of common sense, we devise three guiding principles, and apply them to generate a commonsense subgraph of Wikidata (Wikidata-CS). Within our approach, we map the relations of Wikidata to ConceptNet, which we also leverage to integrate Wikidata-CS into an existing consolidated commonsense graph. Our experiments reveal that: 1) albeit Wikidata-CS represents a small portion of Wikidata, it is an indicator that Wikidata contains relevant commonsense knowledge, which can be mapped to 15 ConceptNet relations; 2) the overlap between Wikidata-CS and other commonsense sources is low, motivating the value of knowledge integration; 3) Wikidata-CS has been evolving over time at a slightly slower rate compared to the overall Wikidata, indicating a possible lack of focus on commonsense knowledge. Based on these findings, we propose three recommended actions to improve the coverage and quality of Wikidata-CS further. Keywords: Commonsense Knowledge · Wikidata · Knowledge Graphs

    1 Introduction. Common sense is "the basic ability to perceive, understand, and judge things that are shared by nearly all people and can be reasonably expected of nearly all people without need for debate" [10].
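A minimal sketch of the relation-mapping step the Wikidata-CS abstract describes: Wikidata statements are kept only if their property maps to a ConceptNet relation, and are relabelled with that relation. The property IDs and relation names below are an invented fragment for illustration; the actual Wikidata-CS mapping, covering 15 ConceptNet relations, is defined in the paper.

```python
# Hypothetical fragment of a Wikidata-property -> ConceptNet-relation mapping.
WD_TO_CN = {
    "P279": "IsA",      # subclass of
    "P527": "HasA",     # has part
    "P366": "UsedFor",  # has use
}

def commonsense_subgraph(statements, mapping=WD_TO_CN):
    """Keep only (subject, property, object) statements whose property maps
    to a ConceptNet relation, relabelling them with that relation."""
    return [(s, mapping[p], o) for (s, p, o) in statements if p in mapping]
```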
  • Natural Language Understanding with Commonsense Reasoning
    E.T.S. de Ingenieros Informáticos, Universidad Politécnica de Madrid. Master Thesis, MSc in Artificial Intelligence (MUIA). Natural Language Understanding with Commonsense Reasoning: Application to the Winograd Schema Challenge. Author: Alfonso López Torres. Supervisor: Martín Molina González. June, 2016.

    This is for my children Carla and Alonso, and my wife Véronique. Thanks for their unconditional support and patience (also for the coming adventures…)

    Acknowledgments: I would like to thank Martín for his advice and help. I was very lucky to be your student.

    Abstract. In 1950, Alan Turing proposed a test to evaluate the degree of human-like intelligence a machine could exhibit. The main idea was really simple: hold an open conversation between an evaluator and the machine. If the evaluator is unable to discern whether the examinee is a person or a machine, the test is considered passed. Since then, over the last 60 years, numerous proposals have exposed certain weaknesses of the test. Perhaps the most important is its exclusive focus on human intelligence, leaving aside other kinds of intelligence. The test largely forces the machine to display anthropomorphic, imitative behavior with the sole aim of passing. To overcome these and other weak points, in 2011 Hector Levesque proposed a new challenge, "The Winograd Schema Challenge": a simple question-answering test over a sentence describing an everyday situation.
  • An Approach to Evaluate AI Commonsense Reasoning Systems∗
    An Approach to Evaluate AI Commonsense Reasoning Systems∗. Stellan Ohlsson, Robert H. Sloan, György Turán, Daniel Uber, and Aaron Urasky. University of Illinois at Chicago; University of Szeged.

    Abstract. We propose and give a preliminary test of a new metric for the quality of the commonsense knowledge and reasoning of large AI databases: using the same measurement as is used for a four-year-old, namely, an IQ test for young children. We report on results obtained using test questions we wrote in the spirit of the questions of the Wechsler Preschool and Primary Scale of Intelligence, Third Edition (WPPSI-III), on the ConceptNet system; the results were, on the whole, quite strong.

    1 Introduction. Computer systems have long outperformed humans on nu… "…every schoolchild and the methods for making obvious inferences from this knowledge are called common sense" (Davis 1990). Without a generally accepted performance standard, it is impossible to evaluate claims and document progress. We propose that tests of intelligence developed by psychometricians can serve as one type of assessment of commonsense reasoning systems. Psychologists face the same problem as artificial intelligence researchers: how to measure something as broad and varied as common sense? Their solution is to generate a multitude of diverse but simple tasks, called test items, and then collect empirical data on the association between performance on the proposed test items and some criterion variable. This statistical solution is encapsulated in what are generally known as intelligence tests.
  • Commonsense Reasoning About Task Instructions
    Commonsense Reasoning about Task Instructions. Rakesh Gupta (Honda Research Institute USA, Inc., 800 California Street, Suite 300, Mountain View, CA 94041) and Ken Hennacy (University of Maryland, Institute for Advanced Computer Studies, College Park, MD 20742).

    Abstract. We have developed an approach for commonsense reasoning using knowledge collected from volunteers over the web. This knowledge is collected in natural language, and includes information such as task instructions, locations of objects in homes, causes and effects, and uses of objects in the home. This knowledge, stored in tables in a relational database, is filtered using statistical methods and rule-based inference. Missing details within natural-language task instructions are reasoned about to determine the steps to be executed in the task. These missing details are handled by meta-rules which work across the knowledge categories and interact with the appropriate tables to extract the right information. Our reasoning approach is illustrated for common household tasks.

    …to handle the various difficulties associated with change and scalability (Minker 1991). As an alternative to formal methods that guarantee solutions, implementational approaches place more emphasis on adaptation and knowledge acquisition. Studies involving human interactions and natural language, for example, have shown promise in developing agents that can learn and reason about human-centric information (W. L. Johnson 2004). According to Rasmussen et al. (1983), human behavior in familiar environments doesn't require extensive decision making: it is controlled by a set of rules or steps which have proven successful previously. The sequence of steps may …
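The pipeline sketched in the abstract above (volunteer-contributed knowledge stored in relational tables, then filtered statistically) might look roughly like the following. The `object_location` schema and the majority-vote filter are invented for illustration; they are not the system's actual tables or rules.

```python
import sqlite3

# Toy knowledge base of volunteer-contributed object locations, with vote counts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE object_location (object TEXT, location TEXT, votes INT)")
conn.executemany(
    "INSERT INTO object_location VALUES (?, ?, ?)",
    [("cup", "kitchen", 17), ("cup", "bathroom", 2), ("towel", "bathroom", 12)],
)

def likely_location(obj: str):
    """Statistical filtering: trust the answer most volunteers agreed on."""
    row = conn.execute(
        "SELECT location FROM object_location WHERE object = ? "
        "ORDER BY votes DESC LIMIT 1",
        (obj,),
    ).fetchone()
    return row[0] if row else None
```

A meta-rule handling an underspecified instruction like "put the cup away" could then call `likely_location("cup")` to fill in the unstated destination.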
  • Using Conceptnet to Teach Common Sense to an Automated Theorem Prover∗
    Using ConceptNet to Teach Common Sense to an Automated Theorem Prover∗. Claudia Schon (Institute for Web Science and Technologies, University of Koblenz-Landau, 56070 Koblenz, Germany), Sophie Siebert and Frieder Stolzenburg (Harz University of Applied Sciences, Automation and Computer Sciences Department, 38855 Wernigerode, Germany).

    The CoRg system is a system to solve commonsense reasoning problems. The core of the CoRg system is the automated theorem prover Hyper, which is fed with large amounts of background knowledge. This background knowledge plays a crucial role in solving commonsense reasoning problems. In this paper we present different ways to use knowledge graphs as background knowledge and discuss challenges that arise.

    1 Introduction. In recent years, numerous benchmarks for commonsense reasoning have been presented which cover different areas: the Choice of Plausible Alternatives Challenge (COPA) [17] requires causal reasoning in everyday situations, the Winograd Schema Challenge [8] addresses difficult cases of pronoun disambiguation, the TriangleCOPA Challenge [9] focuses on human relationships and emotions, and the Story Cloze Test with the ROCStories Corpora [11] focuses on the ability to determine a plausible ending for a given short story, to name just a few. In our system, we focus on the COPA challenge, where each problem consists of a problem description (the premise), a question, and two answer candidates (called alternatives). See Fig. 1 for an example. Most approaches tackling these problems are based on machine learning or exploit statistical properties of the natural language input (see e.g. [14, 16]) and are therefore unable to provide explanations for the decisions made.
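Feeding a knowledge graph to a first-order prover such as Hyper requires rendering its edges as logical facts. The Prolog-style ground-fact syntax below is a simplification for illustration, not CoRg's actual encoding.

```python
def edges_to_facts(edges):
    """Render (start, relation, end) knowledge-graph edges as ground
    first-order facts, one per line, in a Prolog-like syntax."""
    def atom(s: str) -> str:
        # Normalize multi-word concepts: "ice cream" -> ice_cream
        return s.lower().replace(" ", "_")
    return [f"{atom(r)}({atom(s)}, {atom(e)})." for (s, r, e) in edges]
```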
  • Machine Common Sense Concept Paper David Gunning DARPA/I2O October 14, 2018
    Machine Common Sense Concept Paper David Gunning DARPA/I2O October 14, 2018 Introduction This paper summarizes some of the technical background, research ideas, and possible development strategies for achieving machine common sense. This concept paper is not a solicitation and is provided for informational purposes only. The concepts are organized and described in terms of a modified set of Heilmeier Catechism questions. What are you trying to do? Machine common sense has long been a critical—but missing—component of Artificial Intelligence (AI). Recent advances in machine learning have resulted in new AI capabilities, but in all of these applications, machine reasoning is narrow and highly specialized. Developers must carefully train or program systems for every situation. General commonsense reasoning remains elusive. Wikipedia defines common sense as, the basic ability to perceive, understand, and judge things that are shared by ("common to") nearly all people and can reasonably be expected of nearly all people without need for debate. It is common sense that helps us quickly answer the question, “can an elephant fit through the doorway?” or understand the statement, “I saw the Grand Canyon flying to New York.” The vast majority of common sense is typically not expressed by humans because there is no need to state the obvious. We are usually not conscious of the vast sea of commonsense assumptions that underlie every statement and every action. This unstated background knowledge includes: a general understanding of how the physical world works (i.e., intuitive physics); a basic understanding of human motives and behaviors (i.e., intuitive psychology); and knowledge of the common facts that an average adult possesses.
    [Show full text]
  • Knowledge Representation and Commonsense Reasoning: Reviews of Four Books
    Artificial Intelligence 170 (2006) 1239–1250. www.elsevier.com/locate/artint. Book reviews.

    Ronald J. Brachman, Hector J. Levesque, Knowledge Representation and Reasoning, Morgan Kaufmann, ISBN 1558609326, 2004, 381 pages.
    Raymond Reiter, Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems, MIT Press, ISBN 0262182181, 2001, 448 pages.
    Erik T. Mueller, Commonsense Reasoning, Morgan Kaufmann, ISBN 0123693888, 2006, 432 pages.
    Chitta Baral, Knowledge Representation, Reasoning and Declarative Problem Solving, Cambridge University Press, ISBN 0521818028, 2003, 544 pages.

    Knowledge representation and commonsense reasoning: reviews of four books. Leora Morgenstern, IBM T.J. Watson Research Center, Hawthorne, NY 10532, USA.

    1. Characterizing knowledge representation and commonsense reasoning. Knowledge representation and the automation of commonsense reasoning have been bound together since the publication of John McCarthy's "Programs With Common Sense" [15]. McCarthy argued that common sense in a computer could best be achieved using a declarative representation of commonsense knowledge and some means for deducing the consequences of that knowledge. That is, in order to get a computer to behave in a way that exhibits common sense, one must first provide it with a fair amount of basic facts about the world. Moreover, these facts should be represented in a declarative language like first-order logic. This description of the formal commonsense reasoning (CSR) enterprise seems quite similar to our understanding of the knowledge representation (KR) enterprise: the development of formal languages and structures for representing knowledge, along with methods for reasoning with that knowledge.
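McCarthy's recipe, declaratively stated facts plus a mechanism for deducing their consequences, can be illustrated with a toy forward chainer over ground Horn rules. This is a deliberate simplification of the first-order logic the reviewed books work with: atoms here are plain strings matched exactly, with no variables or unification.

```python
def forward_chain(facts, rules):
    """Repeatedly fire Horn rules (premises -> conclusion) until no new
    fact can be derived; returns the deductive closure of the facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known
```

Real KR systems replace the string-matched ground atoms with unification and far richer logics, but the derive-until-fixpoint loop is the same basic idea.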
  • An Architecture of Diversity for Commonsense Reasoning
    An Architecture of Diversity for Commonsense Reasoning (Technical Forum). Although computers excel at certain bounded tasks that are difficult for humans, such as solving integrals, they have difficulty performing commonsense tasks that are easy for humans, such as understanding stories. In this Technical Forum contribution, we discuss commonsense reasoning and what makes it difficult for computers.

    …reasoning, logic, neural nets, genetic algorithms, and heuristic search. We proposed to organize a meeting at which we would bring together many of the major established researchers in the area of commonsense knowledge and reasoning. Riecken organized a preliminary meeting at the IBM Thomas J. Watson Research Center in March 2002, at which many IBM researchers were invited to discuss and react to this general subject as well as to present their own ideas. Afterwards, the specific proposal was discussed in more detail by specialists in commonsense knowledge and reasoning at a meeting held on St. Thomas, Virgin Islands, in April 2002, hosted by Jeffrey Epstein. This Technical Forum contribution focuses on the preliminary meeting, but also contains some material presented at the April meeting, including some material from Minsky's forthcoming book The Emotion Machine.

    At the IBM meeting, a broad consensus was reached on three main points. First, there was agreement that the community should strive toward solving a nontrivial problem that would require a level of knowledge, and a capability of reasoning with that knowledge, beyond what is demonstrated by current systems. The problem put forward was that of story understanding. An important advantage of the story understanding task is that standardized tests are available to evaluate students on their reading comprehension …
  • PIQA: Reasoning About Physical Commonsense in Natural Language
    The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20). PIQA: Reasoning about Physical Commonsense in Natural Language. Yonatan Bisk (1,2,3,4), Rowan Zellers (1,4), Ronan Le Bras (1), Jianfeng Gao (2), Yejin Choi (1,4). (1) Allen Institute for Artificial Intelligence; (2) Microsoft Research AI; (3) Carnegie Mellon University; (4) Paul G. Allen School for Computer Science and Engineering, University of Washington. http://yonatanbisk.com/piqa

    Abstract. "To apply eyeshadow without a brush, should I use a cotton swab or a toothpick?" Questions requiring this kind of physical commonsense pose a challenge to today's natural language understanding systems. While recent pretrained models (such as BERT) have made progress on question answering over more abstract domains, such as news articles and encyclopedia entries, where text is plentiful, in more physical domains text is inherently limited due to reporting bias. Can AI systems learn to reliably answer physical commonsense questions without experiencing the physical world? In this paper, we introduce the task of physical commonsense reasoning and a corresponding benchmark dataset, Physical Interaction: Question Answering, or PIQA. Though humans find the dataset easy (95% accuracy), large pretrained models struggle (∼75%). We provide analysis about the dimensions of knowledge that existing models lack, which offers significant opportunities for future research.

    [Figure: a PIQA example. Goal: "To separate egg whites from the yolk using a water bottle, you should…" (a) "Squeeze the water bottle and press it against the yolk. Release, which creates suction and lifts the yolk." (b) "Place the water bottle and press it against the yolk. Keep pushing, which creates suction and lifts the yolk." The correct choice is (a).]