Do Fine-Tuned Commonsense Language Models Really Generalize?

Mayank Kejriwal, Ke Shen
Information Sciences Institute, USC Viterbi School of Engineering
4676 Admiralty Way, Suite 1001, Marina Del Rey, California 90292

arXiv:2011.09159v1 [cs.CL] 18 Nov 2020

Abstract

Recently, transformer-based methods such as RoBERTa and GPT-3 have led to significant experimental advances in natural language processing tasks such as question answering and commonsense reasoning. The latter is typically evaluated through multiple benchmarks framed as multiple-choice instances of the former. According to influential leaderboards hosted by the Allen Institute (evaluating state-of-the-art performance on commonsense reasoning benchmarks), models based on such transformer methods are approaching human-like performance and have average accuracy well over 80% on many benchmarks. Since these are commonsense benchmarks, a model that generalizes on commonsense reasoning should not experience much performance loss across multiple commonsense benchmarks. In this paper, we study the generalization issue in detail by designing and conducting a rigorous scientific study. Using five common benchmarks, multiple controls and statistical analysis, we find clear evidence that fine-tuned commonsense language models still do not generalize well, even with moderate changes to the experimental setup, and may, in fact, be susceptible to dataset bias. We also perform selective studies, including qualitative and consistency analyses, to gain deeper insight into the problem.

Introduction

Commonsense reasoning has become a resurgent area of research in both the NLP and broader AI communities¹ (Davis 2014), (Zang et al. 2013), (Storks, Gao, and Chai 2019), despite having been introduced as an early AI challenge more than 50 years ago (in the context of machine translation) (Bar-Hillel 1960). Traditionally, it was believed that the problem could only be solved through a combination of techniques, including Web mining, logical reasoning, handcrafted knowledge bases and crowdsourcing (Davis and Marcus 2015), (Liu and Singh 2004), (Moore 1982). More recently, the advent of powerful 'transformer' neural networks, especially in NLP (Devlin et al. 2018), (Liu et al. 2019), suggests that the time is right to build commonsense reasoners that generalize to a wide variety of situations, including those involving social and physical reasoning (Sap et al. 2019b), (Bisk et al. 2020). There are several related reasons why commonsense reasoning is such an important topic in AI. Commonsense reasoning is an innately human ability that machines have (thus far) not proven adept at 'conquering', unlike other task-specific domains such as face recognition (Liu et al. 2017). Perhaps for that reason, it has always presented an enticing challenge to many AI researchers throughout the decades (Lenat, Prakash, and Shepherd 1985), (Marcus 1998), (Singh 2002), (Chklovski 2003). There is also the widely held belief that, for a 'general AI' to truly emerge, commonsense reasoning is one problem (among others) that will need to be solved in a sufficiently robust manner (Baroni et al. 2017). A more functional reason for increased interest in commonsense reasoning is the rise of chatbots and other such 'conversational AI' services (e.g., Siri and Alexa) that represent an important area of innovation in industry (Ram et al. 2018), (Gao, Galley, and Li 2018), (Basu 2019), (Young et al. 2017). Recently, the US Department of Defense also launched a machine common sense (MCS) program in which a diverse set of researchers and organizations, including the Allen Institute for Artificial Intelligence, is involved (Sap et al. 2019a).

¹Two other example domains include computer vision (Zellers et al. 2019a) and social networks (Dinakar et al. 2012).

Despite the success of these models, there is some evidence (not necessarily all quantitative) to suggest that the models are still superficial, i.e., they do not have the same commonsense abilities as humans, despite what the performance numbers suggest. (Davis and Marcus 2015) suggested in a seminal review article that, for truly human-level performance, 'knowledge of the commonsense world – time, space, physical interactions, people, and so on – will be necessary.' While we do not deny the theoretical possibility that a language representation model such as BERT, RoBERTa or GPT-3 may have learned these different aspects of the real world purely by 'reading' large corpora of natural language (Devlin et al. 2018), (Liu et al. 2019), (Brown et al. 2020), we do claim that such possibilities can (and must) be tested through rigorous evaluation. Unfortunately, as we cover in the Related Work, there has been little to no work by way of conducting such a systematic and focused analysis (with the central goal of evaluating the generalization of a system on commonsense reasoning) using a publicly available and replicable system, though there is plenty of precedent for this type of study, as discussed in the Related Work.

In this paper, we attempt to address this gap by carefully designing and conducting an empirical study with the specific intent of answering the question of whether fine-tuned commonsense language models generalize in robust ways. Our goal is not to attack either a model or a particular benchmark (or a set of benchmarks), but to present clear (and cautionary) evidence that the current set of evaluations (and evaluation practices) and reported results need to be considered with more skepticism by the community. Considering the pace at which research on commonsense reasoning continues, we posit that this is a timely study and could serve as a methodology for future such studies assessing the generalization of commonsense AI.
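To make the kind of generalization test described above concrete, the sketch below illustrates one way a cross-benchmark evaluation can be run; it is an illustrative sketch, not the authors' released code. A transformer-based multiple-choice model that has been fine-tuned on one commonsense benchmark is scored, without any further tuning, on instances drawn from a different benchmark. The checkpoint name and the flattened instance format (a dict with 'prompt', 'choices' and 'label' keys) are assumptions made for the example.

```python
# Illustrative sketch of a cross-benchmark evaluation (not the authors' code).
# Assumes each benchmark has been converted to a common multiple-choice format:
# {"prompt": str, "choices": [str, ...], "label": int}.
import torch
from transformers import RobertaForMultipleChoice, RobertaTokenizer

# Placeholder checkpoint: in practice this would be a RoBERTa model already
# fine-tuned on one commonsense benchmark.
CHECKPOINT = "roberta-large"

tokenizer = RobertaTokenizer.from_pretrained(CHECKPOINT)
model = RobertaForMultipleChoice.from_pretrained(CHECKPOINT)
model.eval()


def predict(prompt, choices):
    """Return the index of the highest-scoring answer choice for one instance."""
    # Encode every (prompt, choice) pair; the model scores the pairs jointly.
    enc = tokenizer([prompt] * len(choices), choices,
                    return_tensors="pt", padding=True, truncation=True)
    batch = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, n_choices, seq_len)
    with torch.no_grad():
        logits = model(**batch).logits                   # (1, n_choices)
    return int(logits.argmax(dim=-1))


def accuracy(instances):
    """Fraction of instances whose top-scoring choice matches the gold label."""
    correct = sum(predict(x["prompt"], x["choices"]) == x["label"]
                  for x in instances)
    return correct / len(instances)

# The generalization question then concerns the gap between in-domain accuracy
# (dev set of the benchmark used for fine-tuning) and out-of-domain accuracy
# (dev set of a benchmark the model never saw during fine-tuning):
#   gap = accuracy(in_domain_dev) - accuracy(out_of_domain_dev)
```

If the model's commonsense ability were robust rather than benchmark-specific, this gap would be expected to remain small, which is precisely the expectation the study puts to the test.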
Related Work

As noted in the Introduction, commonsense reasoning has recently experienced a resurgence in the AI research community. Central references that attest to this resurgence include (Davis 2017), (Davis 2014), (Zang et al. 2013), (Tandon, Varde, and de Melo 2018), (Sap et al. 2020), (Zellers et al. 2019a). We also noted that commonsense reasoning has been an ambitious agenda in the past. It is not feasible to cite all relevant work herein; instead, we refer the reader both to the review article by (Davis and Marcus 2015) and to more recent surveys on commonsense reasoning tasks and benchmarks (Storks, Gao, and Chai 2019).

Figure 1: Question-answer instances from five commonsense benchmark datasets used for the evaluations in this paper. The question-like 'prompt' is highlighted in yellow, and the correct answer in blue.

Much progress has been made on specific kinds of commonsense reasoning, especially in reasoning about time and interval relations (Ladkin 1986), (Pinto and Reiter 1995), reasoning about actions and change (Narayanan 2000), and the sign calculus (Davis and Marcus 2015). Semantics have played an important role in some of these successes (Rajagopal et al. 2013), including 'semantic networks' and other structured sources, important ones of which include ConceptNet (Havasi, Speer, and Alonso 2007), WordNet (Miller 1995) and Cyc (Lenat 1995). These resources have been frequently applied in multiple reasoning systems (Botschen, Sorokin, and Gurevych 2018), (Angeli and Manning 2014), (Lin, Sun, and Han 2017). In contrast with WordNet and ConceptNet, Cyc focuses on designing a universal schema (a higher-order logic) to represent commonsense assertions, which also allows reasoning systems (Panton et al. 2006), (Ramachandran, Reagan, and Goolsbey 2005) to conduct richer logical inference.

In the last decade, in particular, as more accurate but also more 'non-interpretable' (or unexplainable) models like neural networks have become more prevalent, a relevant line of research has developed in 'adversarially attacking' these models to understand their weaknesses in a variety of domains (Akhtar and Mian 2018), (Zügner, Akbarnejad, and Günnemann 2018), (Ilyas et al. 2018). Other problems that require more precise inputs and prompts include bias in the data and also in the model (Kim et al. 2019), (Lu et al. 2018). This line of work is valuable precedent for our own work, and there has been some early work already on conducting such robustness tests on transformer-based language models [...] the model in any way. Our results show, in fact, that sophisticated adversarial modifications are not necessary for concluding that generalization is a concern for transformer-based QA models.

Theoretical work on commonsense reasoning along the lines of cognitive science and computational commonsense paradigms should also be noted (Hobbs et al. 1987), (Hobbs and Kreinovich 2001), (Gordon and Hobbs 2017). We note this line of work because it could potentially be used for designing better evaluations, as well as for diagnosing why some transformer-based models are not generalizing better, despite (individually) good performance across the board on many benchmark datasets.

Background and Preliminaries

Commonsense Question Answering (QA) Benchmarks

As noted in both the introduction and the related work, commonsense reasoning has emerged as an important and challenging research agenda in the last several years. The usual way to evaluate systems (with the state-of-the-art systems being based, in some significant way, on the transformer-based models described in the next section) purporting to [...]
