Understanding Natural Language with Commonsense Knowledge Representation, Reasoning, and Simulation

Antoine Bosselut

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy

University of Washington
2020

Reading Committee:
Yejin Choi, Chair
Gina-Anne Levow
Noah Smith

Program Authorized to Offer Degree: Computer Science and Engineering

© Copyright 2020 Antoine Bosselut

Abstract

Chair of the Supervisory Committee: Associate Professor Yejin Choi, Computer Science and Engineering

For machines to understand language, they must intuitively grasp the commonsense knowledge that underlies the situations they encounter in text. A simple statement such as "it is raining" immediately implies a bank of shared context for any human reader: they should bring an umbrella, roads will be slippery, increased traffic may make them late, rain boots are preferable to sandals, and many more. Language understanding systems must be able to robustly use this commonsense knowledge to make decisions or take actions. Observations of the world are always richer and more detailed than the information that is explicitly transmitted through language, and machines must be able to fill in the remaining details with commonsense inferences.

Recent advances in natural language processing have made considerable progress in identifying the commonsense implications of situations described in text. These methods generally involve training high-parameter language models on large language corpora and have shown marked improvement on a variety of benchmark end tasks in natural language understanding. However, these systems are brittle – often failing when presented with out-of-distribution inputs – and uninterpretable – incapable of providing insights into why these different inputs cause shifted behavior. Meanwhile, traditional approaches to natural language understanding, which focus on linking language to background knowledge from large ontologies, remain limited by their inability to scale to the situational diversity expressed through language.

In this dissertation, we argue that for natural language understanding agents to function in less controlled test environments, they must learn to reason more explicitly about the commonsense knowledge underlying textual situations. In furtherance of these goals, we draw from both traditional symbolic and modern neural approaches to natural language understanding. We present four studies on learning commonsense representations from language, and on integrating and reasoning about these representations in NLP systems to achieve more robust textual understanding.

Acknowledgements

My completion of this degree would not have been possible without a supportive network of mentors, collaborators, colleagues, friends, and family. First and foremost, I’d like to thank my advisor, Yejin Choi, for her counsel, support, and feedback over the last six years. Her mentorship and advising were crucial for my growth as a researcher, and her guidance and flexibility allowed me to explore challenging problems in commonsense representation and reasoning for NLP and to develop applicable solutions for them.
I’m also incredibly grateful to Asli Çelikyilmaz, a friend and colleague, who advised me throughout my stay as a student researcher at Microsoft, and who motivated my passion for NLP topics in language generation. I’d also like to thank Peter Clark and Oren Etzioni, who invited me to form fruitful collaborations at the Allen Institute for AI, and whose interest in my work motivated me to continue exploring the ideas in commonsense representation that became the foundation of my thesis. I am also grateful to my committee members Noah Smith, Gina-Anne Levow, and Kevin Knight, who have been mentors I could turn to for counsel at various stages of my research career.

I’d also like to thank the many great collaborators, co-authors, and colleagues I’ve had the privilege of working with during my PhD. None of my work would have been as exciting or well-rounded without insights from Dieter Fox, Hanna Hajishirzi, Hannah Rashkin, Maarten Sap, Lianhui Qin, Saadia Gabriel, Aida Amini, Andrew Hoang, Ari Holtzman, Jan Buys, Max Forbes, Peter West, Omer Levy, Corin Ennis, Elizabeth Clark, David Golub, Bhavana Dalvi, Niket Tandon, Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Vered Shwartz, Kyle Lo, Scott Yih, Jena Hwang, Keisuke Sakaguchi, Xinya Du, Jianfeng Gao, Xiaodong He, Po-sen Huang, Urvashi Khandelwal, Marjan Ghazvininejad, Thomas Wolf, Sasha Rush, and Jianfu Chen. My research agenda was broadened and enhanced by the discussions held with them and the collaborative projects pursued with them.

I’ve also been lucky to be part of a large NLP community at the University of Washington. Throughout my PhD, I have been able to interact with and learn from the many great researchers who make up UW NLP, including, but not limited to: Luke Zettlemoyer, Yonatan Bisk, Eunsol Choi, Chloe Kiddon, Ioannis Konstas, Dallas Card, Roy Schwartz, Nicholas Fitzgerald, Sam Thomson, Jesse Thomason, Gabriel Stanovsky, Swabha Swayamdipta, Mark Yatskar, Rowan Zellers, Mike Lewis, Kenton Lee, Luheng He, and Xi Victoria Lin.

One of the most rewarding parts of pursuing a PhD is being able to make friends and form strong bonds with fellow students on the same journey. I’d specifically like to thank Maaz Ahmad, Terra Blevins, Jiechen Chen, Srini Iyer, Kiron Lebeck, Niel Lebeck, Jacob Schreiber, and Dave Wadden for making the successes more meaningful, the failures less embittering, and for the great times between asynchronous conference deadlines.

Finally, I want to thank my family: Anne, François, Marion, Remy, Antoinette, Mengsha, Chloe, and Theo. It would take too long to list all the ways you’ve supported and helped me throughout this endeavor, but I couldn’t have done it without you.

Dedication

To my parents, Anne and Remy

Contents

Acknowledgements  v
List of Figures  xv
List of Tables  xvii

1  Introduction  1
   1.1  Thesis Outline  3
   1.2  Publications  4

2  Background  7
   2.1  Basics  7
        2.1.1  Logistic Regression  7
        2.1.2  Feedforward Neural Networks  8
        2.1.3  Activation Functions  9
        2.1.4  Layer Normalization  10
   2.2  Deep Learning for Natural Language Processing  11
        2.2.1  Language Models  11
        2.2.2  Word Embeddings  12
   2.3  Recurrent Neural Networks  13
        2.3.1  Elman Networks  13
        2.3.2  Long Short-term Memory (LSTM)  14
        2.3.3  Gated Recurrent Units (GRU)  16
   2.4  Transformers  17
        2.4.1  Transformer Components  17
        2.4.2  Left-to-Right Transformer Language Models  20
        2.4.3  Bidirectional Transformer Language Models  20

3  Commonsense Transformers as Neural Representations of Knowledge Graphs  23
   3.1  Introduction  24
   3.2  Related Work  25
        3.2.1  Knowledge Base Construction  25
        3.2.2  Knowledge Base Completion  26
        3.2.3  Commonsense Knowledge Base Completion  26
        3.2.4  Transformers and Pre-training  27
   3.3  Learning to Generate Commonsense Knowledge Descriptions  27
        3.3.1  Task  27
        3.3.2  Transformer Language Model  27
   3.4  Training COMET  29
   3.5  ATOMIC Study  31
        3.5.1  Setup  31
        3.5.2  Results  33
   3.6  ConceptNet Study  42
        3.6.1  Setup  42
        3.6.2  Results  43
   3.7  Summary  47

4  Commonsense Reasoning with Dynamic Knowledge Graph Construction  49
   4.1  Introduction  50
   4.2  Related Work  51
        4.2.1  Question Answering with Knowledge Graphs  51
        4.2.2  Multi-hop Reading Comprehension  52
        4.2.3  Automatic Commonsense Knowledge Graph Construction  52
   4.3  Dynamic Knowledge Graph Construction for Question Answering  53
        4.3.1  Constructing a Probabilistic Commonsense Knowledge Graph  53
   4.4  Reasoning over Probabilistic Knowledge Graphs  55
        4.4.1  Computing Answer Scores  55
        4.4.2  Inference  56
   4.5  Experimental Setup  58
        4.5.1  Datasets and Processing  58
        4.5.2  Experimental Settings  59
   4.6  SOCIALIQA Study  60
   4.7  STORYCOMMONSENSE Study  65
   4.8  Summary
