
Neural Representation Learning in Linguistic Structured Prediction

Lingpeng Kong
CMU-LTI-17-008

Language Technologies Institute
School of Computer Science
Carnegie Mellon University
5000 Forbes Ave., Pittsburgh, PA 15213
www.lti.cs.cmu.edu

Thesis Committee:
Noah A. Smith (co-Chair), Carnegie Mellon University / University of Washington
Chris Dyer (co-Chair), Carnegie Mellon University / Google DeepMind
Alan W. Black, Carnegie Mellon University
Michael Collins, Columbia University / Google Research

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies

© 2017, Lingpeng Kong

For Y.

Abstract

Advances in neural network architectures and training algorithms have demonstrated the effectiveness of representation learning in natural language processing. This thesis argues for the importance of modeling discrete structure in language, even when learning continuous representations.

We propose that explicit structure representations and learned distributed representations can be efficiently combined for improved performance over (i) traditional approaches to structure and (ii) uninformed neural networks that ignore all but surface sequential structure. We demonstrate, on three distinct problems, how assumptions about structure can be integrated naturally into neural representation learners for NLP problems, without sacrificing computational efficiency.

First, we propose segmental recurrent neural networks (SRNNs), which define, given an input sequence, a joint probability distribution over segmentations of the input and labelings of the segments. We show that, compared to models that do not explicitly represent segments, such as BIO tagging schemes and connectionist temporal classification (CTC), SRNNs obtain substantially higher accuracies on tasks including phone recognition and handwriting recognition.

Second, we propose dynamic recurrent acyclic graphical neural networks (DRAGNN), a modular neural architecture that generalizes the encoder/decoder concept to include explicit linguistic structures. Linguistic structures guide the construction of the neural networks by following the transitions and encoding the (partial) structures built by those transitions explicitly into the hidden layer activations. We show that our framework is significantly more accurate and efficient than sequence-to-sequence with attention for syntactic dependency parsing and yields more accurate multi-task learning for extractive summarization tasks.

Third, we propose to use discrete stochastic attention to model the alignment structures explicitly in the neural sequence-to-sequence translation model. We regularize the posterior distributions of the latent alignment decisions using the posteriors computed from models that make stronger independence assumptions but that have the same latent variables. We show that our posterior regularization scheme leads to substantially improved generalization. Since the posterior regularization objective can in general be expensive to compute, we propose several approximations based on importance sampling and find that they are either as good as or better than the exact objective in terms of held-out generalization.

The techniques proposed in this thesis automatically learn structurally informed representations of the inputs.
Linguistically motivated inductive biases help the neural models learn better representations, and these representations and components can be better integrated with other end-to-end deep learning systems within and beyond NLP.

Acknowledgments

I cannot choose the best. The best chooses me.
— Rabindranath Tagore, Stray Birds

In the view of scientists, Y can represent the entire unknown world. It can be zero or one, the perception of the world, the truths and lies behind this symbol. We are trying to understand the world, to explain why certain things could happen, and to predict what will happen in the future. We learn to think more rationally when facing a phenomenon, to question the essentials of a thing, to describe feelings using statistical methods, and to model the cause and effect of the world. It seems that the pursuit of the Ph.D. is a journey from ignorance to all-knowing. At least that was what I thought when I began mine in the States. This is ridiculous of course, because even today, the world still puzzles me, more than ever before. However, I do not feel depressed. Quite the contrary, I am filled with joy, for the love that I have towards creation and exploration, and more importantly, for the people that I met along the way.

I want to thank my advisor Noah Smith for bringing me to this fantastic world of research and for all the inspiring words both in research and life. I could not have imagined a better advisor. I also want to thank my co-advisor Chris Dyer for all the brilliant ideas and technical suggestions, and of course, for his boundless imagination when telling a story in his own interesting way.

Besides my advisors, I would like to thank the rest of my thesis committee, Alan Black and Michael Collins, for their insightful comments and for being my heroes in speech and NLP.

I am very lucky to work with my fellow ARK/clab members, past and present: Shay Cohen, Dipanjan Das, André Martins, Kevin Gimpel, Tae Yano, Nathan Schneider, Brendan O'Connor, Dani Yogatama, David Bamman, Waleed Ammar, Yanchuan Sim, Fei Liu, Victor Chahuneau, Rohan Ramanath, Mengqiu Wang, Yi Zhu, Bill McDowell, Naomi Saphra, Miguel Ballesteros, Dallas Card, Elizabeth Clark, Jesse Dodge, Jeffrey Flanigan, Yangfeng Ji, Lucy Lin, Nelson Liu, Yijia Liu, Kelvin Luu, Phoebe Mulcaire, Hao Peng, Nikko Rush, Maarten Sap, Roy Schwartz, Swabha Swayamdipta, Chenhao Tan, Sam Thomson, Manaal Faruqui, Kazuya Kawakami, Guillaume Lample, Wang Ling, Austin Matthews, Avneesh Saluja, Yulia Tsvetkov, Kartik Goyal, and Eva Schlinger.

Google Research and Google DeepMind offered me two amazing opportunities to intern in New York and London, where I had the chance to work with the best people in the field — David Weiss, Chris Alberti, Daniel Andor, Ivan Bogatyy, Hao Zhang, Zhuoran Yu, Kuzman Ganchev, Dipanjan Das, Emily Pitler, Michael Collins, Slav Petrov, and Stephen Clark, Chris Dyer, Phil Blunsom, Dani Yogatama, Wang Ling, Yishu Miao, Lei Yu, Angeliki Lazaridou.

I am very lucky to have many friends in many cities — Beijing, Shanghai, Hangzhou, Shenzhen, Pittsburgh, Seattle, New York, Mountain View, London, and Melbourne. I wish I could come to visit you more often. Thanks to my best graduate school friend Ting-Hao (Kenneth) Huang, for always asking me (benign) questions that I do not have answers for.

I want to thank my family members, my parents X.C. and X.K., my sisters L.H. and Y.Z., Y.H., my shiba Hikaru, for standing by me and being the light in my life.
This research was supported in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under contract/grant number W911NF-10-1-0533, by NSF grants IIS-1054319 and IIS-1352440, by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O) under the Low Resource Languages for Emergent Incidents (LORELEI) program issued by DARPA/I2O under Contract No. HR0011-15-C-0114, by grants from Google, and by computational resources provided by Google, the Pittsburgh Supercomputing Center, and Amazon Web Services.

This thesis is for Y, for the world unknown, for the danger unseen, for the story untold, for the person unmet.

There is no better answer than unknown. There is no better experience than future. There is no better arrival than departure.
— September 2017, London

Contents

1 Introduction
2 Notation and Representations
3 Segmental Recurrent Neural Networks
  3.1 Model
  3.2 Parameter Learning
  3.3 Inference with Dynamic Programming
    3.3.1 Computing Segment Embeddings
    3.3.2 Computing the Most Probable Segmentation/Labeling and Z(x)
    3.3.3 Computing Z(x, y)
  3.4 Connectionist Temporal Classification
  3.5 Experiments
    3.5.1 Online Handwriting Recognition
    3.5.2 Joint Chinese Word Segmentation and POS Tagging
    3.5.3 End-to-end Speech Recognition
  3.6 Related Work
  3.7 Conclusion
4 A Transition-based Framework for Dynamically Connected Neural Networks
  4.1 Background
  4.2 Transition Systems
  4.3 Transition Based Recurrent Networks
    4.3.1 Connecting Multiple TBRUs to Learn Shared Representations
    4.3.2 How to Train a DRAGNN
  4.4 Experiments
  4.5 Conclusion
5 Stochastic Attention and Posterior Regularization for Neural Machine Translation
  5.1 Background
  5.2 Model
    5.2.1 Marginal Likelihood and Training Objective
    5.2.2 Decoding
  5.3 Approximating the Marginal Likelihood
  5.4 Experiment: Deterministic vs. Stochastic Attention
  5.5 Posterior Regularization
  5.6 Experiment: The Effect of Posterior Regularization
  5.7 Related Work
  5.8 Conclusion
6 Conclusion and Future Work
Bibliography

Chapter 1
Introduction

Computationally modeling the structure in language is crucial for two reasons. First, for the larger goal of automated language understanding, linguists have found that language meaning is derived through composition (Manning, 2016). Understanding novel and complex sentences crucially depends on being able to construct their meaning from smaller parts/atoms (e.g., words in a sentence) compositionally. Structure in language tells us what these atoms are and how they fit together (e.g., syntactic or semantic parses) in composition. Second, from the perspective of machine learning, linguistic structures can be understood as a form of inductive bias, which helps learning succeed with less data, or with less ideal data (Mitchell, 1980). The inductive bias of a learning algorithm is the set of assumptions that the learner uses to predict outputs given inputs that it has not encountered (Mitchell, 1980). Taking a broader view, we call everything that is not estimated directly from the data inductive bias.
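As a concrete, purely illustrative example of such a structural inductive bias, the sketch below enumerates the discrete hypothesis space of a toy segmental model: every way of splitting a short input into contiguous segments, each segment assigned a label and a score. This is the kind of (segmentation, labeling) space the segmental models of Chapter 3 reason over, here with an arbitrary placeholder scorer rather than a learned one; the names score_segment, LABELS, and enumerate_hypotheses are hypothetical and not taken from the thesis.

```python
# Minimal sketch of a segmental hypothesis space (illustrative only).
# A segmental model assigns a score to each (segmentation, labeling) pair,
# rather than tagging tokens one at a time.
from itertools import product


def segmentations(n, max_len=3):
    """Yield all ways to split positions 0..n-1 into contiguous segment lengths."""
    if n == 0:
        yield []
        return
    for seg_len in range(1, min(max_len, n) + 1):
        for rest in segmentations(n - seg_len, max_len):
            yield [seg_len] + rest


LABELS = ["A", "B"]  # placeholder label inventory


def score_segment(x, start, end, label):
    # Placeholder scorer: in an SRNN this would depend on a learned
    # embedding of the segment x[start:end] and the candidate label.
    return -abs((end - start) - 2) + (0.5 if label == "B" else 0.0)


def enumerate_hypotheses(x):
    """Score every (segmentation, labeling) of x under the toy scorer."""
    hypotheses = []
    for lengths in segmentations(len(x)):
        # Convert segment lengths into (start, end) spans.
        spans, pos = [], 0
        for length in lengths:
            spans.append((pos, pos + length))
            pos += length
        for labels in product(LABELS, repeat=len(spans)):
            total = sum(score_segment(x, s, e, lab)
                        for (s, e), lab in zip(spans, labels))
            hypotheses.append((spans, labels, total))
    return hypotheses


if __name__ == "__main__":
    x = ["x1", "x2", "x3", "x4"]
    best = max(enumerate_hypotheses(x), key=lambda h: h[2])
    print("best spans:", best[0], "labels:", best[1], "score:", best[2])
```

Brute-force enumeration, as above, is exponential in the input length; the point of the dynamic programs developed in Chapter 3 is to sum or maximize over this same structured space efficiently.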