Grammar Induction and Parsing with Dependency-And-Boundary Models

Total pages: 16

File type: PDF, size: 1020 KB

GRAMMAR INDUCTION AND PARSING WITH DEPENDENCY-AND-BOUNDARY MODELS

A DISSERTATION SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE AND THE COMMITTEE ON GRADUATE STUDIES OF STANFORD UNIVERSITY IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

Valentin Ilyich Spitkovsky, December 2013

Preface

For Dan and Hayden, with even greater variances...

Unsupervised learning of hierarchical syntactic structure from free-form natural language text is an important and difficult problem, with implications for scientific goals, such as understanding human language acquisition, or engineering applications, including question answering, machine translation and speech recognition. As is the case with many unsupervised settings in machine learning, grammar induction usually reduces to a non-convex optimization problem. This dissertation proposes a novel family of head-outward generative dependency parsing models and a curriculum learning strategy, co-designed to effectively induce grammars despite local optima, by taking advantage of multiple views of data.

The dependency-and-boundary models are parameterized to exploit, as much as possible, any observable state, such as words at sentence boundaries, which limits the proliferation of optima that is ordinarily caused by the presence of latent variables. They are also flexible in their modeling of overlapping subgrammars and sensitive to different input types. These capabilities allow training data to be split into simpler text fragments, in accordance with proposed parsing constraints, thereby increasing the number of visible edges. An optimization strategy then gradually exposes learners to more complex data.

The proposed suite of constraints on possible valid parse structures, which can be extracted from unparsed surface text forms, helps guide language learners towards linguistically plausible syntactic constructions. These constraints are efficient, easy to implement and applicable to a variety of naturally-occurring partial bracketings, including capitalization changes, punctuation and web markup. Connections between traditional syntax and HTML annotations, for instance, were not previously known, and are one of several discoveries about statistical regularities in text that this thesis contributes to the science of linguistics.

Resulting grammar induction pipelines attain state-of-the-art performance not only on a standard English dependency parsing test bed, but also as judged by constituent structure metrics, in addition to a more comprehensive multilingual evaluation that spans disparate language families. This work widens the scope and difficulty of the evaluation methodology for unsupervised parsing, testing against nineteen languages (rather than just English), evaluating on all (not just short) sentence lengths, and using disjoint (blind) training and test data splits. The proposed methods also show that it is possible to eliminate commonly used supervision signals, including biased initializers, manually tuned training subsets, custom termination criteria and knowledge of part-of-speech tags, and still improve performance.
Empirical evidence presented in this dissertation strongly suggests that complex learning tasks like grammar induction can cope with non-convexity and discover more correct syntactic structures by pursuing learning strategies that begin with simple data and basic models and progress to more complex data instances and more expressive model parameterizations. A contribution to artificial intelligence more broadly is thus a collection of search techniques that make expectation-maximization and other optimization algorithms less sensitive to local optima. The proposed tools include multi-objective approaches for avoiding or escaping fixed points, iterative model recombination and “starting small” strategies that gradually improve candidate solutions, and a generic framework for transforming these and other already-found locally optimal models. Such transformations make for informed, intelligent, non-random restarts, enabling the design of comprehensive search networks that are capable of exploring combinatorial parameter spaces more rapidly and more thoroughly than conventional optimization methods.

Dedicated to my loving family.

Acknowledgements

I must thank many people who have contributed to the successful completion of this thesis, starting with the oral examination committee members: my advisor, Daniel S. Jurafsky, for his patience, encouragement and wisdom; Hayden Shaw, who has been a close collaborator at Google Research and a de facto mentor; Christopher D. Manning, from whom I learned NLP and IR; Serafim Batzoglou, with whom I coauthored my earliest academic paper; and Arthur B. Owen, who became my first guide into the world of Statistics. I am also grateful to others — professors, coauthors, coworkers, labmates, other collaborators, anonymous reviewers, recommendation letter writers, graduate program coordinators, friends, well-wishers, as well as the numerous dance, martial arts, and yoga instructors who helped to keep me (relatively) sane through it all. Any attempt to name everyone is doomed to failure. Here are the results of one such effort: Omri Abend, Eneko Agirre, Lauren Anas, Kelly Ariagno, Michael Bachmann, Prachi Balaji, Edna Barr, Alexei Barski, John Bauer, Steven Bethard, Yonatan Bisk, Anna Botelho, Stephen P. Boyd, Thorsten Brants, Helen Buendicho, Jerry Cain, Luz Castineiras, Daniel Cer, Nathanael Chambers, Cynthia Chan, Angel X. Chang, Helen Chang, Ming Chang, Pi-Chuan Chang, Jean Chao, Wanxiang Che, Johnny Chen, Renate & Ron Chestnut, Rich Chin, Yejin Choi, Daniel Clancy, John Clark, Ralph L. Cohen, Glenn Corteza, Chris Cosner, Luke Dahl, Maria David, Amir Dembo, Persi Diaconis, Kathi DiTommaso, Lynda K. Dunnigan, Jason Eisner, Song Feng, Jenny R. Finkel, Susan Fox, Michael Genesereth, Suvan Gerlach, Kevin Gimpel, Andy Golding, Leslie Gordon, Bill Graham, Spence Green, Sonal Gupta, David L.W. Hall, Krassi Harwell, Cynthia Hayashi, Julia Hockenmaier, John F. Holzrichter, Anna Kazantseva, Carla Murray Kenworthy, Steven P. Kerckhoff, Sadri Khalessi, Jam Kiattinant, Donald E. Knuth, Daphne Koller, Mikhail Kozhevnikov, Linda Kubiak, Polina Kuznetsova, Cristina N. & Homer G. Ladas, Diego Lanau, Leo Landa, Beth Levin, Eisar Lipkovitz, Claudia Lissette, Ting Liu, Sherman Lo, Gabby Magana, Kim Marinucci, Marie-Catherine de Marneffe, Felipe Martinez, Andrea McBride, David McClosky, Ryan McDonald, R. James Milgram, Elizabeth Morin, Rajeev Motwani, Andrew Y.
Ng, Natasha Ng, Barbara Nichols, Peter Norvig, Ingram Olkin, Jennifer Olson, Nick Parlante, Marius Paşca, Fernando Pereira, Lisette Perelle, Daniel Peters, Leslie Miller Peters, Slav Petrov, Ann Marie Pettigrew, Keyko Pintz, Daniel Pipes, Igor Polkovnikov, Elias Ponvert, Zuby Pradhan, Agnieszka Purves, Bala Rajaratnam, Daniel Ramage, Julian Miller Ramil, Marta Recasens, Roi Reichart, David Rogosa, Joseph P. Romano, Mendel Rosenblum, Vicente Rubio, Roy Schwartz, Xiaolin Shi, David O. Siegmund, Noah A. Smith, Richard Socher, Alfred Spector, Yun-Hsuan Sung, Mihai Surdeanu, Julie Tibshirani, Ravi Vakil, Raja Velu, Pier Voulkos, Mengqiu Wang, Tsachy Weissman, Jennifer Widom, Verna L. Wong, Adrianne Kishiyama & Bruce Wonnacott, Lowell Wood, Wei Xu, Eric Yeh, Ayano Yoneda, Alexander Zeyliger, Nancy R. Zhang, and many others of Stanford’s Natural Language Processing Group, the Computer Science, Electrical Engineering, Linguistics, Mathematics, and Statistics Departments, Aikido and Argentine Tango Clubs, as well as the Fannie & John Hertz Foundation and Google Inc.

Portions of the work presented in this dissertation were supported, in part, by research grants from the National Science Foundation (award number IIS-0811974) and the Defense Advanced Research Projects Agency’s Air Force Research Laboratory (contract numbers FA8750-09-C-0181 and FA8750-13-2-0040), as well as by a graduate Fellowship from the Fannie & John Hertz Foundation. I am grateful to these organizations, and to the organizers of the TAC, NAACL, LREC, ICGI, EMNLP, CoNLL and ACL conferences and workshops.

Last but certainly not least, I thank Jayanthi Subramanian in the Ph.D. Program Office and Ron L. Racilis at the Office of the University Registrar, who sprung into action when, at the last moment, one of the doctoral dissertation reading committee members left the state before conferring his approval with a physical signature. It’s been an exciting ride until the very end. I am grateful to (and for) all those who stood by me when life threw us curves.

Contents

Preface v
Acknowledgements viii
1 Introduction 1
2 Background 8
  2.1 The Dependency Model with Valence 12
  2.2 Evaluation Metrics 13
  2.3 English Data 15
  2.4 Multilingual Data 15
  2.5 A Note on Initialization Strategies 15
I Optimization 19
3 Baby Steps 21
  3.1 Introduction 21
  3.2 Intuition 22
  3.3 Related Work 23
  3.4 New Algorithms for the Classic Model 25
    3.4.1 Algorithm #0: Ad-Hoc∗ — A Variation on Original Ad-Hoc Initialization 25
    3.4.2 Algorithm #1: Baby Steps — An Initialization-Independent Scaffolding 26
    3.4.3 Algorithm #2: Less is More — Ad-Hoc∗ where Baby Steps Flatlines 26
    3.4.4 Algorithm #3: Leapfrog — A Practical and Efficient Hybrid Mixture 26
    3.4.5 Reference Algorithms — Baselines, a Skyline and Published Art
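The preface's curriculum strategy (Algorithm #1, Baby Steps, in the contents above) amounts to a simple training loop: start EM from a uniform model on the shortest sentences, then reuse each converged model as the initializer on the next, slightly harder data set. Below is a minimal sketch, where `init_uniform` and `em_train` are hypothetical stand-ins for a DMV-style estimator; this is an illustration of the idea, not the dissertation's code.

```python
# Sketch of a "Baby Steps" curriculum (illustrative; init_uniform and
# em_train are hypothetical stand-ins for a DMV-style EM estimator).

def baby_steps(corpus, init_uniform, em_train, max_len=45):
    """Train on sentences of length <= k, warm-starting from the k-1 model."""
    model = init_uniform()                  # no biased initializer needed
    for k in range(1, max_len + 1):
        batch = [s for s in corpus if len(s) <= k]   # gradually harder data
        if batch:
            model = em_train(model, batch)  # previous optimum seeds the next
    return model
```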
Recommended publications
  • Unsupervised Recurrent Neural Network Grammars
Unsupervised Recurrent Neural Network Grammars. Yoon Kim, Alexander Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, Gábor Melis. Code: https://github.com/harvardnlp/urnng

Language Modeling & Grammar Induction. The goal of language modeling is to assign high likelihood to held-out data; the goal of grammar induction is to learn linguistically meaningful tree structures without supervision. Incompatible? Good language modeling performance calls for few independence assumptions and flexible models (e.g. deep networks), whereas grammar induction needs strong independence assumptions for tractable training and to imbue inductive bias (e.g. context-freeness).

This Work: Unsupervised Recurrent Neural Network Grammars. Use a flexible generative model without any explicit independence assumptions (RNNG) ⇒ good LM performance. Use variational inference with a structured inference network (a CRF parser) to regularize the posterior ⇒ learn linguistically meaningful trees.

Background: Recurrent Neural Network Grammars [Dyer et al. 2016]. A structured joint generative model pθ(x, z) of a sentence x and tree z: generate the next word conditioned on the partially-completed syntax tree, a hierarchical generative process (cf. the flat generative process of an RNN). Standard RNNLMs instead use flat left-to-right generation: xt ∼ pθ(xt | x1, …, xt−1) = softmax(W ht−1 + b).
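To make the contrast between flat and hierarchical generation concrete, here is a toy sketch of the two action sequences. The action names (NT, GEN, REDUCE) follow the usual RNNG transition vocabulary, but all probability models are omitted, so this only illustrates the structure being built, not the neural parameterization.

```python
# Toy contrast between flat RNNLM generation and RNNG-style top-down
# generation. Probabilities are omitted; this only shows the structures.

flat_sequence = ["the", "cat", "eats", "tuna"]  # RNNLM: one token at a time

# RNNG builds the sentence and its tree jointly via actions:
rnng_actions = [
    ("NT", "S"),            # open an S constituent
    ("NT", "NP"),           # open an NP inside it
    ("GEN", "the"), ("GEN", "cat"),
    ("REDUCE", None),       # close NP -> (NP the cat)
    ("NT", "VP"),
    ("GEN", "eats"), ("GEN", "tuna"),
    ("REDUCE", None),       # close VP
    ("REDUCE", None),       # close S
]

def actions_to_tree(actions):
    """Replay NT/GEN/REDUCE actions into a bracketed tree string."""
    stack = []
    for act, arg in actions:
        if act == "NT":
            stack.append(["(" + arg])
        elif act == "GEN":
            stack[-1].append(arg)
        else:  # REDUCE: close the top constituent, attach it to its parent
            done = " ".join(stack.pop()) + ")"
            if stack:
                stack[-1].append(done)
            else:
                return done
    return " ".join(" ".join(s) for s in stack)

print(actions_to_tree(rnng_actions))  # (S (NP the cat) (VP eats tuna))
```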
  • Deep Learning for Source Code Modeling and Generation: Models, Applications and Challenges
    Deep Learning for Source Code Modeling and Generation: Models, Applications and Challenges TRIET H. M. LE, The University of Adelaide HAO CHEN, The University of Adelaide MUHAMMAD ALI BABAR, The University of Adelaide Deep Learning (DL) techniques for Natural Language Processing have been evolving remarkably fast. Recently, the DL advances in language modeling, machine translation and paragraph understanding are so prominent that the potential of DL in Software Engineering cannot be overlooked, especially in the field of program learning. To facilitate further research and applications of DL in this field, we provide a comprehensive review to categorize and investigate existing DL methods for source code modeling and generation. To address the limitations of the traditional source code models, we formulate common program learning tasks under an encoder-decoder framework. After that, we introduce recent DL mechanisms suitable to solve such problems. Then, we present the state-of-the-art practices and discuss their challenges with some recommendations for practitioners and researchers as well. CCS Concepts: • General and reference → Surveys and overviews; • Computing methodologies → Neural networks; Natural language processing; • Software and its engineering → Software notations and tools; Source code generation; Additional Key Words and Phrases: Deep learning, Big Code, Source code modeling, Source code generation 1 INTRODUCTION Deep Learning (DL) has recently emerged as an important branch of Machine Learning (ML) because of its incredible performance in Computer Vision and Natural Language Processing (NLP) [75]. In the field of NLP, it has been shown that DL models can greatly improve the performance of many classic NLP tasks such as semantic role labeling [96], named entity recognition [209], machine translation [256], and question answering [174].
  • GRAINS: Generative Recursive Autoencoders for Indoor Scenes
GRAINS: Generative Recursive Autoencoders for INdoor Scenes. MANYI LI, Shandong University and Simon Fraser University; AKSHAY GADI PATIL, Simon Fraser University; KAI XU, National University of Defense Technology; SIDDHARTHA CHAUDHURI, Adobe Research and IIT Bombay; OWAIS KHAN, IIT Bombay; ARIEL SHAMIR, The Interdisciplinary Center; CHANGHE TU, Shandong University; BAOQUAN CHEN, Peking University; DANIEL COHEN-OR, Tel Aviv University; HAO ZHANG, Simon Fraser University

[Fig. 1 (panels: Bedrooms, Living rooms, Kitchen): We present a generative recursive neural network (RvNN) based on a variational autoencoder (VAE) to learn hierarchical scene structures, enabling us to easily generate plausible 3D indoor scenes in large quantities and varieties (see scenes of kitchen, bedroom, office, and living room generated). Using the trained RvNN-VAE, a novel 3D scene can be generated from a random vector drawn from a Gaussian distribution in a fraction of a second.]

We present a generative neural network which enables us to generate plausible 3D indoor scenes in large quantities and varieties, easily and highly efficiently. Our key observation is that indoor scene structures are inherently hierarchical. Hence, our network is not convolutional; it is a recursive neural network, or RvNN. Using a dataset of annotated scene hierarchies, we train a variational recursive autoencoder, or RvNN-VAE, which performs scene object grouping during its encoding phase and scene generation during its decoding phase. […] learned distribution. We coin our method GRAINS, for Generative Recursive Autoencoders for INdoor Scenes. We demonstrate the capability of GRAINS to generate plausible and diverse 3D indoor scenes and compare with existing methods for 3D scene synthesis. We show applications of GRAINS including 3D scene modeling from 2D layouts, scene editing, and semantic scene segmentation via PointNet, whose performance is boosted by the large quantity and variety of 3D scenes generated by our method.
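As a rough illustration of the recursive-encoder idea (not GRAINS's actual architecture, which uses separate, trained encoders for different object groupings and also encodes geometry), here is a minimal numpy sketch that folds a scene hierarchy into one fixed-length root code:

```python
# Minimal sketch of a recursive (RvNN-style) encoder over a scene hierarchy.
# Simplifications: one shared merge layer with random weights; GRAINS's real
# encoders distinguish support/surround/co-occurrence groupings and are trained.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
W = rng.normal(scale=0.1, size=(DIM, 2 * DIM))  # merge weights
b = np.zeros(DIM)

def encode(node):
    """Leaves are object codes (vectors); internal nodes are lists of children."""
    if isinstance(node, np.ndarray):
        return node
    code = encode(node[0])
    for child in node[1:]:                    # fold children pairwise
        pair = np.concatenate([code, encode(child)])
        code = np.tanh(W @ pair + b)
    return code

bed, nightstand, lamp = (rng.normal(size=DIM) for _ in range(3))
scene = [[bed, nightstand], lamp]             # ((bed, nightstand), lamp)
print(encode(scene).shape)                    # (8,): fixed-length root code
```

A decoder would invert this fold, and the VAE would regularize the root codes toward a Gaussian so that novel scenes can be sampled from random vectors.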
  • Unsupervised Recurrent Neural Network Grammars
Unsupervised Recurrent Neural Network Grammars. Yoon Kim† Alexander M. Rush† Lei Yu³ Adhiguna Kuncoro‡³ Chris Dyer³ Gábor Melis³ (†Harvard University, ‡University of Oxford, ³DeepMind). {yoonkim,srush}@seas.harvard.edu, {leiyu,akuncoro,cdyer,melisgl}@google.com

Abstract: Recurrent neural network grammars (RNNG) are generative models of language which jointly model syntax and surface structure by incrementally generating a syntax tree and sentence in a top-down, left-to-right order. Supervised RNNGs achieve strong language modeling and parsing performance, but require an annotated corpus of parse trees. In this work, we experiment with unsupervised learning of RNNGs. Since directly marginalizing over the space of latent trees is intractable, we instead apply amortized variational inference. To maximize the evidence lower bound, we develop an inference network parameterized as a neural CRF constituency parser. On language modeling, unsupervised RNNGs perform as well as their supervised counterparts on benchmarks in English and Chinese.

The standard setup for unsupervised structure learning is to define a generative model pθ(x, z) over observed data x (e.g. a sentence) and unobserved structure z (e.g. a parse tree or part-of-speech sequence), and to maximize the log marginal likelihood log pθ(x) = log Σz pθ(x, z). Successful approaches to unsupervised parsing have made strong conditional independence assumptions (e.g. context-freeness) and employed auxiliary objectives (Klein and Manning, 2002) or priors (Johnson et al., 2007). These strategies imbue the learning process with inductive biases that guide the model to discover meaningful structures while allowing tractable algorithms for marginalization; however, they come at the expense of language modeling performance, particularly compared to sequential neural models that make no independence assumptions.
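The objective sketched in the excerpt can be made explicit. Since log pθ(x) = log Σz pθ(x, z) is intractable over latent trees, an inference network qφ(z | x) (the neural CRF constituency parser) is trained to maximize the standard evidence lower bound; this is the generic amortized-VI bound implied by the text, not a formula copied from the paper:

```latex
\log p_\theta(x) \;=\; \log \sum_{z} p_\theta(x, z)
\;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x, z)\big]
\;+\; \mathbb{H}\big[q_\phi(z \mid x)\big]
\;=\; \mathrm{ELBO}(\theta, \phi; x)
```

Maximizing over φ tightens the bound (the gap is KL(qφ(z | x) ‖ pθ(z | x))), while maximizing over θ improves the language model itself.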
  • Semantic Analysis of Multi Meaning Words Using Machine Learning and Knowledge Representation by Marjan Alirezaie
Institutionen för datavetenskap / Department of Computer and Information Science

Final Thesis: Semantic Analysis of Multi Meaning Words Using Machine Learning and Knowledge Representation, by Marjan Alirezaie. LiU/IDA-EX-A--11/011--SE, 2011-04-10. Linköpings universitet, SE-581 83 Linköping, Sweden. Supervisor, Examiner: Professor Erik Sandewall, Dept. of Computer and Information Science at Linköpings Universitet

Abstract: The present thesis addresses machine learning in a domain of natural-language phrases that are names of universities. It describes two approaches to this problem and a software implementation that has made it possible to evaluate them and to compare them. In general terms, the system's task is to learn to 'understand' the significance of the various components of a university name, such as the city or region where the university is located, the scientific disciplines that are studied there, or the name of a famous person which may be part of the university name. A concrete test for whether the system has acquired this understanding is when it is able to compose a plausible university name given some components that should occur in the name.
  • Translational NLP: A New Paradigm and General Principles for Natural Language Processing Research (arXiv:2104.07874v1 [cs.CL], 16 Apr 2021)
Translational NLP: A New Paradigm and General Principles for Natural Language Processing Research. Denis Newman-Griffis¹, Jill Fain Lehman², Carolyn Rosé³, Harry Hochheiser¹. ¹Department of Biomedical Informatics, University of Pittsburgh, USA; ²Human-Computer Interaction Institute, Carnegie Mellon University, USA; ³Language Technologies Institute, Carnegie Mellon University, USA. {dnewmangriffis, harryh}@pitt.edu, {jfl, cprose}@cs.cmu.edu

Abstract: Natural language processing (NLP) research combines the study of universal principles, through basic science, with applied science targeting specific use cases and settings. However, the process of exchange between basic NLP and applications is often assumed to emerge naturally, resulting in many innovations going unapplied and many important questions left unstudied. We describe a new paradigm of Translational NLP, which aims to facilitate the processes by which basic and applied NLP research inform one another. Translational NLP thus presents a third research paradigm, focused on under…

[Figure 1: Interactions between linguistic theory, model development, and applications in NLP research. Solid lines indicate moving from basic research to applications, and dashed lines indicate how applied research feeds back into basic study. Diagram nodes: Linguistics, Modeling, Applications, Translational NLP.]
  • Max-Margin Synchronous Grammar Induction for Machine Translation
Max-Margin Synchronous Grammar Induction for Machine Translation. Xinyan Xiao and Deyi Xiong, School of Computer Science and Technology, Soochow University, Suzhou 215006, China. [email protected], [email protected]

Abstract: Traditional synchronous grammar induction estimates parameters by maximizing likelihood, which only has a loose relation to translation quality. Alternatively, we propose a max-margin estimation approach to discriminatively inducing synchronous grammars for machine translation, which directly optimizes translation quality measured by BLEU. In the max-margin estimation of parameters, we only need to calculate Viterbi translations. This further facilitates the incorporation of various non-local features that are defined on the target side. We test the effectiveness of our max-margin estimation framework on a competitive hierarchical phrase-based system. Experiments show that our max-margin method significantly outperforms the traditional two-…

From the introduction: … is not elegant theoretically. It brings an undesirable gap that separates modeling and learning in an SMT system. Therefore, researchers have proposed alternative approaches to learning synchronous grammars directly from sentence pairs without word alignments, via generative models (Marcu and Wong, 2002; Cherry and Lin, 2007; Zhang et al., 2008; DeNero et al., 2008; Blunsom et al., 2009; Cohn and Blunsom, 2009; Neubig et al., 2011; Levenberg et al., 2012) or discriminative models (Xiao et al., 2012). Theoretically, these approaches describe how sentence pairs are generated by applying sequences of synchronous rules in an elegant way. However, they learn synchronous grammars by maximizing likelihood, which only has a loose relation to translation quality (He and Deng, 2012).
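The max-margin idea can be written as a generic structured hinge objective with a BLEU-based cost. The following is the textbook form, offered as an assumed reading of the excerpt rather than the paper's exact formulation:

```latex
\min_{\mathbf{w}} \;\; \frac{1}{2}\lVert \mathbf{w} \rVert^{2}
\;+\; C \sum_{i} \Big( \max_{d} \big[ \mathbf{w}^{\top} \mathbf{f}(x_i, d)
\;+\; \Delta_{\mathrm{BLEU}}(d, d_i^{*}) \big]
\;-\; \mathbf{w}^{\top} \mathbf{f}(x_i, d_i^{*}) \Big)
```

Here d ranges over synchronous derivations of source sentence xᵢ, dᵢ* is an oracle derivation yielding the reference translation, f is a feature vector, and Δ_BLEU penalizes derivations whose translations score poorly under BLEU. The inner max is a cost-augmented Viterbi search, which matches the excerpt's point that only Viterbi translations need to be computed during training.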
  • Compound Probabilistic Context-Free Grammars for Grammar Induction
Compound Probabilistic Context-Free Grammars for Grammar Induction. Yoon Kim (Harvard University, Cambridge, MA, USA), Chris Dyer (DeepMind, London, UK), Alexander M. Rush (Harvard University, Cambridge, MA, USA). [email protected], [email protected], [email protected]

Abstract: We study a formalization of the grammar induction problem that models sentences as being generated by a compound probabilistic context-free grammar. In contrast to traditional formulations which learn a single stochastic grammar, our context-free rule probabilities are modulated by a per-sentence continuous latent variable, which induces marginal dependencies beyond the traditional context-free assumptions. Inference in this grammar is performed by collapsed variational inference, in which an amortized variational posterior is placed on the continuous variable, and the latent trees are marginalized with dynamic programming. Experiments on English and Chinese show the effectiveness of our approach compared to recent state-of-the-art methods for grammar induction from words with neural language models.

From the introduction: … non-parametric models (Kurihara and Sato, 2006; Johnson et al., 2007; Liang et al., 2007; Wang and Blunsom, 2013), and manually-engineered features (Huang et al., 2012; Golland et al., 2012) to encourage the desired structures to emerge. We revisit these aforementioned issues in light of advances in model parameterization and inference. First, contrary to common wisdom, we find that parameterizing a PCFG's rule probabilities with neural networks over distributed representations makes it possible to induce linguistically meaningful grammars by simply optimizing log likelihood. While the optimization problem remains non-convex, recent work suggests that there are optimization benefits afforded by over-parameterized models (Arora et al., 2018; Xu et al., 2018; Du et al., 2019), and we indeed find that this neural PCFG is significantly…
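In the notation suggested by the excerpt, a compound PCFG draws a per-sentence latent vector and lets it modulate all rule probabilities. The following is a schematic rendering of that generative process; the symbol choices are ours, not quoted from the paper:

```latex
\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}), \qquad
\pi_{r}(\mathbf{z}) \;\propto\; \exp\big( g_{r}(\mathbf{z}; \theta) \big), \qquad
p_\theta(x) \;=\; \int p(\mathbf{z}) \sum_{t \in \mathcal{T}(x)} \prod_{r \in t} \pi_{r}(\mathbf{z}) \, d\mathbf{z}
```

Here g_r is a neural scoring function, π_r is normalized over rules sharing a left-hand side, and t ranges over parse trees of x. With z fixed, the inner sum is an ordinary PCFG marginal computable by the inside algorithm (the dynamic programming mentioned above); the intractable outer integral is what collapsed amortized variational inference handles.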
  • Ethical and Societal Issues in National Security Applications of Emerging Technologies
Ethical and Societal Issues in National Security Applications of Emerging Technologies. Herb Lin, Study Director, Computer Science and Telecommunications Board, National Research Council

Charge: The National Academies will develop a consensus report on the topic of ethical, legal, and societal issues relating to research on, development, and use of democratized and rapidly-changing technologies with potential military application, such as information technologies, synthetic biology, and nanotechnology. This report will articulate a framework for policy makers, institutions, and individual researchers to think about such issues as they relate to these technologies of military relevance and, to the extent feasible, make recommendations for how each of these groups should approach these considerations in their research activities. A workshop to be held as early as practical in the study would be convened to obtain perspectives and foster discussion on these matters. A final report would be issued within 21 months of the project start providing the National Research Council's findings and recommendations. Funding agency: DARPA, at the request of the director.

Committee:
  • William Ballhaus (Co-Chair, NAE), Retired President and CEO, The Aerospace Corporation
  • Jean-Lou Chameau (Co-Chair, NAE), President, California Institute of Technology
  • Bran Ferren, Co-founder and Chief Creative Officer, Applied Minds
  • James Moor, Professor of Intellectual and Moral Philosophy, Dartmouth College
  • Jonathan Moreno (IOM), Professor of Philosophy, University of Pennsylvania
  • Joel Moses,
  • Grammar Induction
Grammar Induction. Regina Barzilay, MIT, October 2005

Three non-NLP questions:
1. Which is the odd number out? 625, 361, 256, 197, 144
2. Insert the missing letter: B, E, ?, Q, Z
3. Complete the following number sequence: 4, 6, 9, 13; 7, 10, 15, ?

How do you solve these questions? Guess a pattern that generates the sequence, then select a solution based on the detected pattern. For the missing letter, B, E, ?, Q, Z correspond to alphabet positions 2, 5, ?, 17, 26, i.e. k² + 1; for k = 3 this gives 10, the 10th letter of the alphabet → J.

More Patterns to Decipher: Byblos Script. [Image removed for copyright reasons.]

More Patterns to Decipher: Lexicon Learning. Ourenemiesareinnovativeandresourceful,andsoarewe. Theyneverstopthinkingaboutnewwaystoharmourcountry andourpeople,andneitherdowe. Which is the odd word out? Ourenemies . Enemies . We .

More Patterns to Decipher: Natural Language Syntax. Which is the odd sentence out? The cat eats tuna. The cat and the dog eats tuna.

Today:
  • Vocabulary Induction: word boundary detection
  • Grammar Induction: feasibility of language acquisition; algorithms for grammar induction

Vocabulary Induction. Task: unsupervised learning of word boundary segmentation. Simple: Ourenemiesareinnovativeandresourceful,andsoarewe. Theyneverstopthinkingaboutnewwaystoharmourcountry andourpeople,andneitherdowe. More ambitious: [image of Byblos script removed for copyright reasons].

Word Segmentation (Ando & Lee, 2000). Key idea: for each candidate boundary, compare the frequency of the n-grams adjacent to the proposed boundary with the frequency of the n-grams that straddle it. [Diagram: a candidate boundary inside the character string T I N G E V I D, with non-straddling n-grams s1, s2 on either side and straddling n-grams t1, t2, t3.] For n = 4, consider the 6 questions of the form: "Is #(sᵢ) ≥ #(tⱼ)?", where #(x) is the number of occurrences of x. Example: is "TING" more frequent in the corpus than "INGE"?

Algorithm for word segmentation. Notation: s1 is the non-straddling n-gram to the left of location k; s2 is the non-straddling n-gram to the right of location k; tⱼ is the straddling n-gram with j characters to the right of location k; I≥(y, z) is an indicator function that is 1 when y ≥ z, and 0 otherwise.
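A compact sketch of the boundary-voting idea described above: counts of non-straddling versus straddling n-grams vote on each candidate location. The parameter choices here (a single n-gram order and a simple majority threshold) are simplifications of Ando & Lee's full algorithm, which combines several orders.

```python
# Sketch of Ando & Lee (2000)-style boundary detection (simplified: one
# n-gram order and a majority vote; the full algorithm combines orders).
from collections import Counter

def segment(text, n=4, threshold=0.5):
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    boundaries = []
    for k in range(n, len(text) - n):          # candidate boundary before text[k]
        s1, s2 = text[k - n:k], text[k:k + n]  # non-straddling n-grams
        straddling = [text[k - n + j:k + j] for j in range(1, n)]  # t_1..t_{n-1}
        votes = sum(counts[s] >= counts[t]     # the 2*(n-1) questions
                    for s in (s1, s2) for t in straddling)
        if votes / (2 * (n - 1)) > threshold:  # majority says: boundary here
            boundaries.append(k)
    return boundaries

corpus = "ourenemiesareinnovativeandresourcefulandsoarewe"
print(segment(corpus))  # indices where n-gram evidence favors a word break
```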
  • Dependency Grammar Induction with a Neural Variational Transition-Based Parser
The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19). Dependency Grammar Induction with a Neural Variational Transition-Based Parser. Bowen Li, Jianpeng Cheng, Yang Liu, Frank Keller. Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, UK. {bowen.li, jianpeng.cheng, yang.liu}@ed.ac.uk, [email protected]

Abstract: Dependency grammar induction is the task of learning dependency syntax without annotated training data. Traditional graph-based models with global inference achieve state-of-the-art results on this task but they require O(n³) run time. Transition-based models enable faster inference with O(n) time complexity, but their performance still lags behind. In this work, we propose a neural transition-based parser for dependency grammar induction, whose inference procedure utilizes rich neural features with O(n) time complexity. We train the parser with an integration of variational inference, posterior regularization and variance reduction techniques.

From the introduction: Though graph-based models achieve impressive results, their inference procedure requires O(n³) time complexity. Meanwhile, features in graph-based models must be decomposable over substructures to enable dynamic programming. In comparison, transition-based models allow faster inference with linear time complexity and richer feature sets. Although relying on local inference, transition-based models have been shown to perform well in supervised parsing (Kiperwasser and Goldberg 2016; Dyer et al. 2015). However, unsupervised transition parsers are not well-studied. One exception is the work of Rasooli and Faili (2012), in which search-based structure prediction (Daumé III 2009) is…
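For readers unfamiliar with why transition-based parsing is O(n): a sentence of n words is consumed by at most 2n shift/reduce actions. Below is a minimal arc-standard transition system, the generic textbook system rather than necessarily the one used in the paper, with actions chosen by a stub policy where a trained (here, variationally trained) model would plug in.

```python
# Minimal arc-standard dependency parser: O(n) actions for an n-word sentence.
# The `policy` stub is where a learned (e.g. neural variational) model would go.

def parse(words, policy):
    stack, buffer, arcs = [], list(range(len(words))), []
    while buffer or len(stack) > 1:
        action = policy(stack, buffer)
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "LEFT-ARC":            # top becomes head of second-top
            head, dep = stack[-1], stack.pop(-2)
            arcs.append((head, dep))
        else:                                 # RIGHT-ARC: second-top heads top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

# Toy policy: shift everything, then attach each word to its left neighbor.
policy = lambda stack, buffer: "SHIFT" if buffer else "RIGHT-ARC"
print(parse(["the", "cat", "eats", "tuna"], policy))
# [(2, 3), (1, 2), (0, 1)]: a chain of (head, dependent) arcs
```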
  • Are Pre-Trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction
Published as a conference paper at ICLR 2020. Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction. Taeuk Kim¹, Jihun Choi¹, Daniel Edmiston² & Sang-goo Lee¹. ¹Dept. of Computer Science and Engineering, Seoul National University, Seoul, Korea; ²Dept. of Linguistics, University of Chicago, Chicago, IL, USA. {taeuk,jhchoi,sglee}@europa.snu.ac.kr, [email protected]

Abstract: With the recent success and popularity of pre-trained language models (LMs) in natural language processing, there has been a rise in efforts to understand their inner workings. In line with such interest, we propose a novel method that assists us in investigating the extent to which pre-trained LMs capture the syntactic notion of constituency. Our method provides an effective way of extracting constituency trees from the pre-trained LMs without training. In addition, we report intriguing findings in the induced trees, including the fact that some pre-trained LMs outperform other approaches in correctly demarcating adverb phrases in sentences.

1 Introduction. Grammar induction, which is closely related to unsupervised parsing and latent tree learning, allows one to associate syntactic trees, i.e., constituency and dependency trees, with sentences. As grammar induction essentially assumes no supervision from gold-standard syntactic trees, the existing approaches for this task mainly rely on unsupervised objectives, such as language modeling (Shen et al., 2018b; 2019; Kim et al., 2019a;b) and cloze-style word prediction (Drozdov et al., 2019), to train their task-oriented models. On the other hand, there is a trend in the natural language processing (NLP) community of leveraging pre-trained language models (LMs), e.g., ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), as a means of acquiring contextualized word representations.
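One common way in this line of work to turn per-position scores from a pre-trained LM into an unlabeled constituency tree, offered here as an assumed illustration rather than the paper's exact procedure, is to split recursively at the largest "syntactic distance" between adjacent words:

```python
# Hedged sketch: build an unlabeled binary constituency tree by recursively
# splitting at the largest gap score between adjacent words. The gap scores
# would come from a pre-trained LM (e.g. distances between its contextualized
# representations); here they are supplied by hand for illustration.

def build_tree(words, gaps):
    """gaps[i] scores the boundary between words[i] and words[i+1]."""
    if len(words) == 1:
        return words[0]
    split = max(range(len(gaps)), key=gaps.__getitem__)  # strongest boundary
    left = build_tree(words[:split + 1], gaps[:split])
    right = build_tree(words[split + 1:], gaps[split + 1:])
    return (left, right)

words = ["the", "cat", "sat", "down"]
gaps = [0.2, 0.9, 0.4]          # hypothetical LM-derived distances
print(build_tree(words, gaps))  # (('the', 'cat'), ('sat', 'down'))
```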