Forgotten Islands.Key

Total Pages: 16

File Type: PDF, Size: 1020 KB

Forgotten Islands.Key

FORGOTTEN ISLANDS OF REGULARITY
REGULAR UNIVERSE OF LANGUAGE MODELS AND ITS CONTINUING EXPANSION
Anssi Yli-Jyrä, 18/12/2017

LOCALITY & REGULARITY
Markov chains; Kleene closure, union, catenation.
[Figure: Moore machine block diagram, https://upload.wikimedia.org/wikipedia/commons/1/1f/Moore-Automat-en.svg: the input Σ feeds the transition logic T, which updates the state memory (S_n to S_{n+1}, driven by clock and reset); the output logic G maps the state to the output Λ. An RNN has the same overall shape: transition logic, finite-state memory, and output logic.]

TRIVIAL FINITE-STATE PHONOLOGY
GENERATIVE PHONOLOGY (Chomsky & Halle 1968)
• Universal computing (Chomsky 1963, Johnson 1972, Ristad 1990)
• Computes PARTIAL functions
• Problematic as a theory (Popper 1959, Johnson 1972)
NAIVE FINITE-STATE PHONOLOGY
• Right-linear derivation α → βγ → ββ'γ' → …
• Based on a limited view of regularity; not linguistically intriguing
TRUE FINITE-STATE PHONOLOGY
NON-ITERATED FUNCTIONAL RULES (Johnson 1972)
• Generative phonological rules have context conditions: α → β / γ _ γ'
• Practical grammars with simultaneous and linear application modes
• Test contexts with a bi(directional) machine (Schützenberger 1961)
• Surprisingly reduced linguistically interesting rules into FS machines
LIMITATION: no cyclic rules, but composition (Schützenberger 1961)

FINITE-STATE UNIVERSAL MODELS
• BRAIN COMPATIBLE • PRACTICAL • DECIDABLE • EFFICIENT • GOOD THEORY • ALGORITHMIC • ADEQUATE • REGULAR • SAFE

BUT ITERATED DERIVATION IN HUNSPELL GOES BEYOND KAPLAN & KAY (1994)

(4) just backs up until it has tested the precondition. In our example, the precondition is just the suffix [C][y][T]:

(4.1)  $ # g l o s s y T $ $ $ $
(4.2)  (4)
(4.3)  s i e s t # $

With this change, long words are produced in a zigzag style (5) where every rule application may back up some letters. [Diagram (5): zigzag derivation.]

Since the union of the affix rules is applied repeatedly to its own output, the standard two-part regularity condition of phonological grammars does not apply. However, as long as the derivation deletes and appends new material only at the right end of the string, the resulting process is linear and, intuitively, a regular grammar. In addition, the moves taken by the TM can now be deterministic because the machine does not completely rewind the tape at any point but always makes relative moves that allow it to remember its previous position.
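The backing-up-and-appending step described above can be pictured with a small sketch. It is a simplification for illustration only, not hunspell's actual rule format: the precondition [C][y][T] is reduced to checking for a final "y", and the rule below is hypothetical. A rule tests its precondition at the right end of the word, strips a few letters, and appends new material, so an iterated derivation proceeds in the zigzag fashion described above.

```python
# Minimal sketch of a backing-up suffix rule (an illustration, not hunspell's
# actual rule format): test a precondition at the right end of the word,
# back up over a few letters, and append new material.

from typing import NamedTuple, Optional

class SuffixRule(NamedTuple):
    precondition: str   # suffix that must be present (tested by backing up)
    strip: int          # number of letters removed at the right end
    append: str         # material appended at the right end

def apply_rule(word: str, rule: SuffixRule) -> Optional[str]:
    """Return the rewritten word, or None if the precondition fails."""
    if not word.endswith(rule.precondition):
        return None
    return word[:len(word) - rule.strip] + rule.append

# Hypothetical rule standing in for the [C][y][T] example in the text:
# back up over the final "y" and append the superlative ending "iest".
superlative = SuffixRule(precondition="y", strip=1, append="iest")

derivation = ["glossy"]
step = apply_rule(derivation[-1], superlative)
if step is not None:
    derivation.append(step)
print(" -> ".join(derivation))   # glossy -> glossiest
```

Because each rule only inspects and rewrites the right end of the string, iterating a union of such rules never needs to rewind to the left end, which is the informal reason the process stays regular.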
2.3 Linear Encoding

Although the grammar represented by a hunspell lexicon does not satisfy the classical two-part condition of finite-state phonology, it is equivalent to a finite-state transducer when restricted to the suffix rules. There are now some methods to compile hunspell lexicons to finite-state transducers. Early experiments on compilation are due to Gyorgy Gyepesi (p.c., 2007) and others in Budapest. The author developed his solution (Yli-Jyrä, 2009) using a variant of Two-Level Morphology (Koskenniemi, 1983). This method viewed the lexicon as a collection of constraints that described linearly encoded backing up and suffixation in derivations. The method included an efficient one-shot compilation algorithm to compile and intersect several hundreds of thousands of lexical context restriction rules in parallel, as if the lexical continuations (morphotaxis) were phonological constraints. A similar method, finally implemented by his colleagues, Pirinen and Lindén (2010), separated the lexical continuations from the phonological changes at morpheme boundaries and used a three-step approach where the final step composed the lexicon with the …

László Németh, Viktor Trón, Péter Halácsy, András Kornai, András Rung and István Szakadát: Leveraging the open source ispell codebase for minority language analysis. Proceedings of the SALTMIL Workshop at LREC 2004.

GOAL: CHARACTERIZE ALL REGULAR GRAMMARS AND LANGUAGE MODELS

BOUNDS OF REGULARITY
• REGULARITY MEANS FINITE PARALLELISM
• REGULARITY MEANS LINEAR BOUNDED SPACE
• REGULARITY MEANS FINITE COMPOSITION
• AND …
[Slide images: visible traces of a writing head (People's Daily Online © CEN); a spider web with a bounded number of outer links (CC-BY-SA Xvazquez).]
Kornai & Tuza 1992: Narrowness, path-width and their application in NLP.

REGULARITY À LA HENNIE (1965)
• LINEAR-TIME ONE-TAPE TURING MACHINE
• O(k) CONTROL STATES (BOUNDED PARALLELISM)
• O(n) TAPE CELLS (BOUNDED SPACE)
• O(n) TIME STEPS (BOUNDED TIME)
• CAN BE NONDETERMINISTIC (TADAKI ET AL. 2010)

Excerpt from "MSO Definable String Transductions" (ACM Transactions on Computational Logic, Vol. 2, No. 2, April 2001), p. 247:

[Fig. 8: Track for ⊢ a³b²aba ⊣, Example 9.]

Finally, the first visiting sequence of a computation should start with a visit (+ϵ, q_in, ∗, α), and exactly one visiting sequence should end with a visit (−ϵ, q_f, ∗, λ). Since the number of visits to each position is bounded, the visiting sequences come from a finite set, and we can interpret these sequences as symbols from a finite alphabet. Each k-visiting computation is specified by a string over this alphabet, and we will call these strings k-tracks; e.g., the 3-track in Figure 8 specifies the computation of the Hennie machine of Example 9 on input a³b²aba.
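As an illustration of the finite-visit condition behind these k-tracks, here is a minimal sketch under simplified assumptions (it is not the construction of the excerpted paper): record the (time, state) visits of each tape cell during a run and check that no cell is visited more than k times; the per-cell visit lists are exactly the visiting sequences that the k-track strings are built from.

```python
# Minimal sketch of the finite-visit (Hennie) condition: given the head
# trajectory of a one-tape machine run, collect the visiting sequence of each
# tape cell and check that no cell is visited more than k times.

from collections import defaultdict

def visiting_sequences(trajectory):
    """trajectory: list of (position, state) pairs in the order they occur."""
    visits = defaultdict(list)
    for time, (pos, state) in enumerate(trajectory):
        visits[pos].append((time, state))
    return dict(visits)

def is_k_visiting(trajectory, k):
    return all(len(seq) <= k for seq in visiting_sequences(trajectory).values())

# Hypothetical zigzag run over five cells: linear time, every cell visited
# at most three times, so the run is 3-visiting.
run = [(0, "q0"), (1, "q1"), (2, "q1"), (1, "q2"), (2, "q2"),
       (3, "q1"), (2, "q3"), (3, "q2"), (4, "q1")]
print(is_k_visiting(run, k=3))   # True
```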
It should be obvious from the above remarks that the language of such specifications is regular (e.g., see Lemma 2.2 of Greibach [1978c], or Lemma 1 of Chytil and Jákl [1977]). For instance, it is the heart of the proof in Hopcroft and Ullman [1979, Theorem 2.5] of the result that two-way finite-state automata are equivalent to their one-way counterparts [Rabin and Scott 1959; Shepherdson 1959].

PROPOSITION 23. Let M be a Hennie machine, and let k be a constant. The k-tracks for successful k-visiting computations of M form a regular language.

7.2 Characterizations Using Hennie Machines

From Proposition 23, using standard techniques (e.g., see Chytil and Jákl [1977, Lemma 1]), we obtain the following decomposition of nondeterministic Hennie transductions. Note that this decomposition already features in Theorem 20 as characterization of NMSOS.

LEMMA 24. NHM ⊆ MREL ∘ 2DGSM = NMSOS.

PROOF. Let M be a Hennie machine, finite-visit for constant k; each pair (w, z) in the transduction realized by M can be computed by a k-visiting computation. We may decompose the behavior of M on input w as follows. First, a relabeling of ⊢ w ⊣ guesses a string of k-visiting sequences, one for each position of the input tape, such that the first symbol of each visiting sequence matches the input symbol of the corresponding tape position. Then, a 2DGSM verifies in a left-to-right scan whether the string specifies a valid computation, a track, of M for w, cf. Proposition 23. If this is the case, the 2DGSM returns to the left end marker ⊢ and simulates M on this input, following the k-visiting computation previously guessed.

REGULARITY KEEPS SURPRISING
A. HENNIE (1965) GOES BEYOND THE CLASSICAL FS PHONOLOGY
1. Restricted application (Johnson 1972; Kaplan & Kay 1994)
2. Iterated application in Hunspell (Németh et al. 2004)
B. HENNIE (1965) IS RELEVANT TO THE REPRESENTATION OF
3. Syntax (Nederhof & Yli-Jyrä 2017; Yli-Jyrä 2017a, 2017b)
4. Semantics and pragmatics (Gordon & Hobbs 2017; Kornai 2017 manus.)
5. RNN, including backpropagation

Table 1: The coverage of UD v2 data with depth-bounded weak edge bracketing

lang      N       depth 0   depth 1   depth 2   depth 3   depth 4   depth 5   depth 6        depth 7
Arabic    26722   4.42%     20.93%    65.09%    94.39%    99.64%    99.99%    +0.011% (3)
Catalan   14832   1.27%     19.01%    70.39%    96.07%    99.62%    99.99%    +0.007% (1)
Czech     102660  10.60%    43.11%    86.77%    98.47%    99.89%    99.99%    +0.010% (10)
German    14917   2.06%     43.11%    87.52%    98.56%    99.91%    99.97%    +0.027% (4)
English   19785   16.61%    53.59%    91.10%    99.21%    99.96%    100.00%
Spanish   31546   1.59%     24.61%    77.27%    97.24%    99.79%
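Table 1 reports, for each treebank, the share of sentences whose weak edge bracketing stays within a given nesting depth. The sketch below shows how such coverage-by-depth figures can be computed; it assumes a plain balanced-bracket encoding and hypothetical example strings, not the exact weak edge bracketing of the cited work.

```python
# Minimal sketch: coverage of a corpus by bracket-nesting depth, assuming a
# simplified balanced-bracket encoding (not the exact weak edge bracketing).

def bracket_depth(encoding: str) -> int:
    """Maximal nesting depth of a balanced bracket string."""
    depth = max_depth = 0
    for ch in encoding:
        if ch == "[":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "]":
            depth -= 1
    return max_depth

def coverage_at_depth(encodings, k: int) -> float:
    """Fraction of sentences whose bracketing stays within depth k."""
    within = sum(1 for e in encodings if bracket_depth(e) <= k)
    return within / len(encodings)

sample = ["[[][]]", "[[[]]]", "[][]"]          # hypothetical sentence encodings
print(f"{coverage_at_depth(sample, 2):.0%}")   # 67%: one sentence needs depth 3
```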
Recommended publications
  • arXiv:1908.07448v1
    Evaluating Contextualized Embeddings on 54 Languages in POS Tagging, Lemmatization and Dependency Parsing
    Milan Straka and Jana Straková and Jan Hajič
    Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics
    {strakova,straka,hajic}@ufal.mff.cuni.cz

    Abstract: We present an extensive evaluation of three recently proposed methods for contextualized embeddings on 89 corpora in 54 languages of the Universal Dependencies 2.3 in three tasks: POS tagging, lemmatization, and dependency parsing. Employing BERT, Flair and ELMo as pretrained embedding inputs in a strong baseline of UDPipe 2.0, one of the best-performing systems of the CoNLL 2018 Shared Task and an overall winner of the EPE 2018, we present a one-to-one comparison of the three contextualized word embedding methods, as well as a comparison with word2vec-like pretrained embeddings and with end-to-end character-level word embeddings. We report state-of-the-art results in all three tasks as compared to results […]

    • […] Shared Task (Zeman et al., 2018).
    • We report our best results on UD 2.3. The addition of contextualized embeddings brings improvements ranging from 25% relative error reduction for English treebanks, through 20% relative error reduction for high-resource languages, to 10% relative error reduction for all UD 2.3 languages which have a training set.

    2 Related Work: A new type of deep contextualized word representation was introduced by Peters et al. (2018). The proposed embeddings, called ELMo, were obtained from internal states of a deep bidirectional language model, pretrained on a large text corpus. Akbik et al. […]
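The setup described in the abstract, contextualized embeddings used as additional pretrained inputs next to word2vec-like and character-level embeddings, can be sketched as a simple concatenation of per-token vectors. This is an illustrative simplification with made-up dimensions, not the actual UDPipe 2.0 architecture.

```python
# Minimal sketch: contextualized embeddings (e.g. from BERT, Flair or ELMo)
# added as extra pretrained inputs by concatenating them with word2vec-like
# and character-level word embeddings. Dimensions are hypothetical.

import numpy as np

def build_token_inputs(word_emb, char_emb, contextual_emb):
    """Each argument has shape (n_tokens, dim); returns (n_tokens, sum of dims)."""
    return np.concatenate([word_emb, char_emb, contextual_emb], axis=1)

n_tokens = 5
word_emb = np.random.randn(n_tokens, 300)        # word2vec-like vectors
char_emb = np.random.randn(n_tokens, 128)        # character-level word vectors
contextual_emb = np.random.randn(n_tokens, 768)  # one contextual vector per token

inputs = build_token_inputs(word_emb, char_emb, contextual_emb)
print(inputs.shape)   # (5, 1196)
```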
  • Extended and Enhanced Polish Dependency Bank in Universal Dependencies Format
    Extended and Enhanced Polish Dependency Bank in Universal Dependencies Format
    Alina Wróblewska
    Institute of Computer Science, Polish Academy of Sciences, ul. Jana Kazimierza 5, 01-248 Warsaw, Poland

    Abstract: The paper presents the largest Polish Dependency Bank in Universal Dependencies format – PDBUD – with 22K trees and 352K tokens. PDBUD builds on its previous version, i.e. the Polish UD treebank (PL-SZ), and contains all 8K PL-SZ trees. The PL-SZ trees are checked and possibly corrected in the current edition of PDBUD. Further 14K trees are automatically converted from a new version of Polish Dependency Bank. The PDBUD trees are expanded with the enhanced edges encoding the shared dependents and the shared governors of the coordinated conjuncts and with the semantic roles of some dependents. The conducted evaluation experiments show that PDBUD is large enough for training a high-quality graph-based dependency parser for Polish.

    […] even for languages with rich morphology and relatively free word order, such as Polish. The supervised learning methods require gold-standard training data, whose creation is a time-consuming and expensive process. Nevertheless, dependency treebanks have been created for many languages, in particular within the Universal Dependencies initiative (UD, Nivre et al., 2016). The UD leaders aim at developing a cross-linguistically consistent tree annotation schema and at building a large multilingual collection of dependency treebanks annotated according to this schema. Polish is also represented in the Universal Dependencies collection. There are two Polish treebanks in UD: the Polish UD treebank (PL-SZ) converted from Składnica zależnościowa […]
  • Universal Dependencies According to BERT: Both More Specific and More General
    Universal Dependencies According to BERT: Both More Specific and More General
    Tomasz Limisiewicz and David Mareček and Rudolf Rosa
    Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
    {limisiewicz,rosa,marecek}@ufal.mff.cuni.cz

    Abstract: This work focuses on analyzing the form and extent of syntactic abstraction captured by BERT by extracting labeled dependency trees from self-attentions. Previous work showed that individual BERT heads tend to encode particular dependency relation types. We extend these findings by explicitly comparing BERT relations to Universal Dependencies (UD) annotations, showing that they often do not match one-to-one. We suggest a method for relation identification and syntactic tree construction. Our approach produces significantly more consistent dependency trees than previous work, showing that it better explains the syntactic abstractions in BERT. At the same time, it can be successfully applied with only a minimal amount of supervision and generalizes well across languages.

    […] In Transformer-based systems, particular heads tend to capture specific dependency relation types (e.g. in one head the attention at the predicate is usually focused on the nominal subject). We extend understanding of syntax in BERT by examining the ways in which it systematically diverges from standard annotation (UD). We attempt to bridge the gap between them in three ways:
    • We modify the UD annotation of three linguistic phenomena to better match the BERT syntax (§3)
    • We introduce a head ensemble method, combining multiple heads which capture the same dependency relation label (§4)
    • We observe and analyze multipurpose heads, containing multiple syntactic functions (§7)
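The basic idea referenced above, reading dependency structure off self-attention weights, can be sketched as follows. This is only the simplest baseline (pick, for every token, the token it attends to most as a candidate head), not the paper's head-ensemble method; the attention matrix is made up.

```python
# Minimal sketch: candidate syntactic heads read off one self-attention head
# by taking, for each token, the most-attended-to token. Illustrative only.

import numpy as np

def attention_to_heads(attn: np.ndarray) -> list:
    """attn[i, j] = attention paid by token i to token j (one head, one layer).
    Returns, for each token i, the index of its most-attended token."""
    assert attn.ndim == 2 and attn.shape[0] == attn.shape[1]
    return attn.argmax(axis=1).tolist()

# Hypothetical 3-token sentence; each row sums to 1 like softmaxed attention.
attn = np.array([
    [0.1, 0.8, 0.1],   # token 0 mostly attends to token 1
    [0.2, 0.2, 0.6],   # token 1 mostly attends to token 2
    [0.3, 0.5, 0.2],   # token 2 mostly attends to token 1
])
print(attention_to_heads(attn))   # [1, 2, 1]
```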
  • Universal Dependencies for Japanese
    Universal Dependencies for Japanese
    Takaaki Tanaka (NTT Communication Science Laboratories), Yusuke Miyao (National Institute of Informatics), Masayuki Asahara (National Institute for Japanese Language and Linguistics), Sumire Uematsu (National Institute of Informatics), Hiroshi Kanayama (IBM Research - Tokyo), Shinsuke Mori (Kyoto University), Yuji Matsumoto (Nara Institute of Science and Technology)

    Abstract: We present an attempt to port the international syntactic annotation scheme, Universal Dependencies, to the Japanese language in this paper. Since the Japanese syntactic structure is usually annotated on the basis of unique chunk-based dependencies, we first introduce word-based dependencies by using a word unit called the Short Unit Word, which usually corresponds to an entry in the lexicon UniDic. Porting is done by mapping the part-of-speech tagset in UniDic to the universal part-of-speech tagset, and converting a constituent-based treebank to a typed dependency tree. The conversion is not straightforward, and we discuss the problems that arose in the conversion and the current solutions. A treebank consisting of 10,000 sentences was built by converting the existent resources and is currently released to the public.
    Keywords: typed dependencies, Short Unit Word, multiword expression, UniDic

    1. Introduction: The Universal Dependencies (UD) project has been developing cross-linguistically consistent treebank annotation […]
    2. Word unit: The definition of a word unit is indispensable in UD annotation, which is not a trivial question for Japanese, since a sentence is not segmented into words or morphemes by white space in its orthography.
  • Universal Dependencies for Persian
    Universal Dependencies for Persian
    Mojgan Seraji*, Filip Ginter**, Joakim Nivre*
    *Uppsala University, Department of Linguistics and Philology, Sweden
    **University of Turku, Department of Information Technology, Finland
    *firstname.lastname@lingfil.uu.se **figint@utu.fi

    Abstract: The Persian Universal Dependency Treebank (Persian UD) is a recent effort of treebanking Persian with Universal Dependencies (UD), an ongoing project that designs unified and cross-linguistically valid grammatical representations including part-of-speech tags, morphological features, and dependency relations. The Persian UD is the converted version of the Uppsala Persian Dependency Treebank (UPDT) to the universal dependencies framework and consists of nearly 6,000 sentences and 152,871 word tokens with an average sentence length of 25 words. In addition to the universal dependencies syntactic annotation guidelines, the two treebanks differ in tokenization. All words containing unsegmented clitics (pronominal and copula clitics) annotated with complex labels in the UPDT have been separated from the clitics and appear with distinct labels in the Persian UD. The treebank has its original syntactic annotation scheme based on Stanford Typed Dependencies. In this paper, we present the approaches taken in the development of the Persian UD.
    Keywords: Universal Dependencies, Persian, Treebank

    1. Introduction: In the past decade, the development of numerous dependency parsers for different languages has frequently benefited from the use of syntactically annotated resources, or treebanks (Böhmová et al., 2003; Haverinen et al., 2010; Kromann, 2003; Foth et al., 2014; Seraji et al., 2015; Vincze et al., 2010). […] In this paper, we present how we adapt the Universal Dependencies to Persian by converting the Uppsala Persian Dependency Treebank (UPDT) (Seraji, 2015) to the Persian Universal Dependencies (Persian UD). First, we briefly describe the Universal Dependencies and then we present the morphosyntactic annotations used in the extended version […]
  • Language Technology Meets Documentary Linguistics: What We Have to Tell Each Other
    Language technology meets documentary linguistics: What we have to tell each other
    Trond Trosterud
    Giellatekno, Centre for Saami Language Technology, http://giellatekno.uit.no/
    February 15, 2018

    Contents: Introduction; Language technology for the documentary linguist; Language technology for the language society; Conclusion

    Introduction
    • Giellatekno: started in 2001 (UiT). Research group for language technology on Saami and other northern languages. Gramm. modelling, dictionaries, ICALL, corpus analysis, MT, ...
      Trond Trosterud, Lene Antonsen, Ciprian Gerstenberger, Chiara Argese
    • Divvun: started in 2005 (UiT < Min. of Local Government). Infrastructure, proofing tools, synthetic speech, terminology
      Sjur Moshagen, Thomas Omma, Maja Kappfjell, Børre Gaup, Tomi Pieski, Elena Paulsen, Linda Wiechetek
    • The most important languages we work on

    Language technology for the documentary linguist
    ... what's in it for the language community I work for?
    • Let's pretend there are two types of language communities:
      1. Language communities without plans for revitalisation or use in domains other than oral use
      2. Language communities with such plans
    • Language communities without such plans:
      Gather empirical material and do your linguistic analysis
      (The triplet: Grammar, text collection and dictionary)
      ...
  • Foreword to the Special Issue on Uralic Languages
    Northern European Journal of Language Technology, 2016, Vol. 4, Article 1, pp 1–9. DOI 10.3384/nejlt.2000-1533.1641

    Foreword to the Special Issue on Uralic Languages
    Tommi A Pirinen (Hamburger Zentrum für Sprachkorpora, Universität Hamburg), Trond Trosterud (HSL-fakultehta, UiT Norgga árktalaš universitehta), Francis M. Tyers (HSL-fakultehta, UiT Norgga árktalaš universitehta), Veronika Vincze (MTA-SZTE, Szegedi Tudomány Egyetem), Eszter Simon (Research Institute for Linguistics, Hungarian Academy of Sciences), Jack Rueter (Helsingin yliopisto, Nykykielten laitos)
    March 7, 2017

    Abstract: In this introduction we have tried to present concisely the history of language technology for Uralic languages up until today, and a bit of a desiderata from the point of view of why we organised this special issue. It is of course not possible to cover everything that has happened in a short introduction like this. We have attempted to cover the beginnings of the (Uralic) language-technology scene in the 1980s as far as it is relevant to much of the current work, including the ones presented in this issue. We also go through the Uralic area by the main languages to survey existing resources, and to form a systematic overview of what is missing. Finally we talk about some possible future directions on the pan-Uralic level of language technology management.

    [Figure 1: A map of the Uralic language area showing the approximate distribution of languages spoken by area.]
  • The Universal Dependencies Treebank of Spoken Slovenian
    The Universal Dependencies Treebank of Spoken Slovenian
    Kaja Dobrovoljc¹, Joakim Nivre²
    ¹Institute for Applied Slovene Studies Trojina, Ljubljana, Slovenia; ¹Department of Slovenian Studies, Faculty of Arts, University of Ljubljana; ²Department of Linguistics and Philology, Uppsala University
    joakim.nivre@lingfil.uu.se

    Abstract: This paper presents the construction of an open-source dependency treebank of spoken Slovenian, the first syntactically annotated collection of spontaneous speech in Slovenian. The treebank has been manually annotated using the Universal Dependencies annotation scheme, a one-layer syntactic annotation scheme with a high degree of cross-modality, cross-framework and cross-language interoperability. In this original application of the scheme to spoken language transcripts, we address a wide spectrum of syntactic particularities in speech, either by extending the scope of application of existing universal labels or by proposing new speech-specific extensions. The initial analysis of the resulting treebank and its comparison with the written Slovenian UD treebank confirms significant syntactic differences between the two language modalities, with spoken data consisting of shorter and more elliptic sentences, less and simpler nominal phrases, and more relations marking disfluencies, interaction, deixis and modality.
    Keywords: dependency treebank, spontaneous speech, Universal Dependencies

    1. Introduction: It is nowadays a well-established fact that data-driven parsing systems used in different speech-processing applications benefit from learning on annotated spoken data, rather than using models built on written language observation. […] actually out-performs state-of-the-art pipeline approaches (Rasooli and Tetreault, 2013; Honnibal and Johnson, 2014). Such heterogeneity of spoken language annotation schemes inevitably leads to a restricted usage of existing spoken language treebanks in linguistic research and parsing systems […]
  • Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas
    Zurich Open Repository and Archive, University of Zurich, Main Library, Strickhofstrasse 39, CH-8057 Zurich, www.zora.uzh.ch
    Year: 2021

    Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas
    Edited by: Mager, Manuel; Oncevay, Arturo; Rios, Annette; Meza Ruiz, Ivan Vladimir; Palmer, Alexis; Neubig, Graham; Kann, Katharina
    Posted at the Zurich Open Repository and Archive, University of Zurich. ZORA URL: https://doi.org/10.5167/uzh-203436
    Edited Scientific Work, Published Version. The following work is licensed under a Creative Commons: Attribution 4.0 International (CC BY 4.0) License.
    Originally published at: Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas. Edited by: Mager, Manuel; Oncevay, Arturo; Rios, Annette; Meza Ruiz, Ivan Vladimir; Palmer, Alexis; Neubig, Graham; Kann, Katharina (2021). Online: Association for Computational Linguistics.

    NAACL-HLT 2021: Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP), Proceedings of the First Workshop, June 11, 2021.
    ©2021 The Association for Computational Linguistics. These workshop proceedings are licensed under a Creative Commons Attribution 4.0 International License.
    Order copies of this and other ACL proceedings from: Association for Computational Linguistics (ACL), 209 N. Eighth Street, Stroudsburg, PA 18360, USA. ISBN 978-1-954085-44-2

    Preface: This area is in all probability unmatched, anywhere in the world, in its linguistic multiplicity and diversity. A couple of thousand languages and dialects, at present divided into 17 large families and 38 small ones, with several hundred unclassified single languages, are on record.
  • CoNLL-2017 Shared Task
    TurkuNLP: Delexicalized Pre-training of Word Embeddings for Dependency Parsing
    Jenna Kanerva¹,², Juhani Luotolahti¹,², and Filip Ginter¹
    ¹Turku NLP Group, ²University of Turku Graduate School (UTUGS), University of Turku, Finland

    Abstract: We present the TurkuNLP entry in the CoNLL 2017 Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies. The system is based on the UDPipe parser with our focus being in exploring various techniques to pre-train the word embeddings used by the parser in order to improve its performance especially on languages with small training sets. The system ranked 11th among the 33 participants overall, being 8th on the small treebanks, 10th on the large treebanks, 12th on the parallel test sets, and 26th on the surprise languages.

    […] word and sentence segmentation as well as morphological tags for the test sets, which they could choose to use as an alternative to developing their own segmentation and tagging. These baseline segmentations and morphological analyses were provided by UDPipe v1.1 (Straka et al., 2016). In addition to the manually annotated treebanks, the shared task organizers also distributed a large collection of web-crawled text for all but one of the languages in the shared task, totaling over 90 billion tokens of fully dependency parsed data. Once again, these analyses were produced by the UDPipe system. This automatically processed large dataset was intended by the organizers to complement the manually annotated data and, for instance, support the induction of word embeddings.
  • Conference Abstracts
    TENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION
    Held under the Honorary Patronage of His Excellency Mr. Borut Pahor, President of the Republic of Slovenia
    MAY 23-28, 2016, GRAND HOTEL BERNARDIN CONFERENCE CENTRE, Portorož, SLOVENIA

    CONFERENCE ABSTRACTS
    Editors: Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis. Assistant Editors: Sara Goggi, Hélène Mazo
    The LREC 2016 Proceedings are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

    LREC 2016, TENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION
    Title: LREC 2016 Conference Abstracts
    Distributed by: ELRA – European Language Resources Association, 9 rue des Cordelières, 75013 Paris, France. Tel.: +33 1 43 13 33 33, Fax: +33 1 43 13 33 30, www.elra.info and www.elda.org
    ISBN 978-2-9517408-9-1, EAN 9782951740891

    Introduction of the Conference Chair and ELRA President Nicoletta Calzolari
    Welcome to the 10th edition of LREC in Portorož, back on the Mediterranean Sea! I wish to express to his Excellency Mr. Borut Pahor, the President of the Republic of Slovenia, the gratitude of the Program Committee, of all LREC participants and my personal gratitude, for his Distinguished Patronage of LREC 2016.
    Some figures: previous records broken again! It is only the 10th LREC (18 years after the first), but it has already become one of the most successful and popular conferences of the field. We continue the tradition of breaking previous records. We received 1250 submissions, 23 more than in 2014. We received 43 workshop and 6 tutorial proposals.
  • A Multilingual Collection of CoNLL-U-Compatible Morphological Lexicons
    A multilingual collection of CoNLL-U-compatible morphological lexicons
    Benoît Sagot
    Inria, 2 rue Simone Iff, CS 42112, 75589 Paris Cedex 12, France

    Abstract: We introduce UDLexicons, a multilingual collection of morphological lexicons that follow the guidelines and format of the Universal Dependencies initiative. We describe the three approaches we use to create 53 morphological lexicons covering 38 languages, based on existing resources. These lexicons, which are freely available, have already proven useful for improving part-of-speech tagging accuracy in state-of-the-art architectures.
    Keywords: Morphological Lexicons, Universal Dependencies, Freely Available Language Resources

    1. Introduction: Morphological information belongs to the most fundamental types of linguistic knowledge. It is often either encoded into morphological analysers or gathered in the form of morphological lexicons. Such lexicons, which constitute the focus of this paper, are collections of lexical entries that typically associate a wordform with a part-of-speech (or morphosyntactic category), morphological features (such as gender, tense, etc.) and a lemma. Beyond direct lexicon lookup, used in virtually all types of natural language processing applications and computational linguistic studies, morphological lexicons have been shown to significantly […]

    […] languages following a universal set of guidelines. The obvious choice would be to make use of the UD guidelines themselves. We have therefore developed a multilingual collection of morphological lexicons that follow the UD guidelines regarding part-of-speech and morphological features. We used three main sources of lexical information:
    • In the context of the CoNLL 2017 UD morphological and syntactic analysis shared task (Zeman et al., 2017) based on UD treebank data, we used lexical information available in the Apertium and Giellatekno projects.
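The kind of lexical entry described in the introduction, a wordform paired with a part-of-speech, a morphological feature bundle and a lemma, can be sketched as a simple lookup table. The entries and feature strings below are hypothetical examples, not taken from UDLexicons.

```python
# Minimal sketch of a morphological lexicon: a wordform maps to one or more
# analyses (lemma, universal POS tag, UD-style feature string). Entries are
# hypothetical, not from UDLexicons.

from typing import Dict, List, NamedTuple

class LexEntry(NamedTuple):
    lemma: str
    upos: str     # universal part-of-speech tag
    feats: str    # UD-style morphological features

LEXICON: Dict[str, List[LexEntry]] = {
    "walks": [
        LexEntry("walk", "NOUN", "Number=Plur"),
        LexEntry("walk", "VERB", "Mood=Ind|Number=Sing|Person=3|Tense=Pres"),
    ],
}

def lookup(wordform: str) -> List[LexEntry]:
    """Return all analyses of a wordform (empty list if unknown)."""
    return LEXICON.get(wordform, [])

for entry in lookup("walks"):
    print(entry.upos, entry.lemma, entry.feats)
```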