The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection

Arya D. McCarthy♣, Ekaterina Vylomova♥, Shijie Wu♣, Chaitanya Malaviya♠, Lawrence Wolf-Sonkin♦, Garrett Nicolai♣, Christo Kirov♣,∗ Miikka Silfverberg♯, Sabrina J. Mielke♣, Jeffrey Heinz♭, Ryan Cotterell♣, and Mans Hulden♮

♣Johns Hopkins University  ♥University of Melbourne  ♠Allen Institute for AI  ♦Google  ♯University of Helsinki  ♭Stony Brook University  ♮University of Colorado
∗Now at Google

arXiv:1910.11493v2 [cs.CL] 25 Feb 2020

Abstract

The SIGMORPHON 2019 shared task on cross-lingual transfer and contextual analysis in morphology examined transfer learning of inflection between 100 language pairs, as well as contextual lemmatization and morphosyntactic description in 66 languages. The first task evolves past years' inflection tasks by examining transfer of morphological inflection knowledge from a high-resource language to a low-resource language. This year also presents a new second challenge on lemmatization and morphological feature analysis in context. All submissions featured a neural component and built on either this year's strong baselines or highly ranked systems from previous years' shared tasks. Every participating team improved in accuracy over the baselines for the inflection task (though not Levenshtein distance), and every team in the contextual analysis task improved on both state-of-the-art neural and non-neural baselines.

1 Introduction

While producing a sentence, humans combine various types of knowledge to produce fluent output—various shades of meaning are expressed through word selection and tone, while the language is made to conform to underlying structural rules via syntax and morphology. Native speakers are often quick to identify disfluency, even if the meaning of a sentence is mostly clear.

Automatic systems must also consider these constraints when constructing or processing language. Strong enough language models can often reconstruct common syntactic structures, but are insufficient to properly model morphology. Many languages implement large inflectional paradigms that mark both function and content words with varying levels of morphosyntactic information. For instance, Romanian verb forms inflect for person, number, tense, mood, and voice; meanwhile, Archi verbs can take on thousands of forms (Kibrik, 1998). Such complex paradigms produce large inventories of words, all of which must be producible by a realistic system, even though a large percentage of them will never be observed over billions of lines of linguistic input. Compounding the issue, good inflectional systems often require large amounts of supervised training data, which is infeasible in many of the world's languages.

This year's shared task is concentrated on encouraging the construction of strong morphological systems that perform two related but different inflectional tasks. The first task asks participants to create morphological inflectors for a large number of under-resourced languages, encouraging systems that use highly-resourced, related languages as a cross-lingual training signal. The second task welcomes submissions that invert this operation in light of contextual information: given an unannotated sentence, lemmatize each word and tag it with a morphosyntactic description. Both of these tasks extend upon previous morphological competitions, and the best submitted systems now represent the state of the art in their respective tasks.

2 Tasks and Evaluation

2.1 Task 1: Cross-lingual transfer for morphological inflection

Annotated resources for the world's languages are not distributed equally—some languages simply have more as they have more native speakers willing and able to annotate more data. We explore how to transfer knowledge from high-resource languages that are genetically related to low-resource languages.

The first task iterates on last year's main task: morphological inflection (Cotterell et al., 2018). Instead of giving some number of training examples in the language of interest, we provided only a limited number in that language. To accompany it, we provided a larger number of examples in either a related or unrelated language. Each test example asked participants to produce some other inflected form when given a lemma and a bundle of morphosyntactic features as input. The goal, thus, is to perform morphological inflection in the low-resource language, having hopefully exploited some similarity to the high-resource language. Models which perform well here can aid downstream tasks like machine translation in low-resource settings. All datasets were resampled from UniMorph, which makes them distinct from past years.

The mode of the task is inspired by Zoph et al. (2016), who fine-tune a model pre-trained on a high-resource language to perform well on a low-resource language. We do not, though, require that models be trained by fine-tuning. Joint modeling or any number of other methods may be explored instead.

Example  The model will have access to type-level data in a low-resource target language, plus a high-resource source language. We give an example here of Asturian as the target language with Spanish as the source language.

    Low-resource target training data (Asturian)
        facer    "fechu"     V;V.PTCP;PST
        aguar    "aguà"      V;PRS;2;PL;IND
        ...

    High-resource source language training data (Spanish)
        tocar    "tocando"   V;V.PTCP;PRS
        bailar   "bailaba"   V;PST;IPFV;3;SG;IND
        mentir   "mintió"    V;PST;PFV;3;SG;IND
        ...

    Test input (Asturian)
        baxar    V;V.PTCP;PRS

    Test output (Asturian)
        "baxando"

Table 1: Sample language pair and data format for Task 1.

Evaluation  We score the output of each system in terms of its predictions' exact-match accuracy and the average Levenshtein distance between the predictions and their corresponding true forms.

2.2 Task 2: Morphological analysis in context

Although inflection of words in a context-agnostic manner is a useful evaluation of the morphological quality of a system, people do not learn morphology in isolation.

In 2018, the second task of the CoNLL–SIGMORPHON Shared Task (Cotterell et al., 2018) required submitting systems to complete an inflectional cloze task (Taylor, 1953) given only the sentential context and the desired lemma; an example of the problem is given in the following lines:

    The ___   are barking.
        (dog)

A successful system would predict the plural form "dogs". Likewise, a Spanish word form "ayuda" may be a feminine noun or a third-person verb form, which must be disambiguated by context.

This year's task extends the second task from last year. Rather than inflect a single word in context, the task is to provide a complete morphological tagging of a sentence: for each word, a successful system will need to lemmatize it and tag it with a morphosyntactic description (MSD):

    The   dogs   are          barking        .
    the   dog    be           bark           .
    DET   N;PL   V;PRS;3;PL   V;V.PTCP;PRS   PUNCT

Context is critical—depending on the sentence, identical word forms realize a large number of potential inflectional categories, which will in turn influence lemmatization decisions. If the sentence were instead "The barking dogs kept us up all night", "barking" is now an adjective, and its lemma is also "barking".

3 Data

3.1 Data for Task 1

Language pairs  We presented data in 100 language pairs spanning 79 unique languages. Data for all but four languages (Basque, Kurmanji, Murrinhpatha, and Sorani) are extracted from English Wiktionary, a large multi-lingual crowd-sourced dictionary with morphological paradigms for many lemmata. 20 of the 100 language pairs are either distantly related or unrelated; this allows speculation into the relative importance of data quantity and linguistic relatedness.

Data format  For each language, the basic data consists of triples of the form (lemma, feature bundle, inflected form), as in Table 1. The first feature in the bundle always specifies the core part of speech (e.g., verb). For each language pair, separate files contain the high- and low-resource training examples.

All features in the bundle are coded according to the UniMorph Schema, a cross-linguistically consistent universal morphological feature set (Sylak-Glassman et al., 2015a,b).

Extraction from Wiktionary  For each of the Wiktionary languages, Wiktionary provides a number of tables, each of which specifies the full inflectional paradigm for a particular lemma. As in the previous iteration, tables were extracted using a template annotation procedure described in Kirov et al. (2018).

Sampling data splits  From each language's collection of paradigms, we sampled the training, development, and test sets as in 2018. Crucially, as these sets are nested, the highest-count triples tend to appear in the smaller training sets.

The final 2000 triples were randomly shuffled and then split in half to obtain development and test sets of 1000 forms each. The final shuffling was performed to ensure that the development set is similar to the test set. By contrast, the development and test sets tend to contain lower-count triples than the training set.

Other modifications  We further adopted some changes to increase compatibility. Namely, we corrected some annotation errors created while scraping Wiktionary for the 2018 task, and we standardized Romanian t-cedilla and t-comma to t-comma. (The same was done with s-cedilla and s-comma.)

3.2 Data for Task 2

Our data for Task 2 come from the Universal Dependencies treebanks (UD; Nivre et al., 2018, v2.3), which provide pre-defined training, development, and test splits and annotations in a unified schema for morphosyntax and dependency relationships. Unlike the 2018 cloze task, which used UD data, we require no manual data preparation and are able to leverage all 107 monolingual treebanks.
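The two Task 1 metrics described under "Evaluation" in Section 2.1 are simple to state in code. Below is a minimal sketch (not the official evaluation script; function names are illustrative) of exact-match accuracy and average Levenshtein distance over a list of predicted and reference forms:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings, by dynamic programming over one row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution (0 if equal)
        prev = curr
    return prev[-1]

def score(predictions, references):
    """Return (exact-match accuracy, average Levenshtein distance)."""
    assert len(predictions) == len(references)
    n = len(references)
    acc = sum(p == r for p, r in zip(predictions, references)) / n
    dist = sum(levenshtein(p, r) for p, r in zip(predictions, references)) / n
    return acc, dist
```

Note that the two metrics can disagree: a system can raise accuracy while producing wildly wrong strings on its remaining errors, which is why the task reports both.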
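The per-line format described under "Data format" in Section 3.1 can be illustrated with a small parser. This sketch assumes the usual UniMorph-style tab-separated layout (lemma, inflected form, semicolon-delimited feature bundle); the `Triple` type and helper name are illustrative, not part of the task release:

```python
from typing import NamedTuple, Tuple

class Triple(NamedTuple):
    lemma: str
    form: str
    features: Tuple[str, ...]  # UniMorph tags; features[0] is the core POS

def parse_line(line: str) -> Triple:
    # One training example per line: lemma, inflected form, feature bundle.
    lemma, form, feats = line.rstrip("\n").split("\t")
    return Triple(lemma, form, tuple(feats.split(";")))
```

For example, the Spanish row from Table 1, `tocar "tocando" V;V.PTCP;PRS`, would be stored on disk as `tocar<TAB>tocando<TAB>V;V.PTCP;PRS`, and `parse_line` recovers the lemma, the form, and the tag sequence with `V` as the leading part-of-speech feature.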
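The final step under "Sampling data splits" in Section 3.1 (shuffle 2000 held-out triples, then halve them into development and test sets) amounts to a seeded shuffle-and-split. A sketch under stated assumptions — the seed and helper name are arbitrary, not the task's actual tooling:

```python
import random

def make_dev_test(triples, seed=0):
    """Shuffle the held-out triples, then split them in half, so the
    development set is distributionally similar to the test set."""
    held_out = list(triples)
    random.Random(seed).shuffle(held_out)  # seeded for reproducibility
    mid = len(held_out) // 2
    return held_out[:mid], held_out[mid:]
```

Shuffling before splitting is what guarantees the similarity property the text describes: any ordering artifact in the held-out pool (e.g. by frequency or paradigm) would otherwise leak into a dev/test difference.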