On Understanding Character-Level Models for Representing Morphology
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
-
Representation of Inflected Nouns in the Internal Lexicon
Memory & Cognition, 1980, Vol. 8 (5), 415-423. Representation of inflected nouns in the internal lexicon. G. Lukatela, B. Gligorijevic, and A. Kostic, University of Belgrade, Belgrade, Yugoslavia, and M. T. Turvey, University of Connecticut, Storrs, Connecticut 06268, and Haskins Laboratories, New Haven, Connecticut 06510.
The lexical representation of Serbo-Croatian nouns was investigated in a lexical decision task. Because Serbo-Croatian nouns are declined, a noun may appear in one of several grammatical cases distinguished by the inflectional morpheme affixed to the base form. The grammatical cases occur with different frequencies, although some are visually and phonetically identical. When the frequencies of identical forms are compounded, the ordering of frequencies is not the same for masculine and feminine genders. These two genders are distinguished further by the fact that the base form for masculine nouns is an actual grammatical case, the nominative singular, whereas the base form for feminine nouns is an abstraction in that it cannot stand alone as an independent word. Exploiting these characteristics of the Serbo-Croatian language, we contrasted three views of how a noun is represented: (1) the independent entries hypothesis, which assumes an independent representation for each grammatical case, reflecting its frequency of occurrence; (2) the derivational hypothesis, which assumes that only the base morpheme is stored, with the individual cases derived from separately stored inflectional morphemes and rules for combination; and (3) the satellite-entries hypothesis, which assumes that all cases are individually represented, with the nominative singular functioning as the nucleus and the embodiment of the noun's frequency and around which the other cases cluster uniformly.
A Survey of Orthographic Information in Machine Translation
Machine Translation (journal manuscript). A Survey of Orthographic Information in Machine Translation. Bharathi Raja Chakravarthi, Priya Rani, Mihael Arcan, John P. McCrae.
Abstract: Machine translation is one of the applications of natural language processing which has been explored in different languages. Recently researchers started paying attention towards machine translation for resource-poor languages and closely related languages. A widespread and underlying problem for these machine translation systems is the variation in orthographic conventions which causes many issues to traditional approaches. Two languages written in two different orthographies are not easily comparable, but orthographic information can also be used to improve the machine translation system. This article offers a survey of research regarding orthography's influence on machine translation of under-resourced languages. It introduces under-resourced languages in terms of machine translation and how orthographic information can be utilised to improve machine translation. We describe previous work in this area, discussing what underlying assumptions were made, and showing how orthographic knowledge improves the performance of machine translation of under-resourced languages. We discuss different types of machine translation and demonstrate a recent trend that seeks to link orthographic information with well-established machine translation methods. Considerable attention is given to current efforts on cognate information at different levels of machine translation and the lessons that can …
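One way orthographic information of the kind this survey covers can be exploited is cognate detection between closely related, similarly written languages. The sketch below is not from the paper; it is a minimal illustration under assumed choices (a hand-rolled Levenshtein distance, an arbitrary 0.5 similarity threshold, and invented word pairs) of how cognate candidates might be flagged from surface spelling alone.

    # Minimal sketch (not from the surveyed paper): flagging likely cognates
    # between two closely related languages by orthographic similarity.

    def edit_distance(a: str, b: str) -> int:
        """Classic Levenshtein distance via a one-row dynamic program."""
        dp = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            prev, dp[0] = dp[0], i
            for j, cb in enumerate(b, start=1):
                cur = dp[j]
                dp[j] = min(dp[j] + 1,          # delete a character of a
                            dp[j - 1] + 1,      # insert a character of b
                            prev + (ca != cb))  # substitute (free if equal)
                prev = cur
        return dp[-1]

    def orthographic_similarity(a: str, b: str) -> float:
        """1.0 for identical strings, 0.0 for completely different ones."""
        if not a and not b:
            return 1.0
        return 1.0 - edit_distance(a, b) / max(len(a), len(b))

    # Hypothetical word pairs from two closely related languages.
    pairs = [("noche", "noite"), ("leche", "leite"), ("perro", "cão")]
    for src, tgt in pairs:
        sim = orthographic_similarity(src, tgt)
        label = "cognate candidate" if sim >= 0.5 else "probably unrelated"
        print(f"{src:6s} ~ {tgt:6s}  similarity={sim:.2f}  -> {label}")

In practice, surveyed systems combine such orthographic signals with transliteration or shared-subword models rather than raw edit distance alone.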
Reduplication in Distributed Morphology
Reduplication in Distributed Morphology. Jason D. Haugen, Oberlin College. Coyote Papers: Working Papers in Linguistics, Linguistic Theory at the University of Arizona; The Coyote Papers 18 (May 2011), University of Arizona Linguistics Circle (Tucson, Arizona). Rights: Copyright © is held by the author(s); http://rightsstatements.org/vocab/InC/1.0/. Link to item: http://hdl.handle.net/10150/143067
Keywords: reduplication, Distributed Morphology, allomorphy, reduplicative allomorphy, base-dependence, Hiaki (Yaqui), Tawala
Abstract: The two extant approaches to reduplication in Distributed Morphology (DM) are: (i) the readjustment approach, where reduplication is claimed to result from a readjustment operation on some stem triggered by a (typically null) affix; and (ii) the affixation approach, where reduplication is claimed to result from the insertion of a special type of Vocabulary Item (i.e. a reduplicative affix, "reduplicant" or "Red") which gets inserted into a syntactic node in order to discharge some morphosyntactic feature(s), but which receives its own phonological content from some other stem (i.e. its "base") in the output. This paper argues from phonologically-conditioned allomorphy pertaining to base-dependence, as in the case of durative reduplication in Tawala, that the latter approach best accounts for a necessary distinction between "reduplicants" and "bases" as different types of morphemes which display different phonological effects, including "the emergence of the unmarked" effects, in many languages.
Word Representation Or Word Embedding in Persian Texts Siamak Sarmady Erfan Rahmani
Word representation or word embedding in Persian texts. Siamak Sarmady, Erfan Rahmani.
Abstract: Text processing is one of the sub-branches of natural language processing. Recently, the use of machine learning and neural network methods has been given greater consideration. For this reason, the representation of words has become very important. This article is about word representation, or converting words into vectors, in Persian text. In this research the GloVe, CBOW and skip-gram methods are adapted to produce embedding vectors for Persian words. In order to train the neural networks, the Bijankhan corpus, Hamshahri corpus and UPEC corpus have been combined and used. Finally, we obtained vectors for 342,362 words in all three models. These vectors have many uses for Persian natural language processing.
1. INTRODUCTION: The Persian language is an Indo-European language where the ordering of the words in the sentence is subject, object, verb (SOV). According to various structures in the Persian language, there are challenges that cannot be found in languages such as English [1]. Morphologically, Persian is one of the hardest languages for the purposes of processing, and …
… and a bit set in the word index. The one-hot vector method misses the similarity between strings that are specified by their characters [3]. The second method is distributed vectors. Words in the same context (neighboring words) may have similar meanings (for example, lift and elevator both tend to be accompanied by words such as up, down, buildings, floors, and stairs). This idea influences the representation of words using their context. Suppose that each word is represented by a v-dimensional vector.
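The CBOW and skip-gram variants mentioned in this abstract can be made concrete with a short training sketch. The example below is not the authors' code: the toy English corpus and hyperparameters are invented, it assumes gensim 4.x is installed, and GloVe (which is trained with a separate toolkit) is not shown.

    # Minimal sketch: training CBOW and skip-gram word vectors with gensim.
    # The tiny "corpus" below stands in for a real tokenized Persian corpus.
    from gensim.models import Word2Vec

    sentences = [
        ["the", "lift", "goes", "up", "to", "the", "top", "floor"],
        ["the", "elevator", "goes", "down", "to", "the", "ground", "floor"],
        ["stairs", "lead", "up", "and", "down", "between", "floors"],
    ]

    # sg=0 selects CBOW, sg=1 selects skip-gram; vector_size is the embedding dimension.
    cbow = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=0, epochs=200)
    skipgram = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=200)

    # Words sharing contexts (e.g. "lift" and "elevator") should end up with similar vectors.
    print(skipgram.wv.most_similar("lift", topn=3))
    print(cbow.wv["elevator"][:5])  # first few components of one learned embedding

On a corpus this small the neighbours are noisy; the point is only the API shape: a list of tokenized sentences in, one dense vector per vocabulary word out.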
University of California Santa Cruz Minimal Reduplication
UNIVERSITY OF CALIFORNIA, SANTA CRUZ. Minimal Reduplication. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Linguistics by Jesse Saba Kirchner, June 2010. The dissertation of Jesse Saba Kirchner is approved by: Professor Armin Mester (Chair), Professor Jaye Padgett, Professor Junko Ito; Tyrus Miller, Vice Provost and Dean of Graduate Studies. Copyright © 2010 by Jesse Saba Kirchner. Some rights reserved: see Appendix E.
Contents: Abstract; Dedication; Acknowledgments; 1 Introduction (1.1 Structure of the thesis; 1.2 Overview of the theory: 1.2.1 Goals of MR, 1.2.2 Assumptions and predictions; 1.3 Morphological reduplication: 1.3.1 Fixed size, 1.3.2 Phonological opacity, 1.3.3 Prominent material preferentially copied, 1.3.4 Locality of reduplication, 1.3.5 Iconicity; 1.4 Syntactic reduplication); 2 Morphological reduplication (2.1 Case study: Kwak'wala; 2.2 Data: 2.2.1 Phonology, 2.2.2 Morphophonology, 2.2.3 -mut'; 2.3 Analysis: 2.3.1 Lengthening and reduplication …)
Language Resource Management System for Asian Wordnet Collaboration and Its Web Service Application
Language Resource Management System for Asian WordNet Collaboration and Its Web Service Application. Virach Sornlertlamvanich, Thatsanee Charoenporn, Hitoshi Isahara. Thai Computational Linguistics Laboratory, NICT Asia Research Center, Thailand; National Electronics and Computer Technology Center, Pathumthani, Thailand; National Institute of Information and Communications Technology, Japan.
Abstract: This paper presents the language resource management system for the development and dissemination of Asian WordNet (AWN) and its web service application. We develop the platform to establish a network for cross-language WordNet development. Each node of the network is designed for maintaining the WordNet for a language. Via the table that maps between each language WordNet and the Princeton WordNet (PWN), the Asian WordNet is realized to visualize the cross-language WordNet between the Asian languages. We propose a language resource management system, called WordNet Management System (WNMS), as a distributed management system that allows the server to perform cross-language WordNet retrieval, including the fundamental web service applications for editing, visualizing and language processing. The WNMS is implemented on a web service protocol, therefore each node can be independently maintained, and the service of each language WordNet can be called directly through the web service API. In case of cross-language implementation, the synset ID (or synset offset) defined by PWN is used to determine the linkage between the languages.
1. Introduction: Language resources become crucial in the recent needs for cross-cultural information exchange. … the English PWN. Therefore, the original structural information is inherited to the target WordNet through its sense translation and sense ID. The AWN finally …
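The pivot mechanism described here, linking each language WordNet to the others through a shared Princeton WordNet synset offset, can be sketched in a few lines. The example below is not the WNMS API or its data: offsets, lemmas, and the lookup function are invented placeholders that only illustrate how two language WordNets align once both are keyed on PWN IDs.

    # Minimal sketch: aligning two language WordNets through shared PWN synset offsets.
    # Offsets and lemmas below are illustrative placeholders, not real AWN data.

    # Each per-language table maps a PWN synset offset to the lemmas covering that sense.
    thai_wordnet = {
        "02084071-n": ["หมา", "สุนัข"],   # a 'dog'-like synset
        "02121620-n": ["แมว"],            # a 'cat'-like synset
    }
    japanese_wordnet = {
        "02084071-n": ["犬", "イヌ"],
        "02121620-n": ["猫", "ネコ"],
    }

    def cross_language_lookup(offset: str, source: dict, target: dict) -> tuple:
        """Return (source lemmas, target lemmas) that share one PWN synset offset."""
        return source.get(offset, []), target.get(offset, [])

    # Because both tables key on the same PWN offset, translation equivalents fall
    # out of a simple join; no direct Thai-Japanese mapping has to be maintained.
    thai, jpn = cross_language_lookup("02084071-n", thai_wordnet, japanese_wordnet)
    print(f"PWN 02084071-n: Thai {thai} <-> Japanese {jpn}")

This is also why the paper stresses that each node only needs to maintain its own mapping to PWN: every pairwise linkage between Asian languages is derived from the shared offsets.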
Morphological Sources of Phonological Length
Morphological Sources of Phonological Length, by Anne Pycha. B.A. (Brown University) 1993, M.A. (University of California, Berkeley) 2004. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Linguistics in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Sharon Inkelas (chair), Professor Larry Hyman, Professor Keith Johnson, Professor Johanna Nichols. Spring 2008.
Abstract: This study presents and defends Resizing Theory, whose claim is that the overall size of a morpheme can serve as a basic unit of analysis for phonological alternations. Morphemes can increase their size by any number of strategies (epenthesizing new segments, for example, or devoicing an existing segment and thereby increasing its phonetic duration), but it is the fact of an increase, and not the particular strategy used to implement it, which is linguistically significant. Resizing Theory has some overlap with theories of fortition and lenition, but differs in that it uses the independently verifiable parameter of size in place of an ad-hoc concept of "strength" and thereby encompasses a much greater range of phonological alternations. The theory makes three major predictions, each of which is supported with cross-linguistic evidence. First, seemingly disparate phonological alternations can achieve identical morphological effects, but only if they trigger the same direction of change in a morpheme's size. Second, morpheme interactions can take complete control over phonological outputs, determining surface outputs when traditional features and segments fail to do so.
Generating Sense Embeddings for Syntactic and Semantic Analogy for Portuguese
Generating Sense Embeddings for Syntactic and Semantic Analogy for Portuguese. Jéssica Rodrigues da Silva, Helena de Medeiros Caseli. Federal University of São Carlos, Department of Computer Science.
Abstract: Word embeddings are numerical vectors which can represent words or concepts in a low-dimensional continuous space. These vectors are able to capture useful syntactic and semantic information. The traditional approaches like Word2Vec, GloVe and FastText have a strict drawback: they produce a single vector representation per word, ignoring the fact that ambiguous words can assume different meanings. In this paper we use techniques to generate sense embeddings and present the first experiments carried out for Portuguese. Our experiments show that sense vectors outperform traditional word vectors in syntactic and semantic analogy tasks, proving that the language resource generated here can improve the performance of NLP tasks in Portuguese.
1. Introduction: Any natural language (Portuguese, English, German, etc.) has ambiguities. Due to ambiguity, the same word surface form can have two or more different meanings. For example, the Portuguese word banco can be used to express the financial institution but also the place where we can rest our legs (a seat). Lexical ambiguities, which occur when a word has more than one possible meaning, directly impact tasks at the semantic level, and solving them automatically is still a challenge in natural language processing (NLP) applications. One way to do this is through word embeddings. Word embeddings are numerical vectors which can represent words or concepts in a low-dimensional continuous space, reducing the inherent sparsity of traditional vector-space representations [Salton et al. …
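One common route to sense embeddings, offered here only as a hedged sketch and not necessarily the technique used in this paper, is to sense-tag (or cluster) ambiguous tokens first and then train an ordinary embedding model over the tagged corpus. The example assumes gensim 4.x; the sense labels ("banco_FIN", "banco_SEAT") and the tiny Portuguese sentences are invented, reusing the banco example from the abstract.

    # Minimal sketch: sense embeddings via sense-tagged tokens plus ordinary Word2Vec.
    # Tokens like "banco_FIN" and "banco_SEAT" are invented sense labels.
    from gensim.models import Word2Vec

    sense_tagged_corpus = [
        ["abri", "uma", "conta", "no", "banco_FIN", "ontem"],
        ["o", "banco_FIN", "aprovou", "o", "empréstimo"],
        ["sentei", "no", "banco_SEAT", "da", "praça"],
        ["o", "banco_SEAT", "de", "madeira", "estava", "molhado"],
    ]

    # Each sense tag now receives its own vector, so the two meanings of "banco"
    # no longer collapse into a single representation.
    model = Word2Vec(sense_tagged_corpus, vector_size=50, window=3,
                     min_count=1, sg=1, epochs=300)

    print(model.wv.most_similar("banco_FIN", topn=3))
    print(model.wv.most_similar("banco_SEAT", topn=3))

The hard part in practice is the preceding step, deciding which occurrences get which tag, which is where sense-embedding approaches differ from one another.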
Morphological Well-Formedness as a Derivational Constraint on Syntax: Verbal Inflection in English
Morphological Well-Formedness as a Derivational Constraint on Syntax: Verbal Inflection in English. John Frampton and Sam Gutmann, Northeastern University, November 1999.
"… the English verb and its auxiliaries are always a source of fascination. Indeed, I think we can say, following Neumeyer (1980), that much of the intellectual success of Chomsky (1957) is due precisely to its strikingly attractive analysis of this inflectional system." (Emonds, 1985, A Unified Theory of Syntactic Categories)
The now well-known differences between verb raising in English and French that were first explored by Emonds (1978), and later analyzed in more depth by Pollock (1989), played an important role in the early development of Chomsky's (1995) Minimalist Program (MP). See Chomsky (1991), in particular, for an analysis in terms of economy of derivation. As the MP developed, the effort to analyze verb raising largely receded. Lasnik (1996), however, took the question up again and showed that fairly broad empirical coverage could be achieved within the MP framework. One important empirical gap (which will be discussed later) remained, and various stipulations about the feature makeup of inserted lexical items were required. In this paper we hope both to remove the empirical gap and to reduce the stipulative aspects of Lasnik's proposal. The larger ambition of this paper is to support the effort, begun in Frampton and Gutmann (1999), to reorient the MP away from economy principles, particularly away from the idea that comparison of derivations is relevant to grammaticality. Here, we will assume certain key ideas from that paper and show how they provide a productive framework for an analysis of verb raising.
TEI and the Documentation of Mixtepec-Mixtec Jack Bowers
Language Documentation and Standards in Digital Humanities: TEI and the documentation of Mixtepec-Mixtec. Jack Bowers. Doctoral thesis, Computation and Language [cs.CL], École Pratique des Hautes Études, 2020. English. HAL Id: tel-03131936, https://tel.archives-ouvertes.fr/tel-03131936, submitted on 4 Feb 2021. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Prepared at the École Pratique des Hautes Études (École doctorale n° 472, École doctorale de l'École Pratique des Hautes Études; speciality: Linguistics) and defended by Jack Bowers on 8 October 2020. Thesis committee: Guillaume Jacques, Directeur de Recherche, CNRS (president); Alexis Michaud, Chargé de Recherche, CNRS (reviewer); Tomaž Erjavec, Senior Researcher, Jožef Stefan Institute (reviewer); Enrique Palancar, Directeur de Recherche, CNRS (examiner); Karlheinz Moerth, Senior Researcher, Austrian Center for Digital Humanities and Cultural Heritage (examiner); Emmanuel Schang, Maître de Conférence, Université d'Orléans (examiner); Benoit Sagot, Chargé de Recherche, Inria (examiner); Laurent Romary, Directeur de Recherche, Inria (thesis supervisor).
Creating Language Resources for Under-Resourced Languages: Methodologies, and Experiments with Arabic
Language Resources and Evaluation (journal manuscript). Creating Language Resources for Under-resourced Languages: Methodologies, and experiments with Arabic. Mahmoud El-Haj, Udo Kruschwitz, Chris Fox. Received: 08/11/2013; accepted: 15/07/2014.
Abstract: Language resources are important for those working on computational methods to analyse and study languages. These resources are needed to help advance the research in fields such as natural language processing, machine learning, information retrieval and text analysis in general. We describe the creation of useful resources for languages that currently lack them, taking resources for Arabic summarisation as a case study. We illustrate three different paradigms for creating language resources, namely: (1) using crowdsourcing to produce a small resource rapidly and relatively cheaply; (2) translating an existing gold-standard dataset, which is relatively easy but potentially of lower quality; and (3) using manual effort with appropriately skilled human participants to create a resource that is more expensive but of high quality. The last of these was used as a test collection for TAC-2011. An evaluation of the resources is also presented. The current paper describes and extends the resource creation activities and evaluations that underpinned experiments and findings that have previously appeared as an LREC workshop paper (El-Haj et al 2010), a student conference paper (El-Haj et al 2011b), and a description of a multilingual summarisation pilot (El-Haj et al 2011c; Giannakopoulos et al 2011).
M. El-Haj, School of Computing and Communications, Lancaster University, UK. …
Adaptation of Deep Bidirectional Transformers for Afrikaans Language
Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pages 2475-2478, Marseille, 11-16 May 2020. © European Language Resources Association (ELRA), licensed under CC-BY-NC.
Adaptation of Deep Bidirectional Transformers for Afrikaans Language. Sello Ralethe, Rand Merchant Bank, Johannesburg, South Africa.
Abstract: The recent success of pretrained language models in Natural Language Processing has sparked interest in training such models for languages other than English. Currently, training of these models can either be monolingual or multilingual based. In the case of multilingual models, such models are trained on concatenated data of multiple languages. We introduce AfriBERT, a language model for the Afrikaans language based on Bidirectional Encoder Representations from Transformers (BERT). We compare the performance of AfriBERT against multilingual BERT in multiple downstream tasks, namely part-of-speech tagging, named-entity recognition, and dependency parsing. Our results show that AfriBERT improves the current state-of-the-art in most of the tasks we considered, and that transfer learning from a multilingual to a monolingual model can have a significant performance improvement on downstream tasks. We release the pretrained model for AfriBERT.
Keywords: Afrikaans, BERT, AfriBERT, Multilingual
1. Introduction: Pretrained language models use large-scale unlabelled data to learn highly effective general language representations. These models have become ubiquitous in Natural Language Processing (NLP), pushing the state-of-the-art in … art for most of these tasks when compared with the multilingual approach. In summary, we make three main contributions: • We use transfer learning to train a monolingual BERT model on the Afrikaans language using a combined corpus. …
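The kind of adaptation this paper evaluates, a BERT checkpoint applied to Afrikaans downstream tasks such as POS tagging or NER, can be illustrated with a short loading-and-inference skeleton. The sketch below is not the authors' code: it assumes the Hugging Face transformers and PyTorch libraries are installed, uses the public bert-base-multilingual-cased checkpoint as a stand-in (the AfriBERT checkpoint name is not given in this excerpt), and the label count and example sentence are invented.

    # Minimal sketch: loading a pretrained BERT checkpoint and preparing it for
    # token-level fine-tuning (e.g. POS tagging or NER) on Afrikaans text.
    import torch
    from transformers import AutoTokenizer, AutoModelForTokenClassification

    checkpoint = "bert-base-multilingual-cased"  # placeholder; a monolingual model would be swapped in here
    num_labels = 5                               # invented size of a toy tag set

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=num_labels)

    # A single Afrikaans sentence as a stand-in for a real labelled training set.
    inputs = tokenizer("Die kat sit op die mat", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits          # shape: (1, num_subword_tokens, num_labels)

    predictions = logits.argmax(dim=-1)
    print(predictions.shape, predictions.tolist())
    # Real adaptation would now fine-tune these weights on labelled Afrikaans data,
    # which is the transfer-learning step the paper compares against multilingual BERT.

The classification head is freshly initialized, so the printed predictions are meaningless until fine-tuning; the comparison in the paper is precisely about how much that fine-tuning gains when it starts from a monolingual rather than a multilingual checkpoint.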