Lexical Resources for Low-Resource PoS Tagging in Neural Times


Barbara Plank and Sigrid Klerke
Department of Computer Science, ITU, IT University of Copenhagen, Denmark
[email protected], [email protected]

Abstract

More and more evidence is appearing that integrating symbolic lexical knowledge into neural models aids learning. This contrasts with the widely held belief that neural networks largely learn their own feature representations. For example, recent work has shown benefits of integrating lexicons to aid cross-lingual part-of-speech (PoS) tagging. However, little is known about how complementary such additional information is, and to what extent improvements depend on the coverage and quality of these external resources. This paper seeks to fill this gap by providing a thorough analysis of the contributions of lexical resources for cross-lingual PoS tagging in neural times.
1 Introduction

In natural language processing, the deep learning revolution has shifted the focus from conventional hand-crafted symbolic representations to dense inputs, i.e., representations learned automatically from corpora. However, particularly when working with low-resource languages, small amounts of symbolic lexical resources such as user-generated lexicons are often available even when gold-standard corpora are not. Recent work has shown benefits of integrating conventional lexical information into neural cross-lingual part-of-speech (PoS) tagging (Plank and Agić, 2018). However, little is known about how complementary such additional information is, and to what extent improvements depend on the coverage and quality of these external resources.

The contribution of this paper is an analysis of the contributions of the models' components (tagger transfer through annotation projection vs. the contribution of encoding lexical and morphosyntactic resources). We seek to understand under which conditions a low-resource neural tagger benefits from external lexical knowledge. In particular:

a) we evaluate the neural tagger across a total of 20+ languages, proposing a novel baseline which uses retrofitting;
b) we investigate the reliance on dictionary size and properties;
c) we analyze model-internal representations via a probing task, to investigate to what extent they capture morphosyntactic information.

Our experiments confirm the synergetic effect between a neural tagger and symbolic linguistic knowledge. Moreover, our analysis shows that the composition of the dictionary plays a more important role than its coverage.

2 Methodology

Our base tagger is a bidirectional long short-term memory network (bi-LSTM) (Graves and Schmidhuber, 2005; Hochreiter and Schmidhuber, 1997; Plank et al., 2016) with a rich word encoding model which consists of a character-based bi-LSTM representation ~c_w paired with pre-trained word embeddings ~w. Sub-word and especially character-level modeling is currently pervasive in top-performing neural sequence taggers, owing to its capacity to effectively capture morphological features that are useful for labeling out-of-vocabulary (OOV) items. Sub-word information is often coupled with standard word embeddings to mitigate OOV issues. Specifically, i) word embeddings are typically built from massive unlabeled datasets, so OOVs are less likely to be encountered at test time, while ii) character embeddings offer a further, linguistically plausible fallback for the remaining OOVs by modeling intra-word relations. Through these approaches, multilingual PoS tagging has seen tangible gains from neural methods in recent years.

2.1 Lexical resources

We use linguistic resources that are user-generated and available for many languages. The first is WIKTIONARY, a word type dictionary that maps words to one of the 12 Universal PoS tags (Li et al., 2012; Petrov et al., 2012). The second resource is UNIMORPH, a morphological dictionary that provides inflectional paradigms for 350 languages (Kirov et al., 2016). For Wiktionary, we use the freely available dictionaries from Li et al. (2012). UniMorph covers between 8 and 38 morphological properties (for English and Finnish, respectively; more details: http://unimorph.org/). The sizes of the dictionaries vary considerably, from a few thousand entries (e.g., for Hindi and Bulgarian) to 2M entries (Finnish UniMorph). We study the impact of smaller dictionary sizes in Section 4.1.

The tagger we analyze in this paper is an extension of the base tagger, called the distant supervision from disparate sources (DSDS) tagger (Plank and Agić, 2018). It is trained on projected data and further differs from the base tagger by the integration of lexicon information. In particular, given a lexicon src, DSDS uses ~e_src to embed the lexicon into an l-dimensional space, where ~e_src is the concatenation of all m embedded properties of length l (empirically set, see Section 2.2), and a zero vector for words not in the lexicon. A property here is a possible PoS tag (for Wiktionary) or a morphological feature (for UniMorph). To integrate the type-level supervision, the lexicon embedding vector is created and concatenated to the word- and character-level representations for every token: ~w ∘ ~c_w ∘ ~e.

We compare DSDS to alternative ways of using lexical information. The first approach uses lexical information directly during decoding (Täckström et al., 2013). The second approach is more implicit and uses the lexicon to induce better word embeddings for tagger initialization. In particular, we use the dictionary to retrofit off-the-shelf embeddings (Faruqui et al., 2015) and initialize the tagger with those. The latter is a novel approach which, to the best of our knowledge, has not yet been evaluated in the neural tagging literature. The idea is to bring the off-the-shelf embeddings closer to the PoS tagging task by retrofitting them with syntactic clusters derived from the lexicon.

We take a deeper look at the quality of the lexicons by comparing their tag sets to the gold treebank data, inspired by Li et al. (2012). In particular, let T be the dictionary derived from the gold treebank (development data), and W be the user-generated dictionary, i.e., the respective Wiktionary (as we are looking at PoS tags). For each word type, we compare the tag sets in T and W and distinguish six cases:

1. NONE: the word type is in the training data but not in the lexicon (out-of-lexicon);
2. EQUAL: W = T;
3. DISJOINT: W ∩ T = ∅;
4. OVERLAP: W ∩ T ≠ ∅;
5. SUBSET: W ⊂ T;
6. SUPERSET: W ⊃ T.

In an ideal setup, the dictionaries contain no disjoint tag sets, and larger amounts of equal tag sets or supersets of the treebank data. This is particularly desirable for approaches that take lexical information as type-level supervision.

2.2 Experimental setup

In this section we describe the baselines, the data and the tagger hyperparameters.

Data We use the 12 Universal PoS tags (Petrov et al., 2012). The set of languages is motivated by the accessibility of embeddings and dictionaries. We focus here on the 21 dev sets of Universal Dependencies 2.1 (Nivre et al., 2017); test set results are reported by Plank and Agić (2018), showing that DSDS provides a viable alternative.

Annotation projection To build the taggers for new languages, we resort to annotation projection following Plank and Agić (2018). In particular, they employ the approach by Agić et al. (2016), where labels are projected from multiple sources to multiple targets and then decoded through weighted majority voting with word alignment probabilities and source PoS tagger confidences. The wide-coverage Watchtower corpus (WTC) by Agić et al. (2016) is used, where 5k instances are selected via data selection by alignment coverage, following Plank and Agić (2018).
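The six-way tag-set comparison of Section 2.1 reduces to plain set operations. A minimal sketch (not the authors' code): tag sets are Python sets, and the containment cases are checked before the generic overlap case, since a proper subset also overlaps.

```python
def compare_tagsets(W, T):
    """Classify the relation between a word type's lexicon tag set W
    and its gold-treebank tag set T. W may be None or empty for
    out-of-lexicon word types."""
    if not W:
        return "NONE"          # in the training data, not in the lexicon
    if W == T:
        return "EQUAL"
    if not (W & T):
        return "DISJOINT"      # W ∩ T = ∅
    if W < T:
        return "SUBSET"        # W ⊂ T (proper subset)
    if W > T:
        return "SUPERSET"      # W ⊃ T (proper superset)
    return "OVERLAP"           # W ∩ T ≠ ∅, neither set contains the other

print(compare_tagsets({"NOUN", "ADJ"}, {"NOUN", "VERB"}))  # OVERLAP
print(compare_tagsets({"NOUN", "VERB"}, {"NOUN"}))         # SUPERSET
```

Counting these categories over all word types of a treebank yields the per-language profile summarized in Figure 1.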
Baselines We compare to the following alternatives: type-constraint Wiktionary supervision (Li et al., 2012) and retrofitting initialization. We empirically find it best not to update the word embeddings in this noisy training setup, as that results in better performance, see Section 4.4.

[Figure 1 (image not recoverable from this extraction). Caption: Analysis of Wiktionary vs. gold (dev set) tag sets. 'None': percentage of word types not covered in the lexicon. 'Disjoint': the gold data and Wiktionary do not agree on the tag sets. See Section 2.1 for details on the other categories.]

3 Results

Table 1 presents our replication results, i.e., tagging accuracy for the 21 individual languages, with means over all languages and language families (for which at least two languages are available).

Table 1: DEV SETS (UD 2.1)

LANGUAGE            5k     TCW    RETRO   DSDS
Bulgarian (bg)      89.8   89.9   87.1    91.0
Croatian (hr)       84.7   85.2   83.0    85.9
Czech (cs)          87.5   87.5   84.9    87.4
Danish (da)         89.8   89.3   88.2    90.1
Dutch (nl)          88.6   89.2   86.6    89.6
English (en)        86.4   87.6   82.5    87.3
Finnish (fi)        81.7   81.4   79.2    83.1
French (fr)         91.5   90.0   89.8    91.3
German (de)         85.8   87.1   84.7    87.5
Greek (el)          80.9   86.1   79.3    79.2
Hebrew (he)         75.8   75.9   71.7    76.8
Hindi (hi)          63.8   63.9   63.0    66.2
Hungarian (hu)      77.5   77.5   75.5    76.2
Italian (it)        92.2   91.8   90.0    93.7
Norwegian (no)      91.0   91.1   88.8    91.4
Persian (fa)        43.6   43.8   44.1    43.6
Polish (pl)         84.9   84.9   83.3    85.4
Portuguese (pt)     92.4   92.2   88.6    93.1
Romanian (ro)       84.2   84.2   80.2    86.0
Spanish (es)        90.7   88.9   88.9    91.7
Swedish (sv)        89.4   89.2   87.0    89.8
AVG (21)            83.4   83.6   81.3    84.1
GERMANIC (6)        88.5   88.9   86.3    89.3
ROMANCE (5)         90.8   90.1   88.4    91.4
SLAVIC (4)          86.7   86.8   84.6    87.4
INDO-IRANIAN (2)    53.7   53.8   53.5    54.9
URALIC (2)          79.6   79.4   79.2    79.6
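As an illustration of the per-token input described in Section 2.1 — ~w ∘ ~c_w ∘ ~e, under one plausible reading in which each of the m lexicon properties gets its own l-dimensional slot, with zero vectors for inactive properties and for out-of-lexicon words — the concatenation can be sketched in plain Python. This is not the authors' implementation: the lexicon, property inventory, and dimensions below are toy stand-ins, and the property embeddings (random here) would be learned during training.

```python
import random

random.seed(0)

# Toy, hypothetical lexicon: word type -> set of lexicon properties.
# With Wiktionary the properties are PoS tags; with UniMorph they
# would be morphological features.
LEXICON = {"run": {"NOUN", "VERB"}, "the": {"DET"}}
PROPERTIES = ["ADJ", "DET", "NOUN", "VERB"]   # m = 4 properties
L = 5                                          # property embedding size l

# One l-dimensional embedding per property (random stand-ins here).
prop_emb = {p: [random.gauss(0, 1) for _ in range(L)] for p in PROPERTIES}

def lexicon_vector(word):
    """~e: concatenate the m property slots; a slot is the property's
    embedding if the word carries it, else a zero vector. Out-of-lexicon
    words get all-zero slots."""
    props = LEXICON.get(word, set())
    vec = []
    for p in PROPERTIES:
        vec.extend(prop_emb[p] if p in props else [0.0] * L)
    return vec                                 # length m * l

def token_representation(w, c_w, word):
    """Per-token DSDS input: ~w ∘ ~c_w ∘ ~e (plain concatenation)."""
    return w + c_w + lexicon_vector(word)

w = [random.gauss(0, 1) for _ in range(64)]    # stand-in word embedding ~w
c = [random.gauss(0, 1) for _ in range(32)]    # stand-in char-bi-LSTM output ~c_w
print(len(token_representation(w, c, "run")))       # 64 + 32 + 4*5 = 116
print(all(x == 0.0 for x in lexicon_vector("cat")))  # True: out-of-lexicon
```

The zero vector for out-of-lexicon words lets the tagger fall back gracefully on the word- and character-level signal when the type-level supervision is absent.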