Embeddings for the Lexicon: Modelling and Representation


Christian Chiarcos¹, Thierry Declerck², Maxim Ionov¹
¹ Applied Computational Linguistics Lab, Goethe University, Frankfurt am Main, Germany
² DFKI GmbH, Multilinguality and Language Technology, Saarbrücken, Germany

Abstract

In this paper, we propose an RDF model for representing embeddings for elements in lexical knowledge graphs (e.g. words and concepts), harmonizing two major paradigms in computational semantics on the level of representation, i.e., distributional semantics (using numerical vector representations) and lexical semantics (employing lexical knowledge graphs). Our model is part of a larger effort to extend a community standard for representing lexical resources, OntoLex-Lemon, with the means to connect it to corpus data. By allowing the conjoint modelling of embeddings and lexical knowledge graphs within this standard, we hope to contribute to the further consolidation of the field, its methodologies and the accessibility of the respective resources.

1 Background

With the rise of word and concept embeddings, lexical units at all levels of granularity have been subject to various approaches that embed them into numerical feature spaces, giving rise to a myriad of datasets with pre-trained embeddings generated with different algorithms that can be freely used in various NLP applications. With this paper, we present the current state of an effort to connect these embeddings with lexical knowledge graphs. This effort is part of an extension of a widely used community standard for representing, linking and publishing lexical resources on the web, OntoLex-Lemon (https://www.w3.org/2016/05/ontolex/). Our work aims to complement the emerging OntoLex module for representing Frequency, Attestation and Corpus Information (FrAC), which is currently being developed by the W3C Community Group "Ontology-Lexica", as presented in [4]. There we addressed only frequency and attestations, whereas core aspects of corpus-based information such as embeddings were identified as a topic for future developments. Here, we describe possible use cases for the latter and present our current model for this.

2 Sharing Embeddings on the Web

Although word embeddings are often calculated on the fly, the community recognizes the importance of pre-trained embeddings: they are readily available (which saves time) and cover large quantities of text (so that replicating them would be energy- and time-intensive). A further benefit of re-using embeddings is that they can be grounded in a well-defined and possibly shared feature space, whereas locally built embeddings (whose creation involves an element of randomness) would reside in an isolated feature space. This is particularly important in the context of multilingual applications, where collections of embeddings share a single feature space (e.g., in MUSE [6]).

Sharing embeddings, especially if they are calculated over a large amount of data, is thus not only an economical and ecological requirement, but also unavoidable in many cases. For these cases, we suggest applying established web standards to provide metadata about the lexical component of such data. We are not aware of any provider of pre-trained word embeddings that come with machine-readable metadata. One such example is language information: while ISO 639 codes are sometimes used for this purpose, they are not given in a machine-readable way, but rather documented in human-readable form or given implicitly as part of file names (see, e.g., the ISO 639-1 code 'en' in FastText/MUSE files such as https://dl.fbaipublicfiles.com/arrival/vectors/wiki.multi.en.vec).
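As an illustration of how such metadata could be published with established web vocabularies, the following Python sketch (using rdflib) describes a pre-trained embedding file with DCAT and Dublin Core terms. The dataset URI, the choice of properties and the Lexvo-based language URI are our own assumptions for this example, not part of the model proposed in this paper.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

# DCAT and Dublin Core are established web vocabularies for dataset metadata;
# lexvo.org provides resolvable URIs for ISO 639-3 language codes.
DCAT = Namespace("http://www.w3.org/ns/dcat#")
LEXVO = Namespace("http://lexvo.org/id/iso639-3/")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

# Hypothetical identifier for a pre-trained embedding distribution.
emb = URIRef("http://example.org/embeddings/wiki.multi.en.vec")

g.add((emb, RDF.type, DCAT.Distribution))
g.add((emb, DCTERMS.title,
       Literal("MUSE multilingual word embeddings (English)", lang="en")))
# Machine-readable language information instead of an ISO code hidden in a file name.
g.add((emb, DCTERMS.language, LEXVO.eng))
g.add((emb, DCAT.downloadURL,
       URIRef("https://dl.fbaipublicfiles.com/arrival/vectors/wiki.multi.en.vec")))

print(g.serialize(format="turtle"))
```

Serialized in this way, the language and provenance of the vectors become queryable instead of being implicit in the file name.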
2.1 Concept Embeddings

It should be noted, however, that our focus is not so much on word embeddings, since lexical information in this context is apparently trivial: plain tokens without any lexical information do not seem to require a structured approach to lexical semantics. This changes drastically for embeddings of more abstract lexical entities, e.g., word senses or concepts [17], which need to be synchronized between the embedding store and the lexical knowledge graph by which they are defined. WordNet [14] synset identifiers are a notorious example of the instability of concepts between different versions: synset 00019837-a means 'incapable of being put up with' in WordNet 1.71, but 'established by authority' in version 2.1. In WordNet 3.0, the first synset has the ID 02435671-a, the second 00179035-s (see https://github.com/globalwordnet/ili).

The precompiled synset embeddings provided with AutoExtend [17] illustrate the consequences: the synset IDs seem to refer to WordNet 2.1 (wn-2.1-00001742-n), but use an ad hoc notation and are not in a machine-readable format. More importantly, however, there is no synset 00001742-n in Princeton WordNet 2.1. It does exist in Princeton WordNet 1.71, and in fact, it seems that the authors actually used this edition instead of the version 2.1 they refer to in their paper. Machine-readable metadata would not prevent such mistakes, but it would facilitate their verifiability. Given the current conventions in the field, such mistakes will very likely go unnoticed, thus leading to highly unexpected results in applications developed on this basis. Our suggestion is therefore to use resolvable URIs as concept identifiers; if these resolve to machine-readable lexical data, the lexical information behind concept embeddings can be more easily verified and (as another application) integrated with predictions from distributional semantics. Indeed, the WordNet community has adopted OntoLex-Lemon as an RDF-based representation schema and developed the Collaborative Interlingual Index (ILI or CILI) [3] to establish sense mappings across a large number of WordNets. Reference to ILI URIs would make it possible to retrieve the lexical information behind a particular concept embedding, as the WordNet can be queried for the lexemes this concept (synset) is associated with. A versioning mismatch can then easily be spotted by comparing the cosine distance between the word embeddings of these lexemes and the embedding of the concept presumably derived from them.
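A minimal sketch of such a plausibility check, assuming that the concept (synset) embedding and the word embeddings of its lexemes are already available as NumPy arrays; the identifiers, the random example data and the similarity threshold are purely illustrative:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def check_concept_embedding(concept_vec, lexeme_vecs, threshold=0.2):
    """Flag a potential versioning mismatch: if a concept embedding is not
    noticeably similar to the word embeddings of any of its own lexemes,
    the concept identifier and the embedding may stem from different
    WordNet versions. The threshold is an arbitrary, illustrative choice."""
    sims = [cosine(concept_vec, v) for v in lexeme_vecs]
    return max(sims) >= threshold, sims

# Hypothetical data: a synset embedding and the embeddings of its lexemes,
# e.g. retrieved via the ILI URI of the concept and a word-embedding store.
rng = np.random.default_rng(0)
concept_vec = rng.normal(size=300)
lexeme_vecs = [rng.normal(size=300) for _ in ("establish", "institute")]

ok, sims = check_concept_embedding(concept_vec, lexeme_vecs)
print("plausible match:", ok, "similarities:", [round(s, 3) for s in sims])
```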
2.2 Organizing Contextualized Embeddings

A related challenge is the organization of contextualized embeddings, which are not adequately identified by a string (say, a word form), but only by that string in a particular context. By providing a reference vocabulary that organizes contextualized embeddings together with their respective contexts, this challenge can be overcome as well.

As a concrete use case for such information, consider a classical approach to word sense disambiguation: the Lesk algorithm [12] uses a bag-of-words model to assess the similarity of a word in a given context (to be classified) with example sentences or definitions provided for the different senses of that word in a lexical resource (training data provided, for example, by resources such as WordNet, thesauri, or conventional dictionaries). The approach was very influential, although it always suffered from data sparsity issues, as it relied on string matches between a very limited collection of reference words and context words. With distributional semantics and word embeddings, such sparsity issues are easily overcome, as literal matches are no longer required.

Indeed, more recent methods such as BERT allow us to induce contextualized word embeddings [20]. So, given only a single example for a particular word sense, this would be a basis for more informed word sense disambiguation. With more than one example per word sense, this becomes a little more difficult, as the examples now need to be either aggregated into a single sense vector, or linked to the particular word sense they pertain to (which requires a set of vectors as a data structure rather than a single vector). For data of the second type, we suggest following existing models for representing attestations (glosses, examples) for lexical entities, and representing contextualized embeddings (together with their context) as properties of attestations.

3 Addressing the Modelling Challenge

In order to meet these requirements, we propose a formalism that allows the conjoint publication of lexical knowledge graphs and embedding stores. We assume that future approaches to accessing and publishing embeddings on the web will be largely in line with high-level methods that abstract from details of the actual format and provide a conceptual view on the data set as a whole, as provided, for example, by the torchtext package of PyTorch. There is no need to limit ourselves to the currently dominant tabular structure of the formats commonly used to exchange embeddings; access to (and the complexity of) the actual data will be hidden from the user. A promising framework for the conjoint publication of embeddings and lexical knowledge graphs is, for example, RDF-HDT [9], a compact binary serialization format for RDF.

Figure 1: OntoLex-Lemon core model

The OntoLex-Lemon core model (Figure 1) comprises: (1) LexicalEntry, the representation of a lexeme in a lexical knowledge graph, which groups together the form(s) and sense(s), resp. concept(s), of a particular expression; (2) (lexical) Form, the written representation(s) of a particular lexical entry, with (optional) grammatical information; (3) LexicalSense, a word sense of one particular lexical entry; and (4) LexicalConcept, an element of meaning with different lexicalizations, e.g., a WordNet synset.

As the dominant vocabulary for lexical knowledge graphs on the web of data, the OntoLex-Lemon model has found wide adoption beyond its original focus on ontology lexicalization. In the WordNet Collaborative Interlingual Index [3] mentioned before, OntoLex vocabulary is used to provide a single interlingual identifier for every concept in every WordNet language, as well as machine-readable information about it (including links across languages). To broaden the spectrum of possible usages of the core model, various extensions have been developed by the community.
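The concrete FrAC vocabulary for embeddings is still under development, so the following rdflib sketch is only a hedged illustration of what the conjoint modelling discussed in Sections 2.2 and 3 could look like: an aggregated sense vector and per-attestation contextualized embeddings attached to an OntoLex lexical sense. Only the ontolex: terms are actual OntoLex-Lemon vocabulary; the frac: namespace URI, the property and class names and the plain-literal encoding of the vectors are placeholders of our own, not the module's final design.

```python
import numpy as np
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Real OntoLex-Lemon namespace; the FrAC namespace and all frac:* terms below
# are placeholders, since the module's vocabulary is still being finalized.
ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
FRAC = Namespace("http://example.org/frac#")   # assumed, not an official URI
EX = Namespace("http://example.org/lexicon/")

# Hypothetical contextualized vectors for two attested examples of one sense
# (e.g. produced by a BERT-style encoder); 4 dimensions only for readability.
ctx_vectors = {
    "att1": np.array([0.12, -0.40, 0.33, 0.08]),
    "att2": np.array([0.10, -0.35, 0.30, 0.11]),
}
# Option 1 from Section 2.2: aggregate the examples into a single sense vector.
sense_vector = np.mean(list(ctx_vectors.values()), axis=0)

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("frac", FRAC)

entry, sense = EX.bank_entry, EX.bank_sense1
g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, ONTOLEX.sense, sense))
g.add((sense, RDF.type, ONTOLEX.LexicalSense))

# Aggregated sense embedding attached directly to the sense (placeholder property).
g.add((sense, FRAC.embedding,
       Literal(" ".join(f"{x:.4f}" for x in sense_vector))))

# Option 2 from Section 2.2: one contextualized embedding per attestation,
# stored together with the context (quotation) it was derived from.
for att_id, vec in ctx_vectors.items():
    att = EX[att_id]
    g.add((sense, FRAC.attestation, att))        # placeholder property
    g.add((att, RDF.type, FRAC.Attestation))     # placeholder class
    g.add((att, RDFS.label,                      # context text, placeholder property choice
           Literal("example sentence containing 'bank'", lang="en")))
    g.add((att, FRAC.contextualizedEmbedding,    # placeholder property
           Literal(" ".join(f"{x:.4f}" for x in vec))))

print(g.serialize(format="turtle"))
```

In an actual FrAC-based lexicon, the attestation vocabulary and the vector encoding would of course follow the module's published specification rather than these placeholders.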