Standardization of the Formal Representation of Lexical Information for NLP

Laurent Romary, INRIA-Gemo & HUB-IDSL
Unter den Linden 6, 10099 Berlin
[email protected]

1. Complexity of lexical structures and related domains

Lexical databases play a central role in all natural language processing applications (Briscoe, 1991), ranging from simple spellcheckers to more complex machine translation systems. In most cases, they constitute the sole parameter information for the corresponding software, and apart from some very basic methods such as stemming (Lovins, 1968; Frakes, 1992), which rely on pure string processing and make few linguistic demands, hardly any language technology application can avoid relying on a minimal lexical resource. Even basic tasks such as word segmentation for languages such as Japanese, Korean or Chinese (Halpern, 2008), in particular with a view to accurate named entity recognition, can hardly be carried out without large lexical resources (a minimal sketch of such dictionary-based segmentation is given below). The same problem arises for the proper identification of multi-word units in "easier" languages, as demonstrated in (Schone and Jurafsky, 2001). Such observations have led ISO, for instance, to consider lexical representations as the main tenet of word segmentation processes (ISO/DIS 24614-1).

As a result, the cost of developing and maintaining a language technology application correlates strongly with the complexity and size of the corresponding lexical database. It is thus essential to standardise the structures and formats of lexical data, bearing in mind that actual complexity and coverage may fluctuate considerably from one application to another. Indeed, lexical databases can cover many different levels, from simple morphosyntactic descriptions, as in the Multext framework (Ide and Véronis, 1994; Erjavec, 2004), up to multilevel lexical descriptions for machine translation (Lieske et alii, 2001). The degree of generalisation and factorisation in such lexica also affects their reusability from one application to another, and any standardisation effort related to lexical information should be able to cope with multiple combinations of linguistic description levels.

Finally, it would be difficult to speak about NLP lexica without mentioning the relation they may bear to more human-oriented resources. In this respect, we should acknowledge that machine-readable dictionaries as well as terminological databases, even if conceived to fulfil other types of requirements, should not be seen as completely separate resources deserving unconnected standardisation activities. Machine-readable dictionaries are related to NLP lexica in two main respects. First, both are increasingly built on the basis of large-scale corpus exploration methods that identify morphological, syntactic or semantic patterns. As a consequence, they naturally share similar information components (examples, statistical information, general form and sense organisation) that result from this common origin. Second, the prior existence of machine-readable dictionaries in the digital world has made them a good basis for compiling NLP lexica by exploiting their content (Copestake et alii, 1995). Despite the difficulty of generating formal representations from prose-based descriptions, such methods have proved useful in ensuring that the target lexica have broad linguistic coverage.
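Returning to the word segmentation example from the beginning of this section, the following minimal Python sketch (not from the original paper) implements greedy longest-match ("maximum matching") segmentation, a baseline strategy behind many lexicon-driven segmenters for Japanese or Chinese; the two-entry lexicon is a toy illustration, not an actual resource:

    # A minimal sketch: greedy longest-match ("maximum matching")
    # segmentation against a lexicon. The lexicon is a toy illustration.
    def segment(text, lexicon):
        """Split text into the longest lexicon entries, left to right."""
        tokens = []
        i = 0
        while i < len(text):
            # Try the longest candidate first; fall back to one character
            # when nothing in the lexicon matches (out-of-vocabulary case).
            for j in range(len(text), i, -1):
                if text[i:j] in lexicon or j == i + 1:
                    tokens.append(text[i:j])
                    i = j
                    break
        return tokens

    print(segment("東京都庁", {"東京都", "庁"}))  # ['東京都', '庁']

The sketch makes the dependency explicit: any substring missing from the lexicon degrades into single-character fallback tokens, so segmentation quality is bounded by lexical coverage.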
As a consequence of this shared lineage between dictionaries and NLP lexica, descriptors, and sometimes organisation principles, for both types of lexical data should be similar enough to ensure that such extraction or mapping operations can rely on the same principles, and further that enough interoperability exists between them so that they can be used transparently when their actual ontological difference is not relevant.

We should also consider how NLP lexica should relate to terminological databases. There again, we can identify usage scenarios where the two types should be closely related. This is typically the case when the underlying NLP application has to deal either with multi-word units in texts, or with combinations of languages (translation or multilingual information retrieval). The interest here lies in the fact that huge terminological databases are continuously maintained by human experts and translators worldwide, which makes them, in the corresponding scientific or technical domains, the best reference for identifying or translating the corresponding occurrences in texts.

2. Overview of foundational works

Early attempts at standardising lexical structures in the nineties focused either on precisely defining the features needed for the basic description of lexical entries, as was the case for the Multext lexical family, or on the definition of a generic structure for lexical databases, as in the TEI guidelines. We will come back to the TEI representation below when exemplifying LMF, and we will first outline here why Multext was an important step forward in the domain of standardisation.

The full specification (see Bel et alii, 1995) of the Multext descriptive framework for both morphosyntactic lexica and, by extension, for the morphosyntactic annotation of texts, is entirely based on flat feature structures, which in turn are encoded as elementary tags. For instance, the German form "Hundes" can, on the one hand, be abstractly characterised by the following set of features:

    {cat=noun, type=common, gender=masculine, number=singular, case=genitive}

and, on the other hand, be encoded as a simple line representation in a text file as follows:

    Hundes Hund Ncmsg

where the form is accompanied by an indication of the associated lemma together with a concise tag. The tag itself maps one-to-one onto the underlying feature structure, so that features can refer univocally to the actual definition provided in the Multext specification. For instance, the specification describes the possible values for case in German as comprising {nominative, genitive, dative, accusative}, encoded in turn as {n, g, d, a} respectively in the tagset. As a matter of fact, the Multext framework implements the distinction between conceptual domain and value domain as defined in ISO 11179 and has thus paved the way for the current work on data categories (see below). Besides, the recent publication of a standardised XML format for feature structures (ISO 24610-1:2006) makes it possible to reuse Multext specifications and lexica in a modern technological framework (Erjavec, 2004). The wide success of the Multext guidelines, in collaboration with the Eagles project, resulted from both their simplicity and their comprehensiveness.
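The one-to-one correspondence between tags and feature structures described above is straightforward to operationalise. The following Python sketch (not part of the Multext distribution) expands a Multext-style noun tag into its flat feature structure; the attribute order and value tables are abridged and purely illustrative, the authoritative inventories being those of the Multext specification:

    # Illustrative, abridged tables; the real ones are defined by the
    # Multext specification.
    NOUN_ATTRIBUTES = ("type", "gender", "number", "case")
    VALUES = {
        "type":   {"c": "common", "p": "proper"},
        "gender": {"m": "masculine", "f": "feminine", "n": "neuter"},
        "number": {"s": "singular", "p": "plural"},
        "case":   {"n": "nominative", "g": "genitive",
                   "d": "dative", "a": "accusative"},
    }

    def decode_noun_tag(tag):
        """Expand a noun tag such as 'Ncmsg' into a flat feature structure."""
        assert tag[0] == "N", "this sketch only covers the noun category"
        features = {"cat": "noun"}
        for attribute, code in zip(NOUN_ATTRIBUTES, tag[1:]):
            features[attribute] = VALUES[attribute][code]
        return features

    print(decode_noun_tag("Ncmsg"))
    # {'cat': 'noun', 'type': 'common', 'gender': 'masculine',
    #  'number': 'singular', 'case': 'genitive'}

Because the mapping is positional and enumerable, the same tables can be inverted to generate or validate tags against the specification.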
These guidelines allowed the quick dissemination of reference resources in the initial seven languages of the project (English, French, Spanish, Italian, German, Dutch, Swedish), complemented by seven additional languages (Bulgarian, Czech, Estonian, Hungarian, Romanian and Slovene, followed afterwards by Serbian) in the context of the Multext-East project (see http://nl.ijs.si/ME/). The intimate liaison with part-of-speech tagging paved the way for such guidelines to be seen as a strong basis for interoperability, and the feature-structure-based formalism favoured further developments whereby new languages were described under the same modelling framework.

From a more theoretical point of view, feature structures appear to be a very appropriate framework for modelling the organisation of lexical structure and content. Without going back to the first shifts towards lexicalisation in the post-Chomskyan era, the expressive and inferential power of feature structures has been exploited in highly lexicalised formalisms such as HPSG (Pollard and Sag, 1994) as well as in general-purpose lexical databases such as Acquilex (Briscoe, 1991). This relation has been made explicit to the point of being seen as characterising the actual informational coverage of a lexical structure (Ide et alii, 1993) as well as a way to elicit inheritance mechanisms existing within a lexical entry (Ide et alii, 2000). It may be noted that these concepts have been seminal in the design of the TEI print dictionary chapter and of LMF, respectively.

A series of further developments took place in the late 90s and early 2000s to a) extend the experience gained from the Multext project in standardising formats and associated semantics, b) integrate theoretical contributions to lexical modelling and c) bring in input from additional representation levels that had gained maturity from a processing point of view. In Europe, a series of follow-up projects to Multext and Eagles, but also to pioneering SGML-based activities such as Genelex (Antoni-Lay et alii, 1994), worked towards a comprehensive lexical model incorporating multiple linguistic levels. The Simple project (Bel et alii, 2000), followed by the cross-Atlantic Isle project (Calzolari et alii), designed a multi-layered architecture for lexical structures, which is sketched out in figure 1. What characterises this architecture is basically that the multiple representation layers are both seen as autonomous modules, thus facilitating factorisation, and possibly linked together explicitly, thus ensuring a coherent framework for the development and maintenance of lexica. Both Simple and Isle produced multilingual lexical resources which, despite their low public availability, provided useful examples for the further development of LMF.

Figure 1: the general Mile lexical architecture.

In parallel
