Combining Dictionary- and Corpus-Based Concept Extraction

Joan Codina-Filbà (NLP Group, Pompeu Fabra University, Barcelona, email: [email protected]) and Leo Wanner (Catalan Institute for Research and Advanced Studies (ICREA) and NLP Group, Pompeu Fabra University, Barcelona, email: [email protected])

Abstract. Concept extraction is an increasingly popular topic in deep text analysis. Concepts are individual content elements; their extraction thus offers an overview of the content of the material from which they were extracted. In the case of domain-specific material, concept extraction boils down to term identification. The most straightforward strategy for term identification is a look-up in existing terminological resources. In recent research, this strategy has a poor reputation because it is prone to scaling limitations due to neologisms, lexical variation, synonymy, etc., which subject the terminology to constant change. For this reason, many works developed statistical techniques to extract concepts. But the existence of a crowdsourced resource such as Wikipedia is changing the landscape. We present a hybrid approach that combines state-of-the-art statistical techniques with the use of the large-scale term acquisition tool BabelFy to perform concept extraction. The combination of both allows us to boost the performance, compared to approaches that use these techniques separately.

1 Introduction

Concept extraction is an increasingly popular topic in deep text analysis. Concepts are individual content elements, such that their extraction from textual material offers an overview of the content of this material. In applications in which the material is domain-specific, concept extraction commonly boils down to the identification and extraction of terms, i.e., domain-specific (mono- or multi-word) lexical items. Usually, these are nominal lexical items that denote concrete or abstract entities. The most straightforward strategy for term identification is a look-up in existing terminological dictionaries. In recent research, this strategy has a poor reputation because it is prone to scaling limitations due to neologisms, lexical variation, synonymy, etc., which subject the terminology to constant change [15]. As an alternative, a number of works cast syntactic and/or semantic criteria into rules to determine whether a given lexical item qualifies as a term [3, 4, 7], while others apply the statistical criterion of relative frequency of an item in a domain-specific corpus; see, for example, [1, 10, 22, 24, 25]. Most often, state-of-the-art statistical term identification is preceded by a rule-based stage in which the preselection of term candidates is done drawing upon linguistic criteria.

However, most of the state-of-the-art proposals neglect that a new generation of terminological (and thus conceptual) resources has emerged and, with it, instruments to keep these resources updated. Consider, for instance, BabelNet (http://www.babelnet.org) [21] and BabelFy (http://www.babelfy.org) [20]. BabelNet captures the terms from Wikipedia (http://www.wikipedia.org), WikiData (wikidata.org), OmegaWiki (omegawiki.org), Wiktionary (wiktionary.org) and WordNet [19] and disambiguates and structures them in terms of an ontology. Wikipedia is nowadays a crowdsourced multilingual encyclopedia that is constantly being updated by more than 100,000 active editors for the English version alone. There are studies, cf., e.g., [11], which show that by observing edits in Wikipedia, one can learn what is happening around the globe. BabelFy is a tool that scans a text in search of terms and named entities (NEs) that are present in BabelNet. Once the terms and NEs are detected, it uses the text as context in order to disambiguate them.
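To make the dictionary-based side concrete, the following is a minimal sketch of how a text can be sent to BabelFy for term and NE disambiguation. It assumes BabelFy's public REST endpoint (https://babelfy.io/v1/disambiguate), its text/lang/key parameters, and the charFragment/babelSynsetID response fields; these details are not part of the paper, so consult the BabelFy documentation for the authoritative interface.

```python
import requests

# Assumed public REST endpoint and parameters of BabelFy; see the official
# documentation for the authoritative interface.
BABELFY_URL = "https://babelfy.io/v1/disambiguate"
API_KEY = "YOUR_BABELNET_KEY"  # placeholder key, obtainable from babelnet.org

def babelfy_annotate(text, lang="EN"):
    """Scan `text` for terms/NEs known to BabelNet and disambiguate them
    in context, as BabelFy does."""
    params = {"text": text, "lang": lang, "key": API_KEY}
    resp = requests.get(BABELFY_URL, params=params)
    resp.raise_for_status()
    annotations = []
    for ann in resp.json():
        # charFragment gives inclusive character offsets of the matched span.
        start = ann["charFragment"]["start"]
        end = ann["charFragment"]["end"]
        annotations.append({
            "fragment": text[start:end + 1],   # surface form of the term/NE
            "synset": ann["babelSynsetID"],    # disambiguated BabelNet concept
            "score": ann.get("score"),         # disambiguation confidence
        })
    return annotations

# Multiword terms such as "floating point" are linked to BabelNet synsets:
# babelfy_annotate("The floating point routine was rewritten in assembly.")
```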
In the light of this significant change of the terminological dictionary landscape, it is time to assess whether dictionary-driven concept extraction cannot be factored into linguistic and corpus-driven concept extraction to improve the performance of the overall task. The three techniques complement each other: linguistic criteria filter term candidates, statistical measures help detect domain-specific terms among these candidates, and dictionaries provide terms that we can assume to be semantically meaningful.

In what follows, we present our work in which we incorporate BabelFy, and by extension BabelNet and Wikipedia, into the process of domain-specific linguistic and statistical term recognition. This work has been carried out in the context of the MULTISENSOR Project, which targets, among other objectives, concept extraction as a basis for content-oriented visual and textual summaries of multilingual online textual material.

The remainder of the paper is structured as follows. In Section 2, we introduce the basics of statistical and dictionary-based concept extraction. In Section 3, we then outline our approach. The setup of the experiments we carried out to evaluate our approach and the results we achieved are discussed in Sections 4 and 5. In Section 6, we discuss the achieved results, while Section 7, finally, draws some conclusions and points out future work.

2 The Basics of Statistical and Dictionary-Based Concept Extraction

Only a few proposals for concept extraction rely solely on linguistic analysis to do term extraction, always assuming that a term is a nominal phrase (NP). Bourigault [5], as one of the first to address the task of concept extraction, uses part-of-speech (PoS) tags for this purpose. Manning and Schütze [16] and Kaur [14] draw upon regular expressions over PoS sequences.

More common is the extension of statistical term extraction by a preceding linguistic feature-driven term detection stage, such that we can speak of two core strategies for concept extraction: statistical (or corpus-based) concept extraction and dictionary-based concept extraction. As already pointed out, concept extraction means here "term extraction". Although resources such as BabelNet are considerably richer than traditional terminological dictionaries, they can be considered the modern variant of the latter. Let us revise the basics of these two core strategies.

2.1 Statistical term extraction

Corpus-based terminology extraction started to attract attention in the 90s, with the increasing availability of large computerized textual corpora; see [13, 6] for a review of some early proposals. In general, corpus-based concept extraction relies on corpus statistics to score and select the terms among the term candidates. Over the years, a number of different statistics have been suggested to identify relevant terms and the best word groupings; cf., e.g., [2].

As a rule, the extraction is done in a three-step procedure (a schematic sketch follows the list):

1. Term candidate detection. The objective of this first step is to find words and multiword sequences that could be terms. This first step has to offer a high recall, as the terms missed here will not be considered in the remainder of the procedure.

2. Feature computation for term candidates. For each term candidate, a set of features is computed. Most of the features are statistical and measure how often the term is found as such in the corpus and in the document, as part of other terms, and also with respect to the words that compose it. These basic features are then combined to compute a global score.

3. Selection of the final terms from the candidates. Term candidates that obtain higher scores are selected as terms. The cut-off strategy can be based on a threshold applied to the score (obtained from a training set, in order to optimize precision/recall) or on a fixed number of terms (in that case, the top N terms are selected).

In what follows, we discuss each of these steps in turn.
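The three steps can be summarized in a small pipeline skeleton. This is an illustrative sketch, not the system described in this paper: the candidate detector, the feature functions, the combination function and the fixed top-N cut-off are placeholder assumptions that show how detection, scoring and selection fit together.

```python
from typing import Callable, Iterable, List

def extract_terms(documents: List[str],
                  detect_candidates: Callable[[str], Iterable[str]],
                  features: List[Callable[[str], float]],
                  combine: Callable[[List[float]], float],
                  top_n: int = 100) -> List[str]:
    """Generic three-step statistical term extraction."""
    # Step 1: term candidate detection. High recall matters here, since
    # candidates missed now are lost for the rest of the procedure.
    candidates = {c for doc in documents for c in detect_candidates(doc)}

    # Step 2: compute a feature vector per candidate and combine the
    # features into a single global score.
    scores = {c: combine([f(c) for f in features]) for c in candidates}

    # Step 3: select the final terms. Here a fixed top-N cut-off is used;
    # a score threshold tuned on a training set is the alternative.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```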
2.1.1 Term candidate detection

The most basic statistical term candidate detection strategies are based on n-gram extraction: any n-gram in a text collection could be a term candidate.

A classical scoring feature is TF-IDF, adapted to termhood: the IDF is computed upon a reference corpus, while the frequency of the candidate term in the target domain corpus can be assumed to be the TF, such that we get $TF_{target} \cdot IDF_{ref}$ [16].

Other measures have been developed specifically for term detection. The most common of them are (code sketches of the measures follow the list):

• C-Value [10]. The objective of the C-Value score is to assign a termhood value to each candidate token sequence, considering also its occurrence inside other terms. The C-Value expands each term candidate with all its possible nested multiword subterms, which also become term candidates. For instance, the term candidate floating point routine includes two nested terms: floating point, which is a term, and point routine, which is not a meaningful expression. The following formula formalizes the calculation of the C-Value measure:

$$\mathrm{CV}(t) = \begin{cases} \log_2|t| \cdot TF(t) & \text{if } t \text{ is not nested} \\[4pt] \log_2|t| \cdot \left( TF(t) - \dfrac{1}{P(T_t)} \displaystyle\sum_{b \in T_t} TF(b) \right) & \text{otherwise} \end{cases} \qquad (1)$$

where $t$ is the candidate token sequence, $T_t$ the set of extracted candidate terms that contain $t$, and $P(T_t)$ the number of these candidate terms.

• Lexical Cohesion [22]. Lexical Cohesion computes the cohesion of multiword terms, that is, at this stage, of any arbitrary n-gram. This measure is a generalization of the Dice coefficient; it is proportional to the length of the term and to its frequency:

$$LC(t) = \frac{|t| \cdot \log_{10}(TF(t)) \cdot TF(t)}{\sum_{w \in t} TF(w)} \qquad (2)$$

where $|t|$ is the length of the term and $w$ ranges over the words that compose it.

• Domain Relevance [25]. This measure compares the frequencies of the term between the target and reference datasets:

$$DR(t) = \frac{TF_{target}(t)}{TF_{target}(t) + TF_{ref}(t)} \qquad (3)$$

• Relevance [24]. This measure has been developed in an application that focuses on Spanish. The syntactic patterns used to detect term candidates are thus specific to Spanish, but the term scoring …
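As an illustration of Formula (1), the following is a sketch of the C-Value computation, assuming that term frequencies and the set of candidates have already been collected by the candidate detection step; the token-level containment test and the example data are our own illustrative choices. Note that, as in the formula, a single-word candidate gets weight $\log_2 1 = 0$, so the measure targets multiword terms.

```python
import math

def c_value(t, tf, candidates):
    """C-Value of candidate token sequence `t` (Formula 1).

    `tf` maps every candidate to its corpus frequency; `candidates` is
    the set of all extracted candidate token sequences.
    """
    length = len(t.split())  # |t|, the length of t in tokens
    # T_t: the extracted candidates in which t occurs nested.
    T_t = [b for b in candidates if b != t and f" {t} " in f" {b} "]
    if not T_t:
        # t is not nested inside any other candidate.
        return math.log2(length) * tf[t]
    # Discount the frequency of t by its mean frequency inside the
    # P(T_t) = len(T_t) longer candidates that contain it.
    return math.log2(length) * (tf[t] - sum(tf[b] for b in T_t) / len(T_t))

# The paper's example: "floating point" is nested in "floating point routine".
tf = {"floating point routine": 3, "floating point": 10, "point routine": 3}
print(c_value("floating point", tf, set(tf)))  # log2(2) * (10 - 3/1) = 7.0
```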
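Formulas (2) and (3) are simpler to state in code. The sketch below assumes precomputed frequency dictionaries for candidate terms, individual words, and the target and reference corpora; the function and parameter names are ours.

```python
import math

def lexical_cohesion(t, tf, word_tf):
    """Lexical Cohesion (Formula 2): compares the frequency of the n-gram
    as a whole with the frequencies of its component words."""
    words = t.split()
    return (len(words) * math.log10(tf[t]) * tf[t]
            / sum(word_tf[w] for w in words))

def domain_relevance(t, tf_target, tf_ref):
    """Domain Relevance (Formula 3): share of the term's occurrences that
    fall in the target domain corpus rather than in the reference corpus."""
    target = tf_target.get(t, 0)
    denom = target + tf_ref.get(t, 0)
    return target / denom if denom else 0.0

# A term frequent in the domain corpus but rare in the reference corpus
# scores close to 1:
print(domain_relevance("floating point", {"floating point": 40},
                       {"floating point": 2}))  # 40 / 42 ≈ 0.95
```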
