
UKP-UBC Entity Linking at TAC-KBP

Nicolai Erbs†, Eneko Agirre‡, Aitor Soroa‡, Ander Barrena‡, Ugaitz Etxebarria‡, Iryna Gurevych†, Torsten Zesch†

† Ubiquitous Knowledge Processing Lab (UKP-TUDA), Department of Computer Science, Technische Universität Darmstadt, http://www.ukp.tu-darmstadt.de
‡ IXA NLP Group, University of the Basque Country, Donostia, Basque Country, http://ixa.si.ehu.es

Abstract

This paper describes our system for the entity linking task at TAC KBP 2012. We developed a supervised system using dictionary-based, similarity-based, and graph-based features. As a global feature, we apply Personalized PageRank with Wikipedia to weight the list of entity candidates. We use two Wikipedia versions with different timestamps to enrich the knowledge base and develop an algorithm for mapping between the two Wikipedia versions. We observed a large drop in system performance when moving from training data to test data. Our error analysis showed that the guidelines for mention annotation were not followed by annotators. An additional mention detection component should improve performance to the expected level.

1 Introduction

Entity linking is the task of linking a surface string in a text (e.g. Washington) to the correctly disambiguated representation of the entity in some knowledge base (e.g. either George Washington or Washington, D.C., depending on what Washington was referring to in this context). Entity linking is an important task with applications in several domains including information retrieval, text structuring, and machine translation. The Knowledge Base Population workshop¹ of the Text Analysis Conference provides a framework for comparing different approaches to entity linking in a controlled setting. We focus on the entity linking task, which assigns an entity from a knowledge base to a marked mention in a text.

¹ http://www.nist.gov/tac/2012/KBP/index.html

Our approach is solely based on the given mention (no further approaches for extending the mention are used) and on ranking the candidate entities. This enables the system to be used for rather general problems like word sense disambiguation (Agirre and Edmonds, 2006; Navigli, 2009).

This paper is structured as follows: Section 2 provides an overview of the notation used throughout this paper. In Section 3, we describe the system architecture and classify the features used in Section 4. We present the official results and our error analysis in Section 5 and conclude with a description of future work (Section 6) and a summary (Section 7).

2 Notation

Each document d contains exactly one mention m with a context c. For each mention m, there exists a set of entities E, which are entity candidates for m. Each feature has a scoring function s(m, e) that computes the probability for each mention-entity pair (m, e) that m refers to e. For features that consider the context, the scoring function s(m, e, c) also takes the context into account.
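To make this notation concrete, the following minimal Python sketch shows one way the per-feature scoring functions s(m, e) and s(m, e, c) could be organized. It is illustrative only: the names Feature, DictionaryFeature, and rank_candidates are assumptions for this sketch and do not correspond to the actual UIMA components of the system described in Section 3.

    # Illustrative sketch of the notation in Section 2; not the actual
    # UIMA-based implementation.
    from typing import Dict, List, Optional

    class Feature:
        """A feature scores how likely mention m refers to entity candidate e."""
        def score(self, m: str, e: str, c: Optional[str] = None) -> float:
            raise NotImplementedError

    class DictionaryFeature(Feature):
        """Mention-only feature: s(m, e) estimated from dictionary target statistics."""
        def __init__(self, counts: Dict[str, Dict[str, int]]):
            self.counts = counts  # mention -> {entity: link frequency}

        def score(self, m: str, e: str, c: Optional[str] = None) -> float:
            targets = self.counts.get(m, {})
            total = sum(targets.values())
            return targets.get(e, 0) / total if total else 0.0

    def rank_candidates(features: List[Feature], m: str, E: List[str],
                        c: Optional[str] = None) -> List[str]:
        # Rank candidates by an unweighted sum of feature scores; the submitted
        # runs instead use single features, linear combinations, or a learned model.
        return sorted(E, key=lambda e: sum(f.score(m, e, c) for f in features),
                      reverse=True)

As a hypothetical usage example, a dictionary entry mapping the mention Washington to frequencies for George Washington and Washington, D.C. would rank the more frequently linked entity first.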
3 System architecture

Figure 1 shows the architecture of the system. The system is designed using modular components based on the UIMA framework (Ferrucci and Lally, 2004), augmented with the functionality provided by uimaFIT (Ogren and Bethard, 2009).

Figure 1: System architecture (pipeline components: Preprocessing, Candidate generation, Candidate expansion, Ranking feature extraction, Candidate ranking, KB mapping).

Preprocessing. Preprocessing components were taken from the open source project DKPro Core². We used tokenization, POS-tagging, lemmatization, chunking, and named entity recognition.

² code.google.com/p/dkpro-core-asl/ and code.google.com/p/dkpro-core-gpl/

Candidate generation. We reduce the whole set of entities E to those that have previously been used as a target for the corresponding mention m. For this purpose we use two types of dictionaries: One is extracted from existing links in Wikipedia (Chang et al., 2010), redirects, and category information. The other is extracted from Google search logs (Spitkovsky and Chang, 2012) and incorporates additional information like source language. A Wikipedia dictionary is specific to one Wikipedia timestamp, while information in the Google dictionary is collected over a two-year period.

The system uses several dictionaries of those two types in parallel to overcome the disadvantages of each dictionary. A dictionary constructed using a Wikipedia version close to the KB timestamp (i.e. 12th of March 2008) covers all entities in the KB but contains fewer mention-entity pairs, due to the smaller size of Wikipedia in 2008. A dictionary constructed from a more recent Wikipedia (i.e. 5th of April 2011) provides a fair number of mention-entity pairs, but may contain entities that have been renamed, merged, split, or deleted since the KB version of 2008. The Google dictionary provides statistical information for basically any mention, but is sometimes noisy.

Candidate expansion. When the target query string is not found in the dictionary, the following heuristics are applied in turn to the mention:

1. remove parentheses and the text in between
2. remove a leading "the" from the mention

In one run that accessed external resources queried online, we also use the Did You Mean (DYM) API in Wikipedia. First, we check if the dictionary lookup returns any entity candidates. If not, we try DYM. If DYM fails to return any candidates, it is applied again after each of the above heuristics. That is, if the first heuristic fails, then we try DYM for the mention without parentheses, and so on.

Ranking feature extraction. For each mention-entity pair (m, e), each of the features computes a scoring function s(m, e), or s(m, e, c) if the context is considered. Details of these features are described in Section 4.

Candidate ranking. In order to rank entity candidates, several options are available:

• the score of a single feature
• a linear combination of many or all features
• a supervised component that learns a model

We submitted several runs for different feature combinations. The supervised combinations were generated with Rank SVM (Tsochantaridis, 2006).

KB mapping. Due to constant revisions of Wikipedia articles, not all articles can be mapped to older versions using their title. Hence, we follow a three-step process: First, we try to map an article by its title. Second, we search for a redirect from the corresponding Wikipedia version with the same title and map it to its target. Third, the heuristics previously described for candidate expansion are used to map entities. If none of these methods returns a KB entry, NIL (Not in KB) is returned.
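The candidate expansion heuristics and the three-step KB mapping described above follow the same pattern: try the exact lookup first, then fall back to progressively weaker strategies, and return NIL only when everything fails. The following Python sketch illustrates that control flow under simplifying assumptions; the helpers (lookup, did_you_mean) and the plain dictionaries standing in for titles and redirects are hypothetical, not the actual dictionaries or the Wikipedia API used in the system.

    # Illustrative sketch of the fallback chains in Section 3 (candidate
    # expansion and KB mapping). All helper callables and dictionaries are
    # assumed stand-ins for the real resources.
    import re
    from typing import Callable, Dict, List, Optional

    def expand_mention(mention: str) -> List[str]:
        """Apply the expansion heuristics in turn: drop '(...)', drop a leading 'the'."""
        variants = [mention]
        no_parens = re.sub(r"\s*\([^)]*\)", "", mention).strip()
        if no_parens != mention:
            variants.append(no_parens)
        no_the = re.sub(r"^the\s+", "", mention, flags=re.IGNORECASE)
        if no_the != mention:
            variants.append(no_the)
        return variants

    def generate_candidates(mention: str,
                            lookup: Callable[[str], List[str]],
                            did_you_mean: Optional[Callable[[str], List[str]]] = None
                            ) -> List[str]:
        # Dictionary lookup first; optional DYM fallback after the mention
        # itself and after each heuristic variant.
        for variant in expand_mention(mention):
            candidates = lookup(variant)
            if candidates:
                return candidates
            if did_you_mean:
                candidates = did_you_mean(variant)
                if candidates:
                    return candidates
        return []

    def map_to_kb(article: str,
                  kb_titles: Dict[str, str],
                  redirects: Dict[str, str]) -> str:
        # Three-step KB mapping: exact title, redirect target, expansion heuristics.
        if article in kb_titles:
            return kb_titles[article]
        if article in redirects and redirects[article] in kb_titles:
            return kb_titles[redirects[article]]
        for variant in expand_mention(article):
            if variant in kb_titles:
                return kb_titles[variant]
        return "NIL"  # Not in KB

In the actual system, these lookups are backed by the Wikipedia and Google dictionaries and, in one run, by the online DYM API.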
4 Classification of features

In this section, we give a brief overview of the features we used in our entity linking system. We divide them into three types: (i) features solely considering the mention, (ii) features taking the context into account, and (iii) another contextual feature doing a global disambiguation.

Figure 2: Building description text for George Washington using text from its article and articles that refer to that page (excerpts from the George Washington, Washington D.C., and John Adams articles).

4.1 Mention

These features take the mention and a list of entity candidates as input.

Dictionary-based. This feature computes a score for any tuple (m, e) derived from the frequencies in a dictionary that includes target statistics for each mention. Two different types of dictionaries from distinct sources are used:

• Wikipedia (Chang et al., 2010). A Wikipedia-based dictionary that lists all mentions with a frequency distribution of their targets. It is generated by adding all categories, redirects, and all links in all Wikipedia articles of one language version. Statistical data slightly differs depending on the timestamp of the Wikipedia version. Probabilities may change over time since Wikipedia articles are under constant revision, e.g. they are added, split, merged, or removed. Hence, different Wikipedia versions may return different frequency distributions. In our system we use the two previously mentioned Wikipedia versions, one from 2008 and one from 2011.

• Google (Spitkovsky and Chang, 2012). This resource collects information from Google search logs and therefore is able to provide statistics from a large crowd of users. It also includes information about Wikipedia inter-language links, and types (e.g. [...]

[...] Similarity³ and apply Levenshtein and Jaro-Winkler distance.

4.2 Context

A common approach in Word Sense Disambiguation is to use the context of the mention to identify the correct sense. For instance, in the Lesk algorithm (Lesk, 1986), words in the context of the mention are compared to words in the description of each entity given by its Wikipedia article. We use a similar approach, also used by Han and Sun (2011), where context words are weighted using their frequency in the description, smoothed by their n-gram frequency (Jelinek and Mercer, 1980). That way, highly frequent words have a lower influence than uncommon words, similar to tf.idf weighting (Salton and Buckley, 1988). We use the Google Web 1T corpus (Brants and Franz, 2006) for frequency counts.

As an alternative to the use of the description in the corresponding Wikipedia article, we extract [...]