Transfer Learning Based Cross-Lingual Knowledge Extraction for Wikipedia

Zhigang Wang†, Zhixing Li†, Juanzi Li†, Jie Tang†, and Jeff Z. Pan‡
† Tsinghua National Laboratory for Information Science and Technology, DCST, Tsinghua University, Beijing, China
{wzhigang, zhxli, ljz, tangjie}@keg.cs.tsinghua.edu.cn
‡ Department of Computing Science, University of Aberdeen, Aberdeen, UK
[email protected]

Abstract

Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. However, infobox information is very incomplete and imbalanced among the Wikipedias in different languages. It is a promising but challenging problem to utilize the rich structured knowledge from a source language Wikipedia to help complete the missing infoboxes for a target language.

In this paper, we formulate the problem of cross-lingual knowledge extraction from multilingual Wikipedia sources, and present a novel framework, called WikiCiKE, to solve this problem. An instance-based transfer learning method is utilized to overcome the problems of topic drift and translation errors. Our experimental results demonstrate that WikiCiKE outperforms the monolingual knowledge extraction method and the translation-based method.

1 Introduction

In recent years, automatic knowledge extraction using Wikipedia has attracted significant interest in research fields such as the semantic web. As a valuable source of structured knowledge, Wikipedia infoboxes have been utilized to build linked open data (Suchanek et al., 2007; Bollacker et al., 2008; Bizer et al., 2008; Bizer et al., 2009), support next-generation information retrieval (Hotho et al., 2006), improve question answering (Bouma et al., 2008; Ferrández et al., 2009), and support other aspects of data exploitation (McIlraith et al., 2001; Volkel et al., 2006; Hogan et al., 2011) using semantic web standards such as RDF (Pan and Horrocks, 2007; Heino and Pan, 2012) and OWL (Pan and Horrocks, 2006; Pan and Thomas, 2007; Fokoue et al., 2012), and their reasoning services.

However, most infoboxes in the different Wikipedia language versions are missing. Figure 1 shows the statistics of article numbers and infobox information for six major Wikipedias. Only 32.82% of the articles have infoboxes on average, and the numbers of infoboxes for these Wikipedias vary significantly. For instance, the English Wikipedia has 13 times more infoboxes than the Chinese Wikipedia and 3.5 times more infoboxes than the second largest Wikipedia, the German one.

[Figure 1: Statistics for Six Major Wikipedias. Bar chart: number of instances (articles vs. infoboxes, in millions) for English, German, French, Dutch, Spanish and Chinese.]

To solve this problem, KYLIN has been proposed to extract the missing infoboxes from unstructured article texts for the English Wikipedia (Wu and Weld, 2007). KYLIN performs well when sufficient training data are available, and techniques such as shrinkage and retraining have been used to increase recall on English Wikipedia's long tail of sparse infobox classes (Weld et al., 2008; Wu et al., 2008). Nevertheless, the extraction performance of KYLIN is limited by the number of available training samples.

Due to the great imbalance between different Wikipedia language versions, it is difficult to gather sufficient training data from a single Wikipedia.

Some translation-based cross-lingual knowledge extraction methods have been proposed (Adar et al., 2009; Bouma et al., 2009; Adafre and de Rijke, 2006). These methods concentrate on translating existing infoboxes from a richer source language version of Wikipedia into the target language. The recall of new target infoboxes is therefore highly limited by the number of equivalent cross-lingual articles and the number of existing source infoboxes. Take Chinese-English[1] Wikipedias as an example: current translation-based methods only work for 87,603 Chinese Wikipedia articles, 20.43% of the total of 428,777. Hence, the challenge remains: how can we supplement the missing infoboxes for the remaining 79.57% of articles?

[1] Chinese-English denotes the task of Chinese Wikipedia infobox completion using English Wikipedia.

On the other hand, the numbers of existing infobox attributes in different languages are highly imbalanced. Table 1 compares the numbers of articles containing each attribute of the template PERSON in English and Chinese Wikipedia. Extracting the missing values of attributes such as awards, weight, influences and style inside the single Chinese Wikipedia is intractable due to the rarity of existing Chinese attribute-value pairs.

  Attribute     en      zh      Attribute     en     zh
  name          82,099  1,486   awards        2,310  38
  birth date    77,850  1,481   weight          480  12
  occupation    66,768  1,279   influences      450   6
  nationality   20,048    730   style           127   1

Table 1: The numbers of articles containing each attribute of template PERSON in English (en) and Chinese (zh).

In this paper, we have the following hypothesis: one can use the rich English (auxiliary) information to assist the Chinese (target) infobox extraction. In general, we address the problem of cross-lingual knowledge extraction by exploiting the imbalance between Wikipedias of different languages. For each attribute, we aim to learn an extractor that finds the missing value in the unstructured article texts of the target Wikipedia by using the rich information in the source language. Specifically, we treat this cross-lingual information extraction task as a transfer learning-based binary classification problem.

The contributions of this paper are as follows:

1. We propose a transfer learning-based cross-lingual knowledge extraction framework called WikiCiKE. The extraction performance for the target Wikipedia is improved by using the rich infoboxes and textual information of the source language.

2. We propose a TrAdaBoost-based extractor training method to avoid the problems of topic drift and translation errors of the source Wikipedia. Meanwhile, some language-independent features are introduced to make WikiCiKE as general as possible.

3. Chinese-English experiments for four typical attributes demonstrate that WikiCiKE outperforms both the monolingual extraction method and the current translation-based method. Increases of 12.65% in precision and 12.47% in recall are achieved for the template PERSON when only 30 target training articles are available.

The rest of this paper is organized as follows. Section 2 presents basic concepts, the problem formalization and an overview of WikiCiKE. In Section 3, we propose our detailed approaches. We present our experiments in Section 4. Related work is described in Section 5. We conclude our work and discuss future work in Section 6.

2 Preliminaries

In this section, we introduce some basic concepts regarding Wikipedia, formally define the key problem of cross-lingual knowledge extraction, and provide an overview of the WikiCiKE framework.

2.1 Wiki Knowledge Base and Wiki Article

We consider each language version of Wikipedia as a wiki knowledge base, which can be represented as $K = \{a_i\}_{i=1}^{p}$, where $a_i$ is a disambiguated article in $K$ and $p$ is the size of $K$.

Formally, we define a wiki article $a \in K$ as a 5-tuple $a = (title, text, ib, tp, C)$, where

• $title$ denotes the title of the article $a$,
• $text$ denotes the unstructured text description of the article $a$,
• $ib$ is the infobox associated with $a$; specifically, $ib = \{(attr_i, value_i)\}_{i=1}^{q}$ represents the list of attribute-value pairs of the article $a$,
• $tp = \{attr_i\}_{i=1}^{r}$ is the infobox template associated with $ib$, where $r$ is the number of attributes of one specific template, and
• $C$ denotes the set of categories to which the article $a$ belongs.

Figure 2 gives an example of these five important elements for the article named "Bill Gates"; a data-structure sketch of the same 5-tuple follows this subsection.

[Figure 2: Simplified Article of "Bill Gates".]

In what follows, we will use named subscripts, such as $a_{Bill\ Gates}$, or index subscripts, such as $a_i$, to refer to one particular instance interchangeably.
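
As a concrete (and purely illustrative) rendering of this 5-tuple, the sketch below expresses it as a small Python data structure. All field names and sample values are our assumptions; the paper does not prescribe an implementation.

```python
# A minimal sketch of the wiki article 5-tuple a = (title, text, ib, tp, C).
from dataclasses import dataclass, field

@dataclass
class WikiArticle:
    title: str                                          # title of the article
    text: str                                           # unstructured text description
    ib: dict[str, str] = field(default_factory=dict)    # infobox: attribute -> value
    tp: list[str] = field(default_factory=list)         # template: attribute names
    categories: set[str] = field(default_factory=set)   # category set C

# A wiki knowledge base K is then simply a collection of p such articles.
K = [
    WikiArticle(
        title="Bill Gates",
        text="William Henry Gates III is an American business magnate ...",
        ib={"name": "Bill Gates", "birth_date": "October 28, 1955"},
        tp=["name", "birth_date", "occupation", "nationality"],
        categories={"Microsoft", "American billionaires"},
    ),
]
```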

2.2 Cross-lingual Knowledge Extraction

Given an attribute $attr_T$ and instances (words/tokens) $x_i$, let $X_S = \{x_i\}_{i=1}^{n}$ and $X_T = \{x_i\}_{i=n+1}^{n+m}$ be the sets of instances in the source and the target language, respectively. Each $x_i$ can be represented as a feature vector according to its context. Usually we have $n \gg m$ in our setting, with many more attribute instances in the source than in the target. The function $g: X \rightarrow Y$ maps an instance from $X = X_S \cup X_T$ to its true label in $Y = \{0, 1\}$, where 1 represents the extraction target (positive) and 0 denotes background information (negative). Because the number of target instances $m$ is inadequate to train a good classifier, we combine the source and target instances to construct the training data set $TD = TD_S \cup TD_T$, where $TD_S = \{x_i, g(x_i)\}_{i=1}^{n}$ and $TD_T = \{x_i, g(x_i)\}_{i=n+1}^{n+m}$ represent the source and target training data, respectively.

Given the combined training data set $TD$, our objective is to estimate a hypothesis $f: X \rightarrow Y$ that minimizes the prediction error on testing data in the target language. Our idea is to determine the useful part of $TD_S$ and use it to improve the classification performance on $TD_T$. We view this as a transfer learning problem.

2.3 WikiCiKE Framework

WikiCiKE learns an extractor for a given attribute $attr_T$ in the target Wikipedia, as sketched below.
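
The contributions above name TrAdaBoost as the extractor training method. The following is a schematic sketch of TrAdaBoost-style instance weighting applied to the binary formulation of Section 2.2; it assumes scikit-learn decision stumps as weak learners, and none of the names or parameter choices come from the paper.

```python
# Schematic TrAdaBoost-style training: boost on source + target data while
# down-weighting source instances that keep disagreeing with the target labels.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost(X_src, y_src, X_tgt, y_tgt, n_rounds=20):
    n, m = len(X_src), len(X_tgt)
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    w = np.ones(n + m)                          # one weight per training instance
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / n_rounds))
    learners, betas = [], []
    for _ in range(n_rounds):
        p = w / w.sum()
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=p)
        err = np.abs(h.predict(X) - y)          # 0/1 loss per instance
        # weighted error measured on the *target* portion only
        eps = np.clip(np.dot(w[n:], err[n:]) / w[n:].sum(), 1e-10, 0.49)
        beta_t = eps / (1.0 - eps)
        # asymmetric update: misclassified source instances lose weight
        # (suspected topic drift / translation noise), while misclassified
        # target instances gain weight, as in plain AdaBoost
        w[:n] *= beta_src ** err[:n]
        w[n:] *= beta_t ** -err[n:]
        learners.append(h)
        betas.append(beta_t)

    def predict(X_new):
        # weighted vote over the later half of the boosting rounds
        half = len(learners) // 2
        score = sum(-np.log(b) * h.predict(X_new)
                    for h, b in zip(learners[half:], betas[half:]))
        thresh = sum(-0.5 * np.log(b) for b in betas[half:])
        return (score >= thresh).astype(int)

    return predict
```

The asymmetric update is the point of using TrAdaBoost here: source-language instances that repeatedly disagree with the target distribution are gradually suppressed rather than allowed to dominate the target-language extractor.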

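As a final illustrative step (again hypothetical, reusing the sketches above; featurize stands in for whatever context features are chosen), the trained per-attribute classifier can be run over the tokens of a target-language article to recover a missing value:

```python
# Hypothetical glue: keep the tokens labeled positive as the attribute value.
def extract_value(predict, tokens, featurize):
    X_new = np.array([featurize(tok, tokens) for tok in tokens])
    return [tok for tok, lab in zip(tokens, predict(X_new)) if lab == 1]

# predict = tradaboost(X_src, y_src, X_tgt, y_tgt)
# value_tokens = extract_value(predict, article_tokens, featurize)
```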