Improving Knowledge Base Construction from Robust Infobox Extraction

Boya Peng*, Yejin Huh, Xiao Ling, Michele Banko*
Apple Inc., 1 Apple Park Way, Cupertino, CA, USA ({yejin.huh,xiaoling}@apple.com)
Sentropy Technologies ({emma,mbanko}@sentropy.io)
* Work performed at Apple Inc.

Proceedings of NAACL-HLT 2019, pages 138–148, Minneapolis, Minnesota, June 2 - June 7, 2019. © 2019 Association for Computational Linguistics.

Abstract

A capable, automatic Question Answering (QA) system can provide more complete and accurate answers using a comprehensive knowledge base (KB). One important approach to constructing a comprehensive knowledge base is to extract information from Wikipedia infobox tables to populate an existing KB. Despite previous successes in the Infobox Extraction (IBE) problem (e.g., DBpedia), three major challenges remain: 1) Deterministic extraction patterns used in DBpedia are vulnerable to template changes; 2) Over-trusting Wikipedia anchor links can lead to entity disambiguation errors; 3) Heuristic-based extraction of unlinkable entities yields low precision, hurting both accuracy and completeness of the final KB. This paper presents a robust approach that tackles all three challenges. We build probabilistic models to predict relations between entity mentions directly from the infobox tables in HTML. The entity mentions are linked to identifiers in an existing KB if possible. The unlinkable ones are also parsed and preserved in the final output. Training data for both the relation extraction and the entity linking models are automatically generated using distant supervision. We demonstrate the empirical effectiveness of the proposed method in both precision and recall compared to a strong IBE baseline, DBpedia, with an absolute improvement of 41.3% in average F1. We also show that our extraction makes the final KB significantly more complete, improving the completeness score of list-value relation types by 61.4%.

1 Introduction

Most existing knowledge bases (KBs) are largely incomplete. This can be seen in Wikidata (Vrandečić and Krötzsch, 2014), which is a widely used knowledge graph created largely by human editors. Only 46% of person entities in Wikidata have birth places available (as of June 2018). An estimated 584 million facts are maintained in Wikipedia but not in Wikidata (Hellmann, 2018). A downstream application such as Question Answering (QA) will suffer from this incompleteness and fail to answer certain questions, or even provide an incorrect answer, especially for a question about a list of entities, due to a closed-world assumption. Previous work on enriching and growing existing knowledge bases includes relation extraction on natural language text (Wu and Weld, 2007; Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012; Koch et al., 2014), knowledge base reasoning from existing facts (Lao et al., 2011; Guu et al., 2015; Das et al., 2017), and many others (Dong et al., 2014).

Wikipedia (https://wikipedia.org) has been one of the key resources used for knowledge base construction. In many Wikipedia pages, a summary table of the subject, called an infobox table, is presented in the top right region of the page (see the leftmost table in Figure 1 for the infobox table of The_Beatles). Infobox tables offer a unique opportunity for extracting information and populating a knowledge base. An infobox table is structurally formatted as an HTML table, and therefore it is often not necessary to parse the text into a syntactic parse tree as in natural language extraction. Intra-Wikipedia anchor links are prevalent in infobox tables, often providing unambiguous references to entities. Most importantly, a significant amount of the information represented in infobox tables is not otherwise available in a more traditional structured knowledge base, such as Wikidata.
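The HTML structure described above is straightforward to read with a standard parser. As a rough, illustrative sketch (not the paper's code), the snippet below uses BeautifulSoup on a toy infobox fragment to recover attribute-value rows together with any anchor links in the value cell; the markup, class name, and function are simplified assumptions.

```python
# Illustrative sketch (not from the paper): reading attribute-value pairs
# out of a rendered infobox HTML table with BeautifulSoup.
from bs4 import BeautifulSoup

HTML = """
<table class="infobox">
  <tr><th>Origin</th><td><a href="/wiki/Liverpool">Liverpool</a>, England</td></tr>
  <tr><th>Years active</th><td>1960–1970</td></tr>
</table>
"""

def parse_infobox(html):
    """Return a list of (attribute, value text, anchor hrefs) rows."""
    soup = BeautifulSoup(html, "html.parser")
    table = soup.find("table", class_="infobox")
    rows = []
    for tr in table.find_all("tr"):
        th, td = tr.find("th"), tr.find("td")
        if th is None or td is None:  # skip header or image-only rows
            continue
        attribute = th.get_text(" ", strip=True)
        value = td.get_text(" ", strip=True)
        anchors = [a.get("href") for a in td.find_all("a")]
        rows.append((attribute, value, anchors))
    return rows

print(parse_infobox(HTML))
# [('Origin', 'Liverpool , England', ['/wiki/Liverpool']),
#  ('Years active', '1960–1970', [])]
```

Later stages would turn each row into candidate relation mentions, using the anchor links as one (but not the only) signal for entity linking.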
We are not the first to use infobox tables for knowledge base completion. The pioneering work of DBpedia (Auer et al., 2007; Lehmann et al., 2015) [2] extracts canonical knowledge triples (subject, relation type, object) from infobox tables with a manually created mapping from Mediawiki [3] templates to relation types. Despite the success of the DBpedia project, three major challenges remain. First, deterministic mappings are sensitive to template changes. If Wikipedia modifies an infobox template (e.g., the attribute "birthDate" is renamed to "dateOfBirth"), the DBpedia mappings need to be manually updated. Secondly, while Wikipedia anchor links facilitate disambiguation of string values in the infobox tables, blindly trusting the anchor links can cause errors. For instance, both "Sasha" and "Malia," children of Barack Obama, are linked to a section of the Wikipedia page of Barack_Obama rather than their own pages. Finally, little attention has been paid to the extraction of unlinkable entities. For example, Larry King has married seven women, only one of whom can be linked to a Wikipedia page. A knowledge base without the information of the other six entities will provide an incorrect answer to the question "How many women has Larry King married?"

In this paper, we present a system, RIBE, that tackles all three challenges: 1) We build probabilistic models to predict relations and object entity links. The learned models are more robust to changes in the underlying data representation than manually maintained mappings. 2) We incorporate the information from HTML anchors and build an entity linking system to link string values to entities, rather than fully relying on the anchor links. 3) We produce high-quality extractions even when the objects are unlinkable, which improves the completeness of the final knowledge base.

We demonstrate that the proposed method is effective in extracting over 50 relation types. Compared to a strong IBE baseline, DBpedia, our extractions achieve significant improvements in both precision and recall, with a 41.3% increase in average F1 score. We also show that the extracted triples add great value to an existing knowledge base, Wikidata, improving the average recall of list-value relation types by 61.4%.

To summarize, our contributions are three-fold:

• RIBE produces high-quality extractions, achieving higher precision and recall compared to DBpedia.
• Our extractions make Wikidata more complete by adding a significant number of triples for 51 relation types.
• RIBE extracts relations with unlinkable entities, which are crucial for the completeness of list-value relation types and the question answering capability of a knowledge base.

2 Related Work

Auer and Lehmann (2007) proposed to extract information from infobox tables by pattern matching against Mediawiki templates, parsing the values, and transforming them into RDF triples. However, the relation types of the triples remain lexical and can be ambiguous. Lehmann et al. (2015) introduced an ontology to reduce the ambiguity in relation types. A mapping from infobox attributes to the ontology is manually created [4], which is brittle to template changes. In contrast, RIBE is much more robust: it trains statistical models with distant supervision from Wikidata to automatically learn the mapping from infobox attributes to Wikidata relation types. RIBE properly parses infobox values into separate object mentions, instead of relying on existing Mediawiki boundaries as in (Auer and Lehmann, 2007; Lehmann et al., 2015). The RIBE entity linker learns to make entity link predictions rather than directly translating anchor links (Auer and Lehmann, 2007; Lehmann et al., 2015), which is vulnerable to human edit errors. While there is other relevant work, due to space constraints, it is discussed throughout the paper and in the appendices as appropriate.

[2] Throughout the paper, we use "DBpedia" to refer to its infobox extraction component rather than the DBpedia knowledge base unless specified. We use Wikidata as the baseline knowledge base for our experiments since it is updated more frequently; the last release of the DBpedia knowledge base was in 2016.
[3] Mediawiki is a markup language that defines the page layout and presentation of Wikipedia pages.
[4] DBpedia Mappings Wiki at http://mappings.dbpedia.org.
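To make the template-change brittleness concrete, here is a minimal sketch, under our own made-up attribute and relation names, of a DBpedia-style deterministic mapping: when an infobox attribute is renamed, the hard-coded table silently stops emitting the triple until someone updates it by hand, which is the failure mode a learned attribute-to-relation model is meant to avoid.

```python
# A DBpedia-style hard-coded mapping from infobox attributes to relation
# types (illustrative only; attribute and relation names are made up).
ATTRIBUTE_TO_RELATION = {
    "birthDate": "date_of_birth",
    "birthPlace": "place_of_birth",
    "spouse": "spouse",
}

def extract_triples(subject, infobox_rows):
    """Emit (subject, relation, value) triples for mapped attributes only."""
    triples = []
    for attribute, value in infobox_rows:
        relation = ATTRIBUTE_TO_RELATION.get(attribute)
        if relation is not None:
            triples.append((subject, relation, value))
    return triples

# Before a template change: the mapping covers the attribute.
print(extract_triples("Barack_Obama", [("birthDate", "1961-08-04")]))
# [('Barack_Obama', 'date_of_birth', '1961-08-04')]

# After the template renames "birthDate" to "dateOfBirth": the fact is
# silently dropped until someone edits ATTRIBUTE_TO_RELATION by hand.
print(extract_triples("Barack_Obama", [("dateOfBirth", "1961-08-04")]))
# []
```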
The build an entity linking system to link string values RIBE entity linker learns to make entity link pre- to entities rather than fully relying on the anchor dictions rather than a direct translation of anchor links. 3) We produce high-quality extractions even links (Auer and Lehmann, 2007; Lehmann et al., when the objects are unlinkable, which improves 2015) vulnerable to human edit errors. While the completeness of the final knowledge base. there is other relevant work, due to space con- We demonstrate that the proposed method is ef- straints, it is discussed throughout the paper and fective in extracting over 50 relation types. Com- in the appendices as appropriate. pared to a strong IBE baseline, DBpedia, our extractions achieve significant improvements on 3 Problem Definition both precision and recall, with a 41:3% increase in average F1 score. We also show that the extracted We define a relation as a triple (e1; r; e2) or triples add a great value to an existing knowledge r(e1; e2), where r is the relation type and e1, e are the subject and the object entities respec- 2 2 Throughout the paper, we use “DBpedia” to refer to 5 its infobox extraction component rather than the DBpedia tively . We define an entity mention m as the knowledge base unless specified. We use Wikidata as the surface form of an entity, and a relation mention baseline knowledge base for our experiments since it is up- dated more frequently. The last release DBpedia knowledge 4DBpedia Mappings Wiki at http://mappings. base was in 2016. dbpedia.org . 3 5 Mediawiki is a markup language that defines the page e2 can also be a literal value such as a number or a time layout and presentation of Wikipedia pages. expression unless specified. 139 as a pair of entity mentions (m1; m2) for a rela- 4.1.2 Feature Generation tion type r. We denote the set of infobox tables For each relation mention, we generate features by T = ft1; t2; :::; tng where ti appears in the similar to the ones from Mintz et al.(2009), with 6 Wikipedia page of the entity ei . The Infobox some modifications. Since the subject of a rela- Extraction problem studied in this paper aims at tion mention is outside the infobox, most gener- extracting a set of relation triples r(e1; e2) from ated features focus on the object (e.g., the word an input of Wikipedia infobox tables T . shape of the object mention), and its context (e.g., the infobox attribute value). Table6 in Appendix 4 System Overview A lists the complete set of features. We describe the RIBE system in this section. 4.1.3 Distant Supervision Wikidata (Vrandeciˇ c´ and Krötzsch, 2014) is em- ployed as the base external KB. We draw distant Instead of manually collecting training examples, supervision from it by matching candidate men- we use distant supervision (Craven and Kumlien, tions against Wikidata relation triples for training 1999; Bunescu and Mooney, 2007; Mintz et al., statistical models.