DBpedia Domains: augmenting DBpedia with domain information

Gregor Titze, Volha Bryl, Cäcilia Zirn, Simone Paolo Ponzetto
Research Group Data and Web Science
University of Mannheim, Germany
fi[email protected]

Abstract

We present an approach for augmenting DBpedia, a very large ontology lying at the heart of the Linked Open Data (LOD) cloud, with domain information. Our approach uses the thematic labels provided for DBpedia entities by Wikipedia categories, and groups them based on a kernel-based k-means clustering algorithm. Experiments on gold-standard data show that our approach provides a first solution to the automatic annotation of DBpedia entities with domain labels, thus providing the largest LOD domain-annotated ontology to date.

Keywords: DBpedia, Wikipedia, domain information

1. Introduction

Recent years have seen a great deal of work on the automatic acquisition of machine-readable knowledge from a wide range of different resources, ranging from the Web (Carlson et al., 2010) all the way to collaboratively-constructed resources used either alone (Bizer et al., 2009b; Ponzetto and Strube, 2011; Nastase and Strube, 2012), or complemented with information from manually-assembled knowledge sources (Navigli and Ponzetto, 2012; Gurevych et al., 2012; Hoffart et al., 2013). The availability of large amounts of high-quality machine-readable knowledge, in turn, has led directly to a resurgence of knowledge-rich methods in Artificial Intelligence and Natural Language Processing (Hovy et al., 2013).

All in all, we take this as very good news, since this research trend clearly demonstrates that wide-coverage knowledge bases like DBpedia (Bizer et al., 2009b), YAGO (Hoffart et al., 2013) or BabelNet (Navigli and Ponzetto, 2012) have the clear potential to yield the next generation of intelligent systems. DBpedia, for instance, has been successfully used for a variety of high-end complex intelligent tasks such as open-domain question answering (Ferrucci et al., 2010), topic labeling (Hulpus et al., 2013), web search result clustering (Schuhmacher and Ponzetto, 2013), and open-domain data mining (Paulheim and Fürnkranz, 2012).

However, much still remains to be done to further improve the quality of existing wide-coverage knowledge bases, as well as to extend them with new information. For instance, a kind of information currently missing from any of DBpedia, YAGO or BabelNet is the notion of "domain", namely the fact that concepts and entities in these knowledge bases belong to a set of broad thematic areas, or topics, that they are mostly focused on. For instance, information about Barack Obama is mostly about U.S. POLITICS (which, in turn, is a sub-domain of POLITICS), whereas Angela Merkel is mostly focused around GERMAN POLITICS (again, a specialization of the more general topic of POLITICS) [1]. Domain information, in turn, could provide a middle level of abstraction between Wikipedia's concepts and their highly fine-grained categories (e.g., OBAMA being classified as AFRICAN-AMERICAN UNITED STATES PRESIDENTIAL CANDIDATES). In addition, information about domains, such as for instance the one encoded in WordNet Domains (Bentivogli et al., 2004), has been previously shown to benefit important semantic tasks like, for instance, ontology matching (Lin and Sandkuhl, 2008).

In this work we tackle these issues by focusing on the task of discovering the domains of DBpedia entities [2]. We formulate domain typing as the problem of clustering the Wikipedia categories associated with each DBpedia entity in a meaningful way, and matching the clusters with the collaboratively-defined domains used to thematically categorize Wikipedia's featured articles [3]. Our results indicate that our method is able to provide a first preliminary solution to the problem of automatically discovering domain types for large amounts of entities in the Linked Open Data cloud.

[1] Note that more than one label can apply to a specific entity – e.g., people like Arnold Schwarzenegger or Ronald Reagan could be associated with both U.S. POLITICS and MOVIES.
[2] Hereafter, we use concepts and entities interchangeably to refer to the core resources of the underlying knowledge base, i.e., DBpedia URIs.
[3] Namely, Wikipedia's community-deemed best articles, available at http://en.wikipedia.org/wiki/Wikipedia:Featured_articles

2. Augmenting DBpedia with domains

We present our method to automatically type DBpedia concepts with domain information. Our approach is based on three main steps:

1. Initially, we collect as topic seeds the information encoded within the categories of the Wikipedia pages associated with DBpedia concepts, which typically capture very highly-specialized topics. For instance, Barack Obama is associated with many such fine-grained topics (categories) like, among others, the following ones:

   a. Politicians from Chicago, Illinois
   b. African American United States Senators
   c. Democratic Party Presidents of the United States
   d. American Legal Scholars
   e. University of Chicago Law School faculty

2. In the next phase, we cluster these categories in order to automatically create resource-specific domains. In our example above, we would like to group together categories (a-c) to capture the notion of (U.S.) POLITICS. Similarly, categories (d-e) identify entities and concepts within the domain of LAW.

3. In the final phase, we label the clusters by means of a simple, yet effective method which exploits Wikipedia's category tree in order to collect the generalizations – i.e., super-categories – of each cluster member. For instance, categories (a-c) all have POLITICS OF THE UNITED STATES as a super-category, which is accordingly set as (one of) the cluster's main labels.

In the following, we first briefly introduce DBpedia, the resource used in our methodology, and then move on to explain each phase in detail.

2.1. DBpedia

DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web as a full-fledged ontology. The key idea behind DBpedia is to parse infoboxes, namely property-summarizing tables found within Wikipedia pages, in order to automatically acquire properties and relations about a large amount of entities. These are further embedded within an ontology based on Semantic Web formalisms such as: i) representing data on the basis of the best practices of linked data (Bizer et al., 2009a); ii) encoding semantic relations using the Resource Description Framework (RDF), a generic graph-based data model for describing objects and their relationships. Crucial to our method is the fact that each DBpedia entity is associated with a corresponding Wikipedia page from which the RDF triples describing it were extracted. For instance, the entity dbp:Barack_Obama corresponds to the entity described by the Wikipedia page http://en.wikipedia.org/wiki/Barack_Obama. Wikipedia categories, which provide a fine-grained thematic categorization of the pages – albeit not taxonomic in nature (Ponzetto and Strube, 2011) – are then included in DBpedia by means of RDF triples expressing dcterms:subject relations such as the following one:

dbp:Barack_Obama dcterms:subject cat:United_States_Senators_from_Illinois
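As an illustration of how such category assignments can be collected in practice, the following minimal sketch queries the public DBpedia SPARQL endpoint for the dcterms:subject triples of an entity. The endpoint URL and the use of the SPARQLWrapper library are our assumptions here; the paper does not prescribe a specific access method.

```python
# Sketch (not from the paper): retrieve the Wikipedia categories of a
# DBpedia entity via its dcterms:subject triples, assuming the public
# DBpedia SPARQL endpoint and the SPARQLWrapper library.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dcterms: <http://purl.org/dc/terms/>
    SELECT ?category WHERE {
        <http://dbpedia.org/resource/Barack_Obama> dcterms:subject ?category .
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for binding in results["results"]["bindings"]:
    # e.g. http://dbpedia.org/resource/Category:United_States_Senators_from_Illinois
    print(binding["category"]["value"])
```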
2.2. Clustering Wikipedia categories

Given the topic seeds collected in the first phase, we would like to have all categories which are primarily about football in one cluster, those about politics in another one, and so on. We tackle the task of grouping Wikipedia categories by means of statistical clustering techniques. This has several advantages over rule-based, deterministic approaches like, for instance, grouping based on shared subsumption relations in the Wikipedia category tree. First, it works with as little information as that provided by the category labels – e.g., it can also be applied to other resources which do not have any hierarchical organization of the concepts' categories. Second, machine learning algorithms allow us to include a variety of heterogeneous features to model the degree of similarity between Wikipedia categories.

In this work, we opt for kernel k-means, an algorithm which combines the simplicity of k-means with the power of kernels. The k-means algorithm partitions n observations into k clusters by assigning each data point to the cluster with the nearest mean (which is assumed to be a good prototypical description of the cluster). The best means and clusters are found using an iterative procedure in which (a) each observation is assigned to the cluster whose mean is closest to it (as given by the squared Euclidean distance); (b) each cluster's mean is then re-calculated as the centroid of its observations. The algorithm stops when convergence has been reached – i.e., the cluster assignments no longer change. Kernel k-means works in the same way as the original k-means, except that a kernel function is used when calculating the Euclidean distances (Dhillon et al., 2004).
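To make the procedure concrete, below is a minimal numpy sketch of kernel k-means over a precomputed kernel matrix. It is our illustration of the algorithm described by Dhillon et al. (2004), not the authors' implementation; it relies on the standard expansion of the squared feature-space distance in terms of kernel values.

```python
# Minimal kernel k-means sketch (our illustration, not the paper's code).
# K is an n x n precomputed kernel (Gram) matrix; k is the cluster count.
import numpy as np

def kernel_kmeans(K, k, max_iter=100, seed=0):
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=n)              # random initial assignment
    diag = np.diag(K)
    for _ in range(max_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            members = labels == c
            size = members.sum()
            if size == 0:
                continue                          # skip empty clusters
            # Squared distance to the cluster mean in feature space,
            # expanded in kernel values:
            # K(x,x) - 2/|c| * sum_j K(x,j) + 1/|c|^2 * sum_{j,l} K(j,l)
            dist[:, c] = (diag
                          - 2.0 * K[:, members].sum(axis=1) / size
                          + K[np.ix_(members, members)].sum() / size ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):    # converged: no change
            return labels
        labels = new_labels
    return labels
```

Note that only the kernel matrix is needed: the categories themselves never have to be embedded explicitly, which is exactly what makes the kernel formulation attractive here.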
In our case, a kernel function is a function K : C × C → ℝ which for all pairs of Wikipedia categories c_i, c_j ∈ C calculates their similarity as a metric in a Hilbert feature space F. An example of this is the dot product, i.e. K(c_i, c_j) = ⟨φ(c_i), φ(c_j)⟩, where φ : C → F is a fixed mapping from categories to vector representations in F. In this work, we consider the following kernel functions:

1) a simple co-occurrence kernel to capture the fact that two categories are more similar if they tend to be assigned to the same pages. This consists of a linear kernel of the form K_COOC(c_i, c_j) = φ(c_i)ᵀ φ(c_j), which represents the number of co-occurring assignments of categories c_i and c_j to an entity in DBpedia. The feature vector of each category c has the form φ(c) = (hasCat_1(c), ..., hasCat_|I|(c)) ∈ {0, 1}^|I|, where hasCat_i(c) is a Boolean function indicating the assignment of category c to DBpedia entity i ∈ I.
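The co-occurrence kernel can be instantiated directly from the formula above: build the binary category-entity matrix φ and take its Gram matrix. The following sketch is our illustration, with a hypothetical toy input dictionary and variable names of our choosing.

```python
# Sketch of the co-occurrence kernel K_COOC (our illustration, not the
# paper's code). entity_categories maps each DBpedia entity to its set of
# Wikipedia categories, e.g. as retrieved via the dcterms:subject query
# sketched in Section 2.1. The toy data here is illustrative only.
import numpy as np

entity_categories = {
    "Barack_Obama": {"Politicians_from_Chicago,_Illinois",
                     "United_States_Senators_from_Illinois"},
    "Richard_Durbin": {"United_States_Senators_from_Illinois"},
}

entities = sorted(entity_categories)                    # index set I
categories = sorted(set().union(*entity_categories.values()))
cat_row = {c: row for row, c in enumerate(categories)}

# phi is the |C| x |I| binary matrix with phi[c, i] = hasCat_i(c).
phi = np.zeros((len(categories), len(entities)), dtype=int)
for i, entity in enumerate(entities):
    for cat in entity_categories[entity]:
        phi[cat_row[cat], i] = 1

# K_COOC(c_i, c_j) = phi(c_i)' phi(c_j): the number of entities to which
# both categories are assigned. This Gram matrix can be passed directly
# to the kernel_kmeans sketch above.
K = phi @ phi.T
```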
