Entity Linking Meets Word Sense Disambiguation: a Unified Approach
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Predicting the Correct Entities in Named-Entity Linking
Separating the Signal from the Noise: Predicting the Correct Entities in Named-Entity Linking. Drew Perkins, Uppsala University, Department of Linguistics and Philology, Master Programme in Language Technology. Master's Thesis in Language Technology, 30 ECTS credits, June 9, 2020. Supervisors: Gongbo Tang, Uppsala University; Thorsten Jacobs, Seavus.
Abstract: In this study, I constructed a named-entity linking system that maps between contextual word embeddings and knowledge graph embeddings to predict correct entities. To establish a named-entity linking system, I first applied named-entity recognition to identify the entities of interest. I then performed candidate generation via locality-sensitive hashing (LSH), where a candidate group of potential entities was created for each identified entity. Afterwards, my named-entity disambiguation component was applied to select the most probable candidate. By concatenating contextual word embeddings and knowledge graph embeddings in my disambiguation component, I present a novel approach to named-entity linking. I conducted the experiments with the Kensho-Derived Wikimedia Dataset and the AIDA CoNLL-YAGO Dataset; the former dataset was used for deployment and the latter is a benchmark dataset for entity linking tasks. Three deep learning models were evaluated on the named-entity disambiguation component with different context embeddings. The evaluation was treated as a classification task, where I trained my models to select the correct entity from a list of candidates. By optimizing named-entity linking through this methodology, the entire system can be used in recommendation engines, with a high F1 score of 86% on the former dataset. With the benchmark dataset, the proposed method achieves an F1 score of 79%.
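The thesis above scores candidates by concatenating a contextual word embedding for the mention with a knowledge-graph embedding for each candidate. Below is a minimal sketch of that scoring idea only; the dimensions, random weights, and candidate IDs are illustrative assumptions, not the thesis's actual trained deep learning models.

```python
# Minimal sketch (not the thesis code): rank entity candidates by
# concatenating a contextual mention embedding with each candidate's
# knowledge-graph embedding and scoring with a tiny linear scorer.
# All dimensions, names and weights below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def score_candidates(mention_vec, candidate_vecs, W, b):
    """Return one score per candidate from the concatenated features."""
    scores = []
    for cand_vec in candidate_vecs:
        features = np.concatenate([mention_vec, cand_vec])  # [context ; KG]
        scores.append(float(features @ W + b))
    return scores

# Toy inputs: a 768-d contextual embedding and three 100-d KG embeddings.
mention_vec = rng.normal(size=768)
candidates = {
    "Q1": rng.normal(size=100),
    "Q2": rng.normal(size=100),
    "Q3": rng.normal(size=100),
}
W, b = rng.normal(size=868), 0.0  # would be learned from labelled links in practice

scores = score_candidates(mention_vec, list(candidates.values()), W, b)
best = max(zip(candidates, scores), key=lambda kv: kv[1])
print("predicted entity:", best[0])
```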
Knowledge Graphs on the Web – An Overview
Knowledge Graphs on the Web – An Overview. Nicolas Heist, Sven Hertling, Daniel Ringler, and Heiko Paulheim. Data and Web Science Group, University of Mannheim, Germany. arXiv:2003.00719v3 [cs.AI], January 2020 (v3: 12 Mar 2020).
Abstract: Knowledge Graphs are an emerging form of knowledge representation. While Google coined the term Knowledge Graph first and promoted it as a means to improve their search results, they are used in many applications today. In a knowledge graph, entities in the real world and/or a business domain (e.g., people, places, or events) are represented as nodes, which are connected by edges representing the relations between those entities. While companies such as Google, Microsoft, and Facebook have their own, non-public knowledge graphs, there is also a larger body of publicly available knowledge graphs, such as DBpedia or Wikidata. In this chapter, we provide an overview and comparison of those publicly available knowledge graphs, and give insights into their contents, size, coverage, and overlap.
Keywords: Knowledge Graph, Linked Data, Semantic Web, Profiling
1. Introduction
Knowledge Graphs are increasingly used as a means to represent knowledge. Due to their versatile means of representation, they can be used to integrate different heterogeneous data sources, both within and across organizations. [8,9] Besides such domain-specific knowledge graphs, which are typically developed for specific domains and/or use cases, there are also public, cross-domain knowledge graphs encoding common knowledge, such as DBpedia, Wikidata, or YAGO. [33] Such knowledge graphs may be used, e.g., for automatically enriching data with background knowledge to be used in knowledge-intensive downstream applications.
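As the abstract describes, a knowledge graph represents real-world entities as nodes connected by typed edges. The sketch below illustrates that idea with a tiny in-memory triple store; the entities and relations are invented for illustration and are not taken from DBpedia, Wikidata, or any other public knowledge graph.

```python
# Minimal sketch of the knowledge-graph idea described above: entities become
# nodes and typed edges represent relations between them, stored as triples.
triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Germany", "locatedIn", "Europe"),
    ("Angela_Merkel", "bornIn", "Hamburg"),
    ("Hamburg", "locatedIn", "Germany"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

print(query(predicate="locatedIn"))   # every containment relation
print(query(subject="Berlin"))        # all facts about Berlin
```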
Detecting Personal Life Events from Social Media
Open Research Online, The Open University's repository of research publications. How to cite: Dickinson, Thomas Kier (2019). Detecting Personal Life Events from Social Media. PhD thesis, The Open University. DOI: http://dx.doi.org/doi:10.21954/ou.ro.00010aa9. © 2018 The Author, https://creativecommons.org/licenses/by-nc-nd/4.0/ (Version of Record).
Detecting Personal Life Events from Social Media. A thesis presented by Thomas K. Dickinson to the Department of Science, Technology, Engineering and Mathematics in partial fulfilment of the requirements for the degree of Doctor of Philosophy in the subject of Computer Science. The Open University, Milton Keynes, England, May 2019. Thesis advisors: Professor Harith Alani and Dr Paul Mulholland.
Abstract: Social media has become a dominating force over the past 15 years, with the rise of sites such as Facebook, Instagram, and Twitter. Some of us have been with these sites since the start, posting all about our personal lives and building up a digital identity of ourselves. But within this myriad of posts, what actually matters to us, and what do our digital identities tell people about ourselves? One way that we can start to filter through this data is to build classifiers that can identify posts about our personal life events, allowing us to start to self-reflect on what we share online.
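The abstract mentions building classifiers that identify posts about personal life events. A minimal sketch of such a text classifier is shown below, assuming scikit-learn is available; the toy posts and labels are invented, and this is not the thesis's actual pipeline.

```python
# Illustrative sketch (not the thesis pipeline): classify social-media posts
# by whether they describe a personal life event, using TF-IDF features and
# logistic regression. The example posts and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "We got married yesterday!",
    "Just started my new job at the hospital",
    "Check out this recipe for banana bread",
    "Traffic is terrible this morning",
]
labels = [1, 1, 0, 0]  # 1 = personal life event, 0 = other

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["I'm so happy to announce our engagement"]))
```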
Three Birds (in the LLOD Cloud) with One Stone: BabelNet, Babelfy and the Wikipedia Bitaxonomy
Three Birds (in the LLOD Cloud) with One Stone: BabelNet, Babelfy and the Wikipedia Bitaxonomy. Tiziano Flati and Roberto Navigli, Dipartimento di Informatica, Sapienza Università di Roma.
Abstract: In this paper we present the current status of linguistic resources published as linked data and linguistic services in the LLOD cloud in our research group, namely BabelNet, Babelfy and the Wikipedia Bitaxonomy. We describe them in terms of their salient aspects and objectives and discuss the benefits that each of these potentially brings to the world of LLOD NLP-aware services. We also present public Web-based services which enable querying, exploring and exporting data into RDF format.
1 Introduction
Recent years have witnessed an upsurge in the amount of semantic information published on the Web. Indeed, the Web of Data has been increasing steadily in both volume and variety, transforming the Web into a global database in which resources are linked across sites. It is becoming increasingly critical that existing lexical resources be published as Linked Open Data (LOD), so as to foster integration, interoperability and reuse on the Semantic Web [5]. Thus, lexical resources provided in RDF format can contribute to the creation of the so-called Linguistic Linked Open Data (LLOD), a vision fostered by the Open Linguistics Working Group (OWLG), in which part of the Linked Open Data cloud is made up of interlinked linguistic resources [2]. The multilinguality aspect is key to this vision, in that it enables Natural Language Processing tasks which are not only cross-lingual, but also independent both of the language of the user input and of the linked data exploited to perform the task.
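The entry above mentions publishing lexical resources as RDF and offering services to query and export them. The sketch below illustrates that general workflow with rdflib on a tiny invented vocabulary; it does not use the actual BabelNet RDF schema or any real LLOD endpoint.

```python
# Minimal sketch of "lexical resources as RDF / Linked Data": load a tiny
# Turtle snippet and query it with SPARQL. The ex: vocabulary and triples
# are invented for illustration only.
from rdflib import Graph

turtle_data = """
@prefix ex: <http://example.org/> .
ex:bank_1 ex:lexicalForm "bank"@en ;
          ex:definition  "a financial institution"@en .
ex:bank_2 ex:lexicalForm "bank"@en ;
          ex:definition  "sloping land beside a body of water"@en .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

q = """
PREFIX ex: <http://example.org/>
SELECT ?sense ?gloss WHERE {
  ?sense ex:lexicalForm "bank"@en ;
         ex:definition ?gloss .
}
"""
for row in g.query(q):
    print(row.sense, row.gloss)   # both induced senses of "bank" with glosses
```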
Cross Lingual Entity Linking with Bilingual Topic Model
Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence. Cross Lingual Entity Linking with Bilingual Topic Model. Tao Zhang, Kang Liu and Jun Zhao, Institute of Automation, Chinese Academy of Sciences, HaiDian District, Beijing, China. {tzhang, kliu, jzhao}@nlpr.ia.ac.cn
Abstract: Cross-lingual entity linking means linking an entity mention in a background source document in one language with the corresponding real-world entity in a knowledge base written in the other language. The key problem is to measure the similarity score between the context of the entity mention and the document of the candidate entity. This paper presents a general framework for doing cross-lingual entity linking by leveraging a large-scale bilingual knowledge base, Wikipedia. We introduce a bilingual topic model that mines bilingual topics from this knowledge base, with the assumption that the Wikipedia concept documents describing the same concept in two different languages share the same semantic topic distribution.
From the introduction: […] billion web pages on the Internet, and 31% of them are written in other languages. This report tells us that the percentage of web pages written in English is decreasing, which indicates that English pages are becoming less dominant compared with before. Therefore, it becomes more and more important to maintain the growing knowledge base by discovering and extracting information from all web content written in different languages. Also, inserting new knowledge derived from an information extraction system into a knowledge base inevitably needs a system to map the entity mentions in one language to their entries in a knowledge base which may be in another language. This task is known as cross-lingual entity linking. Intuitively, to resolve the cross-lingual entity linking problem, the key is how to define the similarity score between the context of the entity mention and the document of
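The key idea described above is to score candidates by comparing the topic distribution of the mention's context with that of each candidate entity document in a shared bilingual topic space. Below is a minimal illustration of that comparison step, with made-up topic vectors rather than distributions inferred by a real bilingual topic model.

```python
# Illustrative sketch of the scoring step only: once the mention context and
# each candidate entity document live in a shared (bilingual) topic space,
# linking reduces to picking the candidate whose topic distribution is
# closest. The topic vectors and entity names below are invented.
import numpy as np

def cosine(p, q):
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

mention_context_topics = np.array([0.60, 0.25, 0.10, 0.05])  # e.g. a Chinese mention context
candidate_entity_topics = {                                   # e.g. English KB entries
    "Michael_Jordan_(basketball)":   np.array([0.70, 0.15, 0.10, 0.05]),
    "Michael_I._Jordan_(scientist)": np.array([0.05, 0.10, 0.15, 0.70]),
}

best = max(candidate_entity_topics.items(),
           key=lambda kv: cosine(mention_context_topics, kv[1]))
print("linked to:", best[0])
```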
Babelfying the Multilingual Web: State-Of-The-Art Disambiguation and Entity Linking of Web Pages in 50 Languages
Babelfying the Multilingual Web: state-of-the-art Disambiguation and Entity Linking of Web pages in 50 languages. Roberto Navigli, http://lcl.uniroma1.it. ERC Starting Grant: Multilingual Joint Word Sense Disambiguation (MultiJEDI), No. 259234; LIDER EU Project No. 610782. Joint work with Andrea Moro and Alessandro Raganato.
Reference: A. Moro, A. Raganato, R. Navigli. Entity Linking meets Word Sense Disambiguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL), 2, 2014.
Motivation: Domain-specific Web content is available in many languages. Information should be extracted and processed independently of the source/target language. This could be done automatically by means of high-performance multilingual text understanding.
BabelNet (http://babelnet.org): a wide-coverage multilingual semantic network and encyclopedic dictionary in 50 languages, combining named entities and specialized concepts from Wikipedia, concepts from WordNet, and concepts integrated from both resources.
Unsupervised, Knowledge-Free, and Interpretable Word Sense Disambiguation
Unsupervised, Knowledge-Free, and Interpretable Word Sense Disambiguation. Alexander Panchenko, Fide Marten, Eugen Ruppert, Stefano Faralli, Dmitry Ustalov, Simone Paolo Ponzetto, and Chris Biemann. Language Technology Group, Department of Informatics, Universität Hamburg, Germany; Web and Data Science Group, Department of Informatics, Universität Mannheim, Germany; Institute of Natural Sciences and Mathematics, Ural Federal University, Russia.
Abstract (excerpt): Interpretability of a predictive model is a powerful feature that gains the trust of users in the correctness of the predictions. In word sense disambiguation (WSD), knowledge-based systems tend to be much more interpretable than knowledge-free counterparts, as they rely on the wealth of manually encoded elements representing word senses, such as hypernyms, usage examples, and images. We present a WSD system that bridges the gap between these two so far disconnected groups of methods. Namely, our system, providing access to several state-of-the-art WSD models, aims to be interpretable as a knowledge-[…]
From the introduction: […] manually in one of the underlying resources, such as Wikipedia. Unsupervised knowledge-free approaches, e.g. (Di Marco and Navigli, 2013; Bartunov et al., 2016), require no manual labor, but the resulting sense representations lack the above-mentioned features enabling interpretability. For instance, systems based on sense embeddings are based on dense uninterpretable vectors. Therefore, the meaning of a sense can be interpreted only on the basis of a list of related senses. We present a system that brings interpretability of the knowledge-based sense representations into the world of unsupervised knowledge-free WSD models. The contribution of this paper is the first system for word sense induction and disambiguation which is unsupervised, knowledge-free, and interpretable at the same time.
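One common way to obtain senses without any manual annotation, in the spirit of the graph-based word sense induction referenced above, is to cluster a word's ego-network of related words; each cluster then stands for one induced sense. The sketch below illustrates that general technique on invented co-occurrence edges; it is not the authors' actual system or clustering algorithm.

```python
# Minimal sketch of unsupervised word sense induction: build the ego-network
# of words related to the target, drop the target itself, and treat each
# connected cluster as one induced sense. The toy edges are invented.
import networkx as nx

edges = [
    ("bank", "money"), ("bank", "loan"), ("money", "loan"),
    ("bank", "river"), ("bank", "shore"), ("river", "shore"),
]
G = nx.Graph(edges)

target = "bank"
ego = G.subgraph(n for n in G.neighbors(target))  # neighbours only, target removed
senses = [sorted(component) for component in nx.connected_components(ego)]
print(senses)  # e.g. [['loan', 'money'], ['river', 'shore']] -> two induced senses
```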
Ten Years of BabelNet: A Survey
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21), Survey Track. Ten Years of BabelNet: A Survey. Roberto Navigli, Michele Bevilacqua, Simone Conia (Sapienza NLP Group, Sapienza University of Rome, Italy), Dario Montagnini and Francesco Cecconi (Babelscape, Italy).
Abstract: The intelligent manipulation of symbolic knowledge has been a long-sought goal of AI. However, when it comes to Natural Language Processing (NLP), symbols have to be mapped to words and phrases, which are not only ambiguous but also language-specific: multilinguality is indeed a desirable property for NLP systems, and one which enables the generalization of tasks where multiple languages need to be dealt with, without translating text. In this paper we survey BabelNet, a popular wide-coverage lexical-semantic knowledge resource obtained by merging heterogeneous sources into a unified semantic network that helps to scale tasks and applications to hundreds of languages.
From the introduction: […] to integrate symbolic knowledge into neural architectures [d'Avila Garcez and Lamb, 2020]. The rationale is that the use of, and linkage to, symbolic knowledge can not only enable interpretable, explainable and accountable AI systems, but it can also increase the degree of generalization to rare patterns (e.g., infrequent meanings) and promote better use of information which is not explicit in the text. Symbolic knowledge requires that the link between form and meaning be made explicit, connecting strings to representations of concepts, entities and thoughts. Historical resources such as WordNet [Miller, 1995] are important endeavors which systematize symbolic knowledge about the words of a language, i.e., lexicographic knowledge, not only in a machine-readable format, but also in structured form, thanks to the organization of concepts into a semantic network.
Named Entity Extraction for Knowledge Graphs: a Literature Overview
Received December 12, 2019, accepted January 7, 2020, date of publication February 14, 2020, date of current version February 25, 2020. Digital Object Identifier 10.1109/ACCESS.2020.2973928.
Named Entity Extraction for Knowledge Graphs: A Literature Overview. Tareq Al-Moslmi, Marc Gallofré Ocaña, Andreas L. Opdahl, and Csaba Veres. Department of Information Science and Media Studies, University of Bergen, 5007 Bergen, Norway. Corresponding author: Tareq Al-Moslmi. This work was supported by the News Angler Project through the Norwegian Research Council under Project 275872.
Abstract: An enormous amount of digital information is expressed as natural-language (NL) text that is not easily processable by computers. Knowledge Graphs (KG) offer a widely used format for representing information in computer-processable form. Natural Language Processing (NLP) is therefore needed for mining (or lifting) knowledge graphs from NL texts. A central part of the problem is to extract the named entities in the text. The paper presents an overview of recent advances in this area, covering: Named Entity Recognition (NER), Named Entity Disambiguation (NED), and Named Entity Linking (NEL). We comment that many approaches to NED and NEL are based on older approaches to NER and need to leverage the outputs of state-of-the-art NER systems. There is also a need for standard methods to evaluate and compare named-entity extraction approaches. We observe that NEL has recently moved from being stepwise and isolated into an integrated process along two dimensions: the first is that previously sequential steps are now being integrated into end-to-end processes, and the second is that entities that were previously analysed in isolation are now being lifted in each other's context.
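The survey above distinguishes recognition, disambiguation, and linking, and notes the trend toward integrated end-to-end pipelines. The sketch below strings the three steps together in the simplest possible way, using spaCy for recognition (assuming the en_core_web_sm model is installed) and an invented alias table in place of a real knowledge base and disambiguation model.

```python
# Minimal sketch of a recognition -> candidate generation -> disambiguation
# pipeline. spaCy handles NER; the alias table standing in for a knowledge
# base is invented, and the "disambiguation" is a naive first-candidate pick.
import spacy

ALIASES = {  # surface form -> ranked candidate KB identifiers (illustrative)
    "Paris": ["dbr:Paris", "dbr:Paris,_Texas"],
    "France": ["dbr:France"],
}

def link_entities(text):
    nlp = spacy.load("en_core_web_sm")   # assumes this model is installed
    doc = nlp(text)
    links = []
    for ent in doc.ents:                              # step 1: recognition
        candidates = ALIASES.get(ent.text, [])        # step 2: candidate generation
        best = candidates[0] if candidates else None  # step 3: naive disambiguation
        links.append((ent.text, ent.label_, best))
    return links

print(link_entities("Paris is the capital of France."))
```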
Entity Linking
Entity Linking. Kjetil Møkkelgjerd, Master of Science in Computer Science, submission date: June 2017. Supervisor: Herindrasana Ramampiaro, IDI. Norwegian University of Science and Technology, Department of Computer Science.
Abstract: Entity linking may help to quickly supplement a reader with further information about entities within a text by providing links to a knowledge base containing more information about each entity, and may thus potentially enhance the reading experience. In this thesis, we look at existing solutions, and implement our own deterministic entity linking system based on our research, using our own approach to the problem. Our system extracts all entities within the input text, and then disambiguates each entity considering the context. The extraction step is handled by an external framework, while the disambiguation step focuses on entity similarities, where similarity is defined by how similar the entities' categories are, which we measure by using data from a structured knowledge base called DBpedia. We base this approach on the assumption that similar entities usually occur close to each other in written text; thus we select entities that appear to be similar to other nearby entities. Experiments show that our implementation is not as effective as some of the existing systems we use for reference, and that our approach has some weaknesses which should be addressed. For example, DBpedia is not as consistent a knowledge base as we would like, and the entity extraction framework often fails to reproduce the same entity set as the dataset we use for evaluation. However, our solution shows promising results in many cases.
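The approach described above scores candidates by how similar their DBpedia categories are to the categories of entities appearing nearby in the text. Below is a minimal sketch of that similarity computation using Jaccard overlap; the category sets are hand-picked examples rather than live DBpedia data, and Jaccard is one plausible choice of measure, not necessarily the thesis's exact one.

```python
# Illustrative sketch: pick the candidate whose category set overlaps most
# with the categories of nearby entities. Category sets are hand-picked
# examples, not retrieved from DBpedia.
def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

nearby_entity_categories = {"Programming_languages", "Software", "Computing"}

candidates = {
    "Python_(programming_language)": {"Programming_languages", "Software", "Cross-platform_software"},
    "Python_(genus)":                {"Pythonidae", "Snakes", "Reptiles_of_Asia"},
}

best = max(candidates.items(),
           key=lambda kv: jaccard(nearby_entity_categories, kv[1]))
print("selected:", best[0])
```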
KOI at SemEval-2018 Task 5: Building Knowledge Graph of Incidents
KOI at SemEval-2018 Task 5: Building Knowledge Graph of Incidents. Paramita Mirza (Max Planck Institute for Informatics, Germany), Fariz Darari and Rahmad Mahendra (Faculty of Computer Science, Universitas Indonesia, Indonesia).
Abstract: We present KOI (Knowledge of Incidents), a system that, given news articles as input, builds a knowledge graph (KOI-KG) of incidental events. KOI-KG can then be used to efficiently answer questions such as "How many killing incidents happened in 2017 that involve Sean?" The required steps in building the KG include: (i) document preprocessing involving word sense disambiguation, named-entity recognition, temporal expression recognition and normalization, and semantic role labeling; (ii) incidental event extraction and coreference resolution via document clustering; and (iii) KG construction and population.
Subtask S3: In order to answer questions of type (ii), participating systems are also required to identify participant roles in each identified answer incident (e.g., victim, subject-suspect), and use such information, along with victim-related numerals ("three people were killed") mentioned in the corresponding answer documents, i.e., documents that report on the answer incident, to determine the total number of victims.
Datasets: The organizers released two datasets: (i) test data, stemming from three domains of gun violence, fire disasters and business, and (ii) trial data, covering only the gun violence domain. Each dataset
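Step (ii) above groups articles that report on the same incident via document clustering. The sketch below shows one simple way to do this, clustering TF-IDF vectors with scikit-learn; the snippets, features, and cluster count are illustrative assumptions and not the KOI system's actual configuration.

```python
# Illustrative sketch (not the KOI system): group news documents that report
# on the same incident by clustering their TF-IDF vectors.
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Three people were killed in a shooting in Austin on Friday.",
    "Austin police report a Friday shooting that left three dead.",
    "A warehouse fire in Denver destroyed several businesses overnight.",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs).toarray()
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels)  # the two shooting reports should fall into the same cluster
```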
Named-Entity Recognition and Linking with a Wikipedia-Based Knowledge Base – Bachelor Thesis Presentation, Niklas Baumert
Named-Entity Recognition and Linking with a Wikipedia-based Knowledge Base Bachelor Thesis Presentation Niklas Baumert [[email protected]] 2018-11-08 Contents 1 Why and What?—Motivation. 1.1 What is named-entity recognition? 1.2 What is entity linking? 2 How?—Implementation. 2.1 Wikipedia-backed Knowledge Base. 2.2 Entity Recognizer & Linker. 3 Performance Evaluation. 4 Conclusion. 2 1 Why and What—Motivation. 3 Built to provide input files for QLever. ● Specialized use-case. ○ General application as an extra feature. ● Providing a wordsfile and a docsfile as output. ● Wordsfile is input for QLever. ● Support for Wikipedia page titles, Freebase IDs and Wikidata IDs. ○ Convert between those 3 freely afterwards. 4 The docsfile is a list of all sentences with IDs. record id text 1 Anarchism is a political philosophy that advocates [...] 2 These are often described as [...] 5 The wordsfile contains each word of each sentence and determines entities and their likeliness. word Entity? recordID score Anarchism 0 1 1 <Anarchism> 1 1 0.9727 is 0 1 1 a 0 1 1 political 0 1 1 philosophy 0 1 1 <Philosophy> 1 1 0.955 6 Named-entity recognition (NER) identifies nouns in a given text. ● Finding and categorizing entities in a given text. [1][2] ● (Proper) nouns refer to entities. ● Problem: Words are ambiguous. [1] ○ Same word, different entities within a category—”John F. Kennedy”, the president or his son? ○ Same word, different entities in different categories—”JFK”, person or airport? 7 Entity linking (EL) links entities to a database. ● Resolve ambiguity by associating mentions to entities in a knowledge base.