Entity Linking Based on Sentence Representation

Hindawi Complexity, Volume 2021, Article ID 8895742, 9 pages. https://doi.org/10.1155/2021/8895742

Research Article

Bingjing Jia (1,2), Zhongli Wu (2), Pengpeng Zhou (1), and Bin Wu (1)
(1) Beijing University of Posts and Telecommunications, Beijing 100876, China
(2) Anhui Science and Technology University, Bengbu 233000, China

Correspondence should be addressed to Bin Wu; [email protected]

Received 12 September 2020; Revised 26 October 2020; Accepted 8 January 2021; Published 19 January 2021

Academic Editor: Wei Wang

Copyright © 2021 Bingjing Jia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Entity linking involves mapping ambiguous mentions in documents to the correct entities in a given knowledge base. Most existing methods fail to link a mention that appears multiple times in a document, since the conflict among its contexts in different locations may make linking difficult. Sentence representation, which has recently been studied with deep learning approaches, can be used to resolve this issue. In this paper, an effective entity linking model is proposed to capture the semantic meaning of sentences and reduce the noise introduced by the different contexts of the same mention in a document. The model first uses the symmetry of a Siamese network to learn sentence similarity. Then, an attention mechanism is added to improve the interaction between the input sentences. To show the effectiveness of our sentence representation model combined with the attention mechanism, named ELSR, extensive experiments are conducted on two public datasets. The results illustrate that our model outperforms the baselines and achieves superior performance.

1. Introduction

With the development of big data, a large number of unstructured texts have appeared on the Internet, and almost all of the mentions in these texts are ambiguous. For example, Figure 1 gives the snippet "Spain and U.S. teams are very competitive. This year's Fed Cup finalists—defending champion Spain and the United States—will hit the road to open the 1997 women's international team competition," in which the mention "Spain" may refer to Spain, the Spain national football team, the Spanish Empire, or the Spain Fed Cup team. When the context of the mention is different, it may refer to different entities. There are thousands of entities in a knowledge base (KB), which are the basis of many research studies [1]. Therefore, KB entities are regarded as surrogates for real-world entities.

The main purpose of entity linking is to link mentions in text to the corresponding entities in a KB such as Freebase. Entity linking is beneficial to fully understanding these texts [2–4], and it can also boost the development of question answering, machine reading, and knowledge base population.

The task of entity linking is challenging because of the inherent ambiguity of natural language. Previous studies commonly rank entities based on a measure of similarity between the mention and the entity to determine the best candidate. Various types of features have been designed, including entity popularity, entity type, and entity co-occurrence. For entity popularity, Milne and Witten [5] achieved 90% accuracy on the Wikipedia test articles using the prior probability. For entity type, Nie et al. [6] used type information to improve the co-attention effect. Chen et al. and Gupta et al. [7, 8] captured latent entity type information through pretrained entity embeddings. For entity co-occurrence, graph-based methods have been explored to improve the impact of coherence. Guo and Barbosa [9] obtained indirect connections between entities through random walks on the disambiguation graph. There is useful information in these statistics and features, which makes the entity linking problem easier. In most cases, these statistics and features are provided by the KB. However, when an incomplete KB is sparse and simple, its separate entity records only have description information [10]. Therefore, the above methods may not perform well, and it would be expensive and time consuming to inject structural features and statistics into entities manually. So, only the entity description, which is the most common information in a KB, is considered in this paper.
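The popularity prior used by Milne and Witten [5] can be estimated directly from anchor statistics: the probability that a surface form links to an entity is the fraction of anchors with that surface text pointing to that entity. Below is a minimal Python sketch of this idea; the anchor counts are hypothetical numbers chosen purely for illustration, not statistics from the paper.

```python
from collections import Counter, defaultdict

def build_prior(anchor_pairs):
    """Estimate p(entity | mention) from (mention_text, target_entity) anchor pairs."""
    counts = defaultdict(Counter)
    for mention, entity in anchor_pairs:
        counts[mention][entity] += 1
    prior = {}
    for mention, entity_counts in counts.items():
        total = sum(entity_counts.values())
        prior[mention] = {e: c / total for e, c in entity_counts.items()}
    return prior

# Hypothetical anchor statistics, for illustration only.
anchors = [("Spain", "Spain")] * 80 \
        + [("Spain", "Spain_national_football_team")] * 15 \
        + [("Spain", "Spain_Fed_Cup_team")] * 5
prior = build_prior(anchors)

# Rank candidate entities for a mention by the popularity prior alone.
ranked = sorted(prior["Spain"].items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # [('Spain', 0.8), ('Spain_national_football_team', 0.15), ('Spain_Fed_Cup_team', 0.05)]
```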
Figure 1: Illustration of mentions in the snippet and their candidate entities in the knowledge base. Solid lines point to the correct target entities corresponding to the mentions and to the descriptions of these correct target entities.

In addition, when a mention appears multiple times in a document, the conflict among its contexts in different locations may make linking difficult. As shown in Figure 1, the first and second occurrences of the mention "Spain" may link to the entity "Spain" and the entity "Spain Fed Cup team," respectively. It is necessary to take information from the sentence where a mention appears.

Therefore, ELSR is proposed at the sentence level. This model uses the symmetry of a Siamese network to derive the dependencies between the mention context and the entity description, which also alleviates the issue caused by an incomplete knowledge base. The main contributions of this paper are listed as follows:

(i) A novel symmetrical neural model is proposed for entity linking, which not only captures the semantic meaning of the sentences but also reduces the noise introduced by the different contexts of the same mention in a document.

(ii) The key information is extracted only from the mention context and the entity description, which suits an incomplete knowledge base and can make up for its sparseness and simplicity. Besides, the attention mechanism can improve the interaction between the two sentences (see the sketch after this list).

(iii) To validate the performance of the model, experiments are conducted over two corpora. The results illustrate that our model is powerful in most cases.
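The attention-based interaction referred to in contribution (ii) can be pictured as a soft alignment between the encoded mention sentence and the encoded entity description. The sketch below is a minimal PyTorch illustration under assumed dimensions and a simple dot-product alignment; it is not the exact formulation used by ELSR.

```python
import torch
import torch.nn.functional as F

def attentive_interaction(ctx, desc):
    """Soft-align two encoded sentences and return attention-weighted summaries.

    ctx:  (len_c, d) token representations of the mention-context sentence
    desc: (len_d, d) token representations of the entity description
    """
    # Alignment scores between every context token and every description token.
    scores = ctx @ desc.t()                            # (len_c, len_d)
    # Each context token attends over the description, and vice versa.
    ctx_aligned = F.softmax(scores, dim=1) @ desc      # (len_c, d)
    desc_aligned = F.softmax(scores.t(), dim=1) @ ctx  # (len_d, d)
    # Pool into fixed-size sentence vectors that already reflect the interaction.
    return ctx_aligned.mean(dim=0), desc_aligned.mean(dim=0)

# Toy usage with random 128-dimensional token representations.
ctx = torch.randn(12, 128)    # e.g. the sentence containing the mention "Spain"
desc = torch.randn(30, 128)   # e.g. the candidate entity description
v_ctx, v_desc = attentive_interaction(ctx, desc)
score = F.cosine_similarity(v_ctx, v_desc, dim=0)  # similarity used to rank candidates
```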
The rest of this paper is organized as follows. Section 2 presents related work. The entity linking model is detailed in Section 3, and experimental results are described in Section 4. Section 5 gives conclusions and suggestions for future work.

2. Related Work

Entity linking is a well-studied task in KB population. It is also a key step for question answering, content analysis, and information retrieval. Early works applied extensive hand-designed features and required intensive manual effort, including prior popularity, name string similarity, Wikipedia's category information, and so on. A representative work attempted to develop sophisticated features to rank the entities [11]. The results showed that the context features and the special features perform well. Then, the hidden information generated by an entity-topic model was used to enhance the context features [12]. Besides, link information in the KB was leveraged to construct a graph, which was combined with four confidence scores [13]. Lexical and statistical features were added into a unified semantic representation for documents and entities to solve the all-against-all matching problem [9].

To reduce time and alleviate manual work, recent neural network-based models have been investigated to capture the semantic features of mentions and entities, achieving state-of-the-art performance. In [14], a denoising autoencoder was first applied to encode documents and entities. Francis-Landau et al. [15] combined sparse features with multiple-granularity information learned by convolutional neural networks (CNNs). Sun et al. [16] utilized a memory network to automatically find key information for a mention from its surrounding contexts to facilitate entity linking. Ganea and Hofmann [17] combined entity embeddings with contextual attention for local linking. Then, loopy belief propagation was used for global inference and achieved competitive results. RLEL [18] regarded entity linking as a sequential decision problem and gave results based on a reinforcement learning model. Unfortunately, this model only used the previously referred entities and failed to exploit the consistency of subsequent entities. Because of their flexibility and efficiency, graph neural networks can capture better graph representations and focus on the relatedness between entities. Cao et al. [19] utilized a graph convolutional network to encode entity graphs. However, this model only focused on target entities, which generated a lot of noisy data. Wu et al. [20] added local and global features into an entity dependency graph and utilized graph convolutional networks to capture the structural information among entities. Fang et al. [21] generated high-recall candidate sets and introduced a sequential graph attention network to obtain the topical coherence of mentions. However, the construction of graphs is very time consuming, and the complexity of these models is relatively high.

Following the burgeoning popularity of embedding methods on knowledge graphs, some studies have tried to apply embedding algorithms to entity linking. Yamada et al. [22] easily measured the similarities between mentions and entities …

… our proposed model consists of four main components: embedding, fine-tuning BERT, interaction, and prediction. The overall architecture of the model is illustrated in Figure 2.
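The model described above names four components: embedding, fine-tuning BERT, interaction, and prediction. As a rough illustration of how such a pipeline can be wired together, the sketch below scores a (mention sentence, entity description) pair with a shared BERT encoder and a feed-forward prediction head; the model name, the mean pooling, and the concatenation-style interaction are assumptions made for this example (the attention-based interaction was sketched earlier), not the exact ELSR architecture.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class SentencePairScorer(nn.Module):
    """Shared ("Siamese") BERT encoder + simple interaction + prediction head."""

    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        # One encoder shared by both sentences; its weights are updated (fine-tuned) during training.
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(4 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def encode(self, batch):
        # Mean-pool token states into one sentence vector (pooling choice is an assumption).
        out = self.encoder(**batch).last_hidden_state   # (B, L, H)
        mask = batch["attention_mask"].unsqueeze(-1)
        return (out * mask).sum(1) / mask.sum(1)

    def forward(self, ctx_batch, desc_batch):
        u, v = self.encode(ctx_batch), self.encode(desc_batch)
        # Interaction features: both vectors, their absolute difference, and their product.
        feats = torch.cat([u, v, torch.abs(u - v), u * v], dim=-1)
        return self.head(feats).squeeze(-1)              # one linking score per candidate

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SentencePairScorer()
ctx = tokenizer(["This year's Fed Cup finalists will hit the road."],
                return_tensors="pt", padding=True)
desc = tokenizer(["Represents Spain in Fed Cup tennis competition."],
                 return_tensors="pt", padding=True)
print(model(ctx, desc))  # higher scores indicate more likely target entities
```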
