Dynamic Anticipation and Completion for Multi-Hop Reasoning over Sparse Knowledge Graph

Xin Lv1,2, Xu Han1,2, Lei Hou1,2∗, Juanzi Li1,2, Zhiyuan Liu1,2, Wei Zhang3, Yichi Zhang3, Hao Kong3, Suhui Wu3
1Department of Computer Science and Technology, BNRist
2KIRC, Institute for Artificial Intelligence, Tsinghua University, Beijing 100084, China
3Alibaba Group, Hangzhou, China
∗ Corresponding Author

arXiv:2010.01899v1 [cs.CL] 5 Oct 2020

Abstract

Multi-hop reasoning has been widely studied in recent years to seek an effective and interpretable method for knowledge graph (KG) completion. Most previous reasoning methods are designed for dense KGs with enough paths between entities, but cannot work well on sparse KGs that only contain sparse paths for reasoning. On the one hand, sparse KGs contain less information, which makes it difficult for the model to choose correct paths. On the other hand, the lack of evidential paths to target entities also makes the reasoning process difficult. To solve these problems, we propose a multi-hop reasoning model named DacKGR over sparse KGs, by applying novel dynamic anticipation and completion strategies: (1) The anticipation strategy utilizes the latent prediction of embedding-based models to make our model perform more potential path search over sparse KGs. (2) Based on the anticipation information, the completion strategy dynamically adds edges as additional actions during the path search, which further alleviates the sparseness problem of KGs. The experimental results on five datasets sampled from Freebase, NELL and Wikidata show that our method outperforms state-of-the-art baselines. Our codes and datasets can be obtained from https://github.com/THU-KEG/DacKGR

[Figure 1: An illustration of the multi-hop reasoning task over a sparse KG. The missing relations (black dashed arrows) between entities can be inferred from existing triples (solid black arrows) through reasoning paths (bold arrows). However, some relations in the reasoning path are missing (red dashed arrows) in a sparse KG, which makes multi-hop reasoning difficult.]

1 Introduction

Knowledge graphs (KGs) represent the world knowledge in a structured way, and have been proven to be helpful for many downstream NLP tasks like query answering (Guu et al., 2015), dialogue generation (He et al., 2017) and machine reading comprehension (Yang et al., 2019). Despite their wide applications, many KGs still face serious incompleteness (Bordes et al., 2013), which limits their further development and adaptation for related downstream tasks.

To alleviate this issue, some embedding-based models (Bordes et al., 2013; Dettmers et al., 2018) have been proposed, most of which embed entities and relations into a vector space and make link predictions to complete KGs. These models focus on efficiently predicting knowledge but lack necessary interpretability. In order to solve this problem, Das et al. (2018) and Lin et al. (2018) propose multi-hop reasoning models, which use the REINFORCE algorithm (Williams, 1992) to train an agent to search over KGs. These models can not only give the predicted result but also provide an interpretable path that indicates the reasoning process. As shown in the upper part of Figure 1, for a triple query (Olivia Langdon, child, ?), multi-hop reasoning models can predict the tail entity Susy Clemens through a reasoning path (bold arrows).

Although existing multi-hop reasoning models have achieved good results, they still suffer from two problems on sparse KGs: (1) Insufficient information. Compared with normal KGs, sparse KGs
contain less information, which makes it difficult for the agent to choose the correct search direction. (2) Missing paths. In sparse KGs, some entity pairs do not have enough paths between them as reasoning evidence, which makes it difficult for the agent to carry out the reasoning process. As shown in the lower part of Figure 1, there is no evidential path between Mark Twain and English since the relation publish_area is missing. From Table 1 we can learn that some sampled KG datasets are actually sparse. Besides, some domain-specific KGs (e.g., WD-singer) do not have abundant knowledge and also face the problem of sparsity.

Dataset     #Ent    #Rel  #Fact    #degree (mean)  #degree (median)
FB15K-237   14,505  237   272,115  19.74           14
WN18RR      40,945  11    86,835   2.19            2
NELL23K     22,925  200   35,358   2.21            1
WD-singer   10,282  135   20,508   2.35            2

Table 1: The statistics of some benchmark KG datasets. #degree is the outgoing degree of every entity, which indicates the sparsity level.

As the performance of most existing multi-hop reasoning methods drops significantly on sparse KGs, some preliminary efforts, such as CPL (Fu et al., 2019), explore introducing additional text information to ease the sparsity of KGs. Although these explorations have achieved promising results, they are still limited to those specific KGs whose entities have additional text information. Thus, reasoning over sparse KGs is still an important but not fully resolved problem, which requires a more generalized approach.

In this paper, we propose a multi-hop reasoning model named DacKGR, along with two dynamic strategies to solve the two problems mentioned above:

Dynamic Anticipation makes use of the limited information in a sparse KG to anticipate potential targets before the reasoning process. Compared with multi-hop reasoning models, embedding-based models are robust to sparse KGs, because they depend on every single triple rather than paths in the KG. To this end, our anticipation strategy injects the pre-trained embedding-based model's predictions as anticipation information into the states of reinforcement learning. This information can guide the agent to avoid aimlessly searching paths.

Dynamic Completion temporarily expands part of a KG to enrich the options of path expansion during the reasoning process. In sparse KGs, many entities have only a few relations, which limits the choice spaces of the agent. Our completion strategy thus dynamically adds some additional relations (e.g., red dashed arrows in Figure 1) according to the state information of the current entity during searching reasoning paths. After that, for the current entity and an additional relation r, we use a pre-trained embedding-based model to predict the tail entity e. Then, the additional relation r and the predicted tail entity e will form a potential action (r, e) and be added to the action space of the current entity for path expansion.

We conduct experiments on five datasets sampled from Freebase, NELL and Wikidata. The results show that our model DacKGR outperforms previous multi-hop reasoning models, which verifies the effectiveness of our model.

2 Problem Formulation

In this section, we first introduce some symbols and concepts related to normal multi-hop reasoning, and then formally define the task of multi-hop reasoning over sparse KGs.

A knowledge graph KG can be formulated as KG = {E, R, T}, where E and R denote the entity set and relation set respectively. T = {(e_s, r_q, e_o)} ⊆ E × R × E is the triple set, where e_s and e_o are the head and tail entities respectively, and r_q is the relation between them. For every KG, we can use the average out-degree D^out_avg of each entity (node) to define its sparsity. Specifically, if D^out_avg of a KG is larger than a threshold, we say it is a dense or normal KG; otherwise, it is a sparse KG.

Given a graph KG and a triple query (e_s, r_q, ?), where e_s is the source entity and r_q is the query relation, multi-hop reasoning for knowledge graphs aims to predict the tail entity e_o for (e_s, r_q, ?). Different from previous KG embedding tasks, multi-hop reasoning also gives a supporting path {(e_s, r_1, e_1), (e_1, r_2, e_2), ..., (e_{n−1}, r_n, e_o)} over KG as evidence. As mentioned above, we mainly focus on the multi-hop reasoning task over sparse KGs in this paper.

3 Methodology

In this section, we first introduce the whole reinforcement learning framework for multi-hop reasoning, and then detail our two strategies designed for sparse KGs, i.e., dynamic anticipation and dynamic completion. The former strategy introduces the guidance information from embedding-based models to help multi-hop models find the correct direction on sparse KGs.

[Figure 2: An illustration of our policy network with dynamic anticipation and dynamic completion strategies. The vector of e_p is the prediction information introduced in Section 3.3. We use the current state to dynamically select some relations, and use the pre-trained embedding-based model to perform link prediction to obtain an additional action space. The original action space will be merged with the additional action space to form a new action space.]
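The completion recipe described above (pick some extra relations for the current entity, let a pre-trained embedding model predict their tails, and merge the resulting (r, e) pairs into the action space) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the fixed `candidate_rels` list stands in for the learned relation selection, and the TransE-style scorer `transe_score` with toy embeddings stands in for the pre-trained link predictor.

```python
import math

def transe_score(h, r, t):
    # TransE-style plausibility: negative Euclidean distance of h + r from t
    # (higher means a more plausible triple).
    return -math.dist([hi + ri for hi, ri in zip(h, r)], t)

def dynamic_completion(e_t, candidate_rels, ent_emb, rel_emb, entities):
    """For the current entity e_t, predict the most plausible tail for each
    candidate relation with the (toy) embedding model, yielding additional
    actions (r, e)."""
    extra = []
    for r in candidate_rels:
        best_e = max((e for e in entities if e != e_t),
                     key=lambda e: transe_score(ent_emb[e_t], rel_emb[r], ent_emb[e]))
        extra.append((r, best_e))
    return extra

# Toy embeddings, chosen so that A + r1 is close to C.
ent_emb = {"A": (0.0, 0.0), "B": (1.0, 0.0), "C": (0.0, 1.0)}
rel_emb = {"r1": (0.0, 1.0)}

# Merge the predicted actions into the agent's original action space.
original_actions = [("spouse", "B")]
new_action_space = original_actions + dynamic_completion(
    "A", ["r1"], ent_emb, rel_emb, ["A", "B", "C"])
# new_action_space == [("spouse", "B"), ("r1", "C")]
```

In the actual model the number of added actions is small and state-dependent, so the temporary edges enrich path expansion without flooding the agent's choice space.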
Based on this strategy, the dynamic completion strategy introduces some additional actions during the reasoning process to increase the number of paths, which can alleviate the sparsity of KGs.

State A state is denoted s_t = (r_q, e_t, h_t), where e_t is the current entity; the policy network uses an LSTM to encode the historical path information, and h_t is the output of the LSTM at the t-th step.

Action For a state s_t = (r_q, e_t, h_t), if there is a triple (e_t, r_n, e_n) in the KG, (r_n, e_n) is an action of the state s_t. All actions of the state s_t make up its action space A_t = {(r, e) | (e_t, r, e) ∈ T}.
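The action-space definition A_t = {(r, e) | (e_t, r, e) ∈ T} maps directly onto a set comprehension over the triple set. A minimal sketch, using illustrative triples built from the Figure 1 entities (the function name and data are ours, not the paper's):

```python
def action_space(e_t, triples):
    """A_t = {(r, e) | (e_t, r, e) in T}: every outgoing edge of the
    current entity e_t becomes a candidate action (r, e)."""
    return {(r, e) for (h, r, e) in triples if h == e_t}

# Illustrative triples based on the Figure 1 example.
T = {
    ("Mark Twain", "write", "Roughing It"),
    ("Roughing It", "publish_area", "U.S."),
    ("U.S.", "official_language", "English"),
}
print(action_space("Mark Twain", T))  # {('write', 'Roughing It')}
```

In a sparse KG most entities yield only one or two such actions, which is exactly why the dynamic completion strategy augments A_t with predicted edges.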
