Temporal Network Embedding for Link Prediction via VAE Joint Attention Mechanism

Pengfei Jiao, Xuan Guo, Xin Jing, Dongxiao He, Huaming Wu, Member, IEEE, Shirui Pan, Member, IEEE, Maoguo Gong, Senior Member, IEEE, and Wenjun Wang

Abstract— Network representation learning, or embedding, aims to project the network into a low-dimensional space that can be devoted to different network tasks. Temporal networks are an important type of network whose topological structure changes over time. Compared with methods on static networks, temporal network embedding (TNE) methods face three challenges: 1) they cannot describe the temporal dependence across network snapshots; 2) the node embedding in the latent space fails to indicate changes in the network topology; and 3) they cannot avoid a large amount of redundant computation via parameter inheritance on a series of snapshots. To overcome these problems, we propose a novel TNE method based on the VAE framework (TVAE), which builds on a variational autoencoder (VAE) to capture the evolution of temporal networks for link prediction. It not only generates low-dimensional embedding vectors for nodes but also preserves the dynamic nonlinear features of temporal networks. Through the combination of a self-attention mechanism and recurrent neural networks, TVAE can update node representations and keep the temporal dependence of the vectors over time. We utilize parameter inheritance to keep the new embedding close to the previous one, rather than explicitly using regularization, and thus the method is effective for large-scale networks. We evaluate our model and several baselines on synthetic data sets and real-world networks. The experimental results demonstrate that TVAE has superior performance and lower time cost compared with the baselines.

Index Terms— Link prediction, self-attention mechanism, temporal network embedding (TNE), variational autoencoder (VAE).

Manuscript received May 15, 2020; revised November 23, 2020 and May 19, 2021; accepted May 24, 2021. This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFC0809804; in part by the National Natural Science Foundation of China under Grant 61902278, Grant 62071327, and Grant 61876128; and in part by the Tianjin Municipal Science and Technology Project under Grant 19ZXZNGX00030. (Corresponding author: Dongxiao He.) Pengfei Jiao is with the Center of Biosafety Research and Strategy, Law School, Tianjin University, Tianjin 300350, China. Xuan Guo, Xin Jing, Dongxiao He, and Wenjun Wang are with the College of Intelligence and Computing, Tianjin University, Tianjin 300350, China. Huaming Wu is with the Center for Applied Mathematics, Tianjin University, Tianjin 300072, China. Shirui Pan is with the Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia. Maoguo Gong is with the Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, International Research Center for Intelligent Perception and Computation, Xidian University, Xi'an 710071, China. Digital Object Identifier 10.1109/TNNLS.2021.3084957

I. INTRODUCTION

A variety of complex systems can be represented as networks, e.g., protein–protein interaction networks [1], social networks [2], information networks [3], and co-authorship networks [4]. Generally speaking, complex networks have high-dimensional, isomorphism-invariant, and nonlinear features [5]. Hence, directly using the original topological structure of the network in machine learning tasks is inefficient and difficult [6]. Learning latent low-dimensional representations (embeddings) of network nodes is therefore an important tool in data mining and complex network analysis. Network embeddings can then be utilized in a variety of applications, including node classification [7], community detection [8], [9], link prediction [10], knowledge graphs [11], and recommender systems [6]. Thus, how to find low-dimensional embeddings that capture the essential features and properties of the network is an important and essential challenge.

Recently, various approaches for network embedding have been developed, including methods based on random walks, matrix factorization, and deep learning. Random walk-based methods usually approximate many properties of the network, including node centrality and similarity, by estimating co-occurrence probabilities [12]–[14]. Matrix factorization-based methods usually decompose the adjacency or higher order similarity matrix of the network into a low-dimensional space [15], [16]. Deep learning-based methods are adopted to capture the nonlinear structure of the network [17], [18]. There are also several surveys on network embedding [6], [19]–[21]. To sum up, these methods and models are designed only for static networks, which do not undergo structural changes over time.
In the real world, however, the topological structure of a network often varies with time; such a network is referred to as a temporal network [22]. Compared with static networks, temporal networks present diverse evolutionary mechanisms [23], [24], which makes embedding them more complicated and difficult. For instance, in communication networks, complex interactions are ubiquitous, and certain events can give rise to dramatic changes in the topological structure. A temporal network is usually represented as a sequence of snapshots at different time steps [25], and there are complex and intrinsic associations and transformations across the snapshots [26]. Static embedding methods can only handle these network snapshots separately; they cannot discover the dependencies between snapshots because they ignore historical information. Therefore, these methods fail to model the evolution of temporal networks, and their prediction performance on temporal networks is not good. In addition, they struggle with large-scale networks due to the independent and repetitive calculations over all the snapshots.
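As a concrete illustration of the snapshot representation referred to above, the following minimal sketch (our own illustration, not taken from the paper; the function name `build_snapshots` and the equal-width time windows are assumptions) converts a list of timestamped edges into a sequence of adjacency matrices, one per time step.

```python
import numpy as np

def build_snapshots(edges, num_nodes, num_steps):
    """Split timestamped edges (u, v, t) into a sequence of adjacency matrices.

    `edges` is an iterable of (u, v, t) with normalized timestamps 0 <= t < 1;
    the time axis is cut into `num_steps` equal windows, one snapshot each.
    This is only an illustrative convention, not the paper's preprocessing.
    """
    snapshots = [np.zeros((num_nodes, num_nodes)) for _ in range(num_steps)]
    for u, v, t in edges:
        k = min(int(t * num_steps), num_steps - 1)        # window index of this edge
        snapshots[k][u, v] = snapshots[k][v, u] = 1.0      # undirected, unweighted
    return snapshots

# Example: 4 nodes observed over 3 snapshots
edges = [(0, 1, 0.05), (1, 2, 0.40), (2, 3, 0.75), (0, 3, 0.90)]
A_seq = build_snapshots(edges, num_nodes=4, num_steps=3)
```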
Although some embedding methods for temporal or dynamic networks have been proposed [27]–[29], several unfathomed challenges remain in existing research: 1) the embeddings should effectively characterize and describe the dependence across the temporal network, since the formation of networks in the real world is usually sequential; 2) the node embeddings in the latent space should reflect and predict changes in the network topology; and 3) redundant computation over the series of snapshots should be avoided, for example by limiting and inheriting model parameters, for efficiency and scalability. Therefore, a method should effectively and efficiently model dynamic information and predict changes in temporal networks.

Fig. 1. Three consecutive network snapshots and their representation in the latent space. Different arrows express the three challenges of the temporal network embedding (TNE) task: 1) the dependence between consecutive snapshots should be captured in the embeddings; 2) the varying network structure needs to be reflected and predicted by the embeddings; and 3) since many consecutive snapshots may be input, the number of model parameters needs to be limited for scalability.

To cope with these challenges, we aim to devise an end-to-end TNE method for link prediction. It generates the node embeddings at each snapshot, combining the generation of links with the dynamic changes of the network, and can be used for link prediction.

To learn the essential features of the temporal network, we employ a variational autoencoder (VAE) [30] to produce the representations of each snapshot for the prediction of the temporal network. In detail, we deem that the generation of links in the network follows a nonlinear rule; accordingly, the node embeddings in the latent space may follow a Gaussian or more complex distribution. We then reconstruct the topology of the next network snapshot, so the generated temporal embedding vectors reflect the change of the network topology. Furthermore, to overcome the fuzziness of the VAE, we introduce a self-attention mechanism [31] to guide the embedding in a more reasonable direction, because the attention mechanism can extract key structural information in networks [32]. In the meantime, we try to preserve the implicit temporal dependence of the network in the latent space. In the proposed model, we utilize a network embedding sequence with the long short-term memory (LSTM) architecture [33] to preserve significant features of the temporal network, taking advantage of the forget gate and the output gate to conserve essential relationships.
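To make the described pipeline concrete, the following PyTorch-style sketch outlines one possible realization under our own assumptions; it is not the authors' implementation, and the module name `TemporalVAESketch`, the layer sizes, and the single-layer and head-count choices are hypothetical. A VAE encoder maps each snapshot's adjacency rows to Gaussian latent node embeddings, a self-attention layer re-weights them, an LSTM carries each node's embedding sequence across snapshots, and an inner-product decoder scores candidate links of the next snapshot.

```python
import torch
import torch.nn as nn


class TemporalVAESketch(nn.Module):
    """Illustrative sketch only: per-snapshot VAE encoder + self-attention + LSTM."""

    def __init__(self, num_nodes, hidden_dim=128, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_nodes, hidden_dim), nn.ReLU())
        self.to_mu = nn.Linear(hidden_dim, latent_dim)      # Gaussian mean
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)  # Gaussian log-variance
        self.attn = nn.MultiheadAttention(latent_dim, num_heads=4, batch_first=True)
        self.lstm = nn.LSTM(latent_dim, latent_dim, batch_first=True)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)             # sample z ~ N(mu, sigma^2)

    def forward(self, adj_seq, state=None):
        """adj_seq: tensor of shape (T, N, N), a sequence of snapshot adjacency matrices."""
        z_per_step, kl_terms = [], []
        for adj in adj_seq:                                  # encode every snapshot
            h = self.encoder(adj)                            # (N, hidden_dim)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            kl_terms.append(-0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp()))
            z = self.reparameterize(mu, logvar)              # (N, latent_dim)
            z, _ = self.attn(z.unsqueeze(0), z.unsqueeze(0), z.unsqueeze(0))
            z_per_step.append(z.squeeze(0))                  # self-attention over the nodes
        z_seq = torch.stack(z_per_step, dim=1)               # (N, T, latent_dim): per-node sequence
        h_seq, state = self.lstm(z_seq, state)               # temporal dependence across snapshots
        z_last = h_seq[:, -1, :]                             # node embeddings after step T
        logits = z_last @ z_last.t()                         # inner-product link scores for step T+1
        return logits, torch.stack(kl_terms).mean(), state
```

Training such a sketch would combine a binary cross-entropy reconstruction loss between `logits` and the next snapshot's adjacency matrix with the KL term, and, in the spirit of the parameter inheritance mentioned in the abstract, the weights learned up to snapshot t could be carried over as the initialization for snapshot t+1 instead of retraining from scratch.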
