Deep Learning on Graphs: A Survey


Ziwei Zhang, Peng Cui and Wenwu Zhu, Fellow, IEEE

Z. Zhang, P. Cui, and W. Zhu are with the Department of Computer Science and Technology, Tsinghua University, Beijing, China. E-mail: [email protected], [email protected], [email protected]. P. Cui and W. Zhu are corresponding authors.

Abstract—Deep learning has been shown to be successful in a number of domains, ranging from acoustics and images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner, mainly by following their development history. We also analyze the differences between these methods and how different architectures can be composed. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.

Index Terms—Graph Data, Deep Learning, Graph Neural Network, Graph Convolutional Network, Graph Autoencoder.

1 INTRODUCTION

Over the past decade, deep learning has become the “crown jewel” of artificial intelligence and machine learning [1], showing superior performance in acoustics [2], images [3], and natural language processing [4], etc. The expressive power of deep learning to extract complex patterns from underlying data is well recognized. On the other hand, graphs¹ are ubiquitous in the real world, representing objects and their relationships in varied domains, including social networks, e-commerce networks, biology networks, traffic networks, and so on. Graphs are also known to have complicated structures that can contain rich underlying values [5]. As a result, how to utilize deep learning methods to analyze graph data has attracted considerable research attention over the past few years. This problem is non-trivial because several challenges exist in applying traditional deep learning architectures to graphs:

• Irregular structures of graphs. Unlike images, audio, and text, which have a clear grid structure, graphs have irregular structures, making it hard to generalize some of the basic mathematical operations to graphs [6]. For example, defining convolution and pooling operations, which are the fundamental operations in convolutional neural networks (CNNs), for graph data is not straightforward. This problem is often referred to as the geometric deep learning problem [7].

• Heterogeneity and diversity of graphs. A graph itself can be complicated, containing diverse types and properties. For example, graphs can be heterogeneous or homogeneous, weighted or unweighted, and signed or unsigned. In addition, the tasks on graphs also vary widely, ranging from node-focused problems such as node classification and link prediction to graph-focused problems such as graph classification and graph generation. These diverse types, properties, and tasks require different model architectures to tackle specific problems.

• Large-scale graphs. In the big-data era, real graphs can easily have millions or billions of nodes and edges; some well-known examples are social networks and e-commerce networks [8]. Therefore, how to design scalable models, preferably models that have a linear time complexity with respect to the graph size, is a key problem.

• Incorporating interdisciplinary knowledge. Graphs are often connected to other disciplines, such as biology, chemistry, and social sciences. This interdisciplinary nature provides both opportunities and challenges: domain knowledge can be leveraged to solve specific problems, but integrating domain knowledge can complicate model designs. For example, when generating molecular graphs, the objective function and chemical constraints are often non-differentiable; therefore, gradient-based training methods cannot easily be applied.

¹ Graphs are also called networks, as in social networks. In this paper, we use the two terms interchangeably.

To tackle these challenges, tremendous efforts have been made in this area, resulting in a rich literature of related papers and methods.
The adopted architectures and training strategies also vary greatly, ranging from supervised to unsupervised and from convolutional to recursive. However, to the best of our knowledge, little effort has been made to systematically summarize the differences and connections between these diverse methods.

In this paper, we try to fill this knowledge gap by comprehensively reviewing deep learning methods on graphs. Specifically, as shown in Figure 1, we divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks (Graph RNNs), graph convolutional networks (GCNs), graph autoencoders (GAEs), graph reinforcement learning (Graph RL), and graph adversarial methods. We summarize some of the main characteristics of these categories in Table 1 based on the following high-level distinctions. Graph RNNs capture recursive and sequential patterns of graphs by modeling states at either the node level or the graph level. GCNs define convolution and readout operations on irregular graph structures to capture common local and global structural patterns. GAEs assume low-rank graph structures and adopt unsupervised methods for node representation learning. Graph RL defines graph-based actions and rewards to obtain feedback on graph tasks while following constraints. Graph adversarial methods adopt adversarial training techniques to enhance the generalization ability of graph-based models and test their robustness by adversarial attacks.

Fig. 1. A categorization of deep learning methods on graphs. We divide the existing methods into five categories: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods.
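To make the graph convolution and readout operations mentioned above concrete, the sketch below implements one neighborhood-aggregation layer using the widely used symmetrically normalized adjacency matrix, followed by a mean-pooling readout. It is a generic, minimal illustration rather than the exact formulation of any method reviewed in this survey; the function names and the toy graph are illustrative only.

import numpy as np

def gcn_layer(A, H, W):
    # One graph-convolution step: aggregate neighbor features through a
    # normalized adjacency matrix, then apply a linear map and a ReLU.
    # A: (N, N) adjacency, H: (N, d_in) node states, W: (d_in, d_out) weights.
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)      # ReLU(A_norm H W)

def readout(H):
    # A simple graph-level readout: mean-pool all node representations.
    return H.mean(axis=0)

# Toy usage on a 4-node path graph with random features and weights.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 8))        # initial node features
W1 = rng.normal(size=(8, 16))
H1 = gcn_layer(A, H0, W1)           # node-level representations, shape (4, 16)
g = readout(H1)                     # graph-level representation, shape (16,)

In this sketch, node-focused tasks (e.g., node classification) would operate on the per-node matrix H1, while graph-focused tasks (e.g., graph classification) would operate on the pooled vector g.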
TABLE 1
Main Distinctions among Deep Learning Methods on Graphs

Category                        | Basic Assumptions/Aims                                       | Main Functions
Graph recurrent neural networks | Recursive and sequential patterns of graphs                  | Definitions of states for nodes or graphs
Graph convolutional networks    | Common local and global structural patterns of graphs       | Graph convolution and readout operations
Graph autoencoders              | Low-rank structures of graphs                                | Unsupervised node representation learning
Graph reinforcement learning    | Feedback and constraints of graph tasks                      | Graph-based actions and rewards
Graph adversarial methods       | Generalization ability and robustness of graph-based models | Graph adversarial training and attacks

In the following sections, we provide a comprehensive and detailed overview of these methods, mainly by following their development history and the various ways these methods solve the challenges posed by graphs. We also analyze the differences between these models and delve into how to compose different architectures. Finally, we briefly outline the applications of these models, introduce several open libraries, and discuss potential future research directions. In the appendix, we provide a source code repository, analyze the time complexity of various methods discussed in the paper, and summarize some common applications.

Related works. Several previous surveys are related to our paper. Bronstein et al. [7] summarized some early GCN methods as well as CNNs on manifolds and studied them comprehensively through geometric deep learning. Battaglia et al. [9] summarized how to use GNNs and GCNs for relational reasoning using a unified framework called graph networks, Lee et al. [10] reviewed the attention models for graphs, Zhang et al. [11] summarized some GCNs, and Sun et al. [12] briefly surveyed adversarial attacks on graph data.

We introduce Graph RNNs, GCNs, GAEs, Graph RL, and graph adversarial methods in Sections 3 to 7, respectively. We conclude with a discussion in Section 8.

2 NOTATIONS AND PRELIMINARIES

Notations. In this paper, a graph is represented as $G = (V, E)$, where $V = \{v_1, \dots, v_N\}$ is a set of $N = |V|$ nodes and $E \subseteq V \times V$ is a set of $M = |E|$ edges between nodes. We use $\mathbf{A} \in \mathbb{R}^{N \times N}$ to denote the adjacency matrix, whose $i$-th row, $j$-th column, and individual elements are denoted as $\mathbf{A}(i,:)$, $\mathbf{A}(:,j)$, and $\mathbf{A}(i,j)$, respectively. The graph can be either directed or undirected and weighted or unweighted. In this paper, we mainly consider unsigned graphs; therefore, $\mathbf{A}(i,j) \geq 0$. Signed graphs will be discussed in future research directions. We use $\mathbf{F}^V$ and $\mathbf{F}^E$ to denote features of nodes and edges, respectively. For other variables, we use bold uppercase characters to denote matrices and bold lowercase characters to denote vectors, e.g., a matrix $\mathbf{X}$ and a vector $\mathbf{x}$. The transpose of a matrix is denoted as $\mathbf{X}^T$ and element-wise multiplication is denoted as $\mathbf{X}_1 \odot \mathbf{X}_2$. Functions are marked with curlicues, e.g., $\mathcal{F}(\cdot)$.

To better illustrate
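As a minimal, hypothetical companion to the notation defined in Section 2 (not part of the original paper), the following sketch shows how these quantities are commonly represented as arrays; the variable names simply mirror the symbols above.

import numpy as np

# A toy undirected, unweighted graph G = (V, E) with N = 4 nodes and M = 3 edges.
N = 4
edges = [(0, 1), (1, 2), (2, 3)]               # E ⊆ V × V, M = |E| = 3

A = np.zeros((N, N))                            # adjacency matrix A ∈ R^{N×N}
for i, j in edges:
    A[i, j] = A[j, i] = 1.0                     # undirected graph: A is symmetric

assert (A >= 0).all()                           # unsigned graph: A(i, j) >= 0

F_V = np.random.randn(N, 5)                     # node feature matrix F^V (5 features per node)
F_E = np.random.randn(len(edges), 2)            # edge feature matrix F^E (2 features per edge)

row_i = A[1, :]                                 # A(i, :), the i-th row
col_j = A[:, 2]                                 # A(:, j), the j-th column
A_T = A.T                                       # matrix transpose X^T
X_had = A * A                                   # element-wise multiplication X_1 ⊙ X_2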
