
The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)

Bayesian Graph Convolutional Neural Networks for Semi-Supervised Classification

Yingxue Zhang∗
Huawei Noah's Ark Lab, Montreal Research Centre, 7101 Avenue du Parc, H3N 1X9, Montreal, QC, Canada

Soumyasundar Pal∗ and Mark Coates
Dept. Electrical and Computer Engineering, McGill University, 3480 University St, H3A 0E9, Montreal, QC, Canada

Deniz Üstebay
Huawei Noah's Ark Lab, Montreal Research Centre, 7101 Avenue du Parc, H3N 1X9, Montreal, QC, Canada

∗These authors contributed equally to this work.
Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Recently, techniques for applying convolutional neural networks to graph-structured data have emerged. Graph convolutional neural networks (GCNNs) have been used to address node and graph classification and matrix completion. Although the performance has been impressive, the current implementations have limited capability to incorporate uncertainty in the graph structure. Almost all GCNNs process a graph as though it is a ground-truth depiction of the relationship between nodes, but often the graphs employed in applications are themselves derived from noisy data or modelling assumptions. Spurious edges may be included; other edges may be missing between nodes that have very strong relationships. In this paper we adopt a Bayesian approach, viewing the observed graph as a realization from a parametric family of random graphs. We then target inference of the joint posterior of the random graph parameters and the node (or graph) labels. We present the Bayesian GCNN framework and develop an iterative learning procedure for the case of assortative mixed-membership stochastic block models. We present the results of experiments that demonstrate that the Bayesian formulation can provide better performance when there are very few labels available during the training process.

1 Introduction

Novel approaches for applying convolutional neural networks to graph-structured data have emerged in recent years. Commencing with the work in (Bruna et al. 2013; Henaff, Bruna, and LeCun 2015), there have been numerous developments and improvements. Although these graph convolutional neural networks (GCNNs) are promising, the current implementations have limited capability to handle uncertainty in the graph structure, and treat the graph topology as ground-truth information. This in turn leads to an inability to adequately characterize the uncertainty in the predictions made by the neural network.

In contrast to this past work, we employ a Bayesian framework and view the observed graph as a realization from a parametric random graph family. The observed adjacency matrix is then used in conjunction with features and labels to perform joint inference. The results reported in this paper suggest that this formulation, although computationally more demanding, can lead to an ability to learn more from less data, a better capacity to represent uncertainty, and better robustness and resilience to noise or adversarial attacks.

In this paper, we present the novel Bayesian GCNN framework and discuss how inference can be performed. To provide a concrete example of the approach, we focus on a specific random graph model, the assortative mixed membership block model. We address the task of semi-supervised classification of nodes and examine the resilience of the derived architecture to random perturbations of the graph topology.

2 Related work

A significant body of research focuses on using neural networks to analyze structured data when there is an underlying graph describing the relationship between data items. Early work led to the development of the graph neural network (GNN) (Frasconi, Gori, and Sperduti 1998; Scarselli, Gori, and others 2009; Li et al. 2016b). The GNN approaches rely on recursive processing and propagation of information across the graph. Training can often take a long time to converge and the required time scales undesirably with respect to the number of nodes in the graph, although recently an approach to mitigate this has been proposed by (Liao, Brockschmidt, and others 2018).

Graph convolutional neural networks (GCNNs) have emerged more recently, with the first proposals in (Bruna et al. 2013; Henaff, Bruna, and LeCun 2015; Duvenaud, Maclaurin, and others 2015). A spectral filtering approach was introduced in (Defferrard, Bresson, and Vandergheynst 2016) and this method was simplified or improved in (Kipf and Welling 2017; Levie, Monti, and others 2017; Chen, Ma, and Xiao 2018). Spatial filtering or aggregation strategies were adopted in (Atwood and Towsley 2016; Hamilton, Ying, and Leskovec 2017). A general framework for training neural networks on graphs and manifolds was presented by (Monti, Boscaini, and others 2017) and the authors explain how several of the other methods can be interpreted as special cases.

The performance of the GCNNs can be improved by incorporating attention nodes (Veličković et al. 2018), leading to the graph attention network (GAT). Experiments have also demonstrated that gates, edge conditioning, and skip connections can prove beneficial (Bresson and Laurent 2017; Sukhbaatar, Szlam, and Fergus 2016; Simonovsky and Komodakis 2017). In some problem settings it is also beneficial to consider an ensemble of graphs (Anirudh and Thiagarajan 2017), multiple adjacency matrices (Such, Sah, and others 2017) or the dual graph (Monti, Shchur, and others 2018). Compared to this past work, the primary methodological novelty in our proposed approach involves the adoption of a Bayesian framework and the treatment of the observed graph as additional data to be used during inference.

Although there are many different flavours of GCNNs, all current versions process the graph as though it is a ground-truth depiction of the relationship between nodes. This is despite the fact that in many cases the graphs employed in applications are themselves derived from noisy data or modelling assumptions. Spurious edges may be included; other edges may be missing between nodes that have very strong relationships. Incorporating attention mechanisms as in (Veličković et al. 2018) addresses this to some extent; attention nodes can learn that some edges are not representative of a meaningful relationship and reduce the impact that the nodes have on one another. But the attention mechanisms, for computational expediency, are limited to processing existing edges; they cannot create an edge where one should probably exist. This is also a limitation of the ensemble approach of (Anirudh and Thiagarajan 2017), where learning is performed on multiple graphs derived by erasing some edges in the graph.

There is a rich literature on Bayesian neural networks, commencing with pioneering work (Tishby, Levin, and Solla 1989; Denker and Lecun 1991; MacKay 1992; Neal 1993) and extending to more recent contributions (Hernández-Lobato and Adams 2015; Gal and Ghahramani 2016; Sun, Chen, and Carin 2017; Louizos and Welling 2017). To the best of our knowledge, Bayesian neural networks have not yet been developed for the analysis of data on graphs.

3 Background

Graph convolutional neural networks (GCNNs)

Although graph convolutional neural networks can be applied to a variety of inference tasks, in order to make the description more concrete we consider the task of identifying the labels of nodes in a graph. Suppose that we observe a graph G_obs = (V, E), comprised of a set of N nodes V and a set of edges E. For each node we measure data (or derive features), denoted x_i for node i. For some subset of the nodes L ⊂ V, we can also measure labels Y_L = {y_i : i ∈ L}. In a classification context, the label y_i identifies a category; in a regression context y_i can be real-valued. Our task is to use the features x and the observed graph structure G_obs to estimate the labels of the unlabelled nodes.
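As a concrete illustration of this setup, the following minimal NumPy sketch instantiates the objects just defined. The toy graph, the feature dimension, and all variable names (A_obs, X, labeled_idx, Y_L) are our own illustrative assumptions rather than anything specified in the paper.

```python
import numpy as np

# Toy instance of the setting above: N = 4 nodes, D = 2 features per node.
N, D = 4, 2

# Adjacency matrix of G_obs = (V, E): entry (i, j) is 1 iff (i, j) is in E.
A_obs = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0]], dtype=float)

# Feature matrix: row i holds the measured features x_i of node i.
X = np.random.randn(N, D)

# Labels are observed only on the subset L of V (here nodes 0 and 3).
labeled_idx = np.array([0, 3])   # the set L
Y_L = np.array([0, 1])           # y_i for each i in L (category indices)

# The task is to predict y_i for the remaining, unlabelled nodes.
unlabeled_idx = np.setdiff1d(np.arange(N), labeled_idx)
```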
A GCNN performs this task by performing graph convolution operations within a neural network architecture. Collecting the feature vectors as the rows of a matrix X, the layers of a GCNN (Defferrard, Bresson, and Vandergheynst 2016; Kipf and Welling 2017) are of the form:

H^{(1)} = \sigma(A_G X W^{(0)})    (1)

H^{(l+1)} = \sigma(A_G H^{(l)} W^{(l)})    (2)

Here W^{(l)} are the weights of the neural network at layer l, H^{(l)} are the output features from layer l − 1, and σ is a nonlinear activation function.
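The sketch below implements the layer-wise propagation of eqs. (1) and (2). This excerpt does not spell out the form of A_G, so the sketch assumes the renormalized adjacency A_G = D^{-1/2}(A + I)D^{-1/2} popularized by Kipf and Welling (2017) and uses ReLU for σ; both of these choices, and the function names, are assumptions made for illustration.

```python
import numpy as np

def normalized_adjacency(A):
    # One common choice for A_G (Kipf and Welling 2017):
    # A_G = D^{-1/2} (A + I) D^{-1/2}, where D is the degree matrix of A + I.
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcnn_forward(A, X, weights):
    # Eqs. (1)-(2): H^{(1)} = sigma(A_G X W^{(0)}),
    #               H^{(l+1)} = sigma(A_G H^{(l)} W^{(l)}),
    # with sigma taken to be ReLU in this sketch.
    A_G = normalized_adjacency(A)
    H = X
    for W in weights:
        H = np.maximum(A_G @ H @ W, 0.0)
    return H

# Usage with the toy objects from the earlier sketch: a two-layer network
# with 8 hidden units and 2 output classes (sizes are arbitrary choices).
weights = [np.random.randn(2, 8) * 0.1, np.random.randn(8, 2) * 0.1]
# H_out = gcnn_forward(A_obs, X, weights)   # rows are per-node outputs
```

For a classification network, the final layer's σ would typically be a softmax over classes rather than a ReLU, matching the categorical likelihood discussed in the next subsection.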
Bayesian neural networks

We consider the case where we have training inputs X = {x_1, ..., x_n} and corresponding outputs Y = {y_1, ..., y_n}. Our goal is to learn a function y = f(x) via a neural network with fixed configuration (number of layers, activation function, etc., so that the weights are sufficient statistics for f) that provides a likely explanation for the relationship between x and y. The weights W are modelled as random variables in a Bayesian approach and we introduce a prior distribution over them. Since W is not deterministic, the output of the neural network is also a random variable. Prediction for a new input x can be formed by integrating with respect to the posterior distribution of W as follows:

p(y|x, X, Y) = \int p(y|x, W) p(W|X, Y) \, dW    (3)

The term p(y|x, W) can be viewed as a likelihood; in a classification task it is modelled using a categorical distribution by applying a softmax function to the output of the neural network; in a regression task a Gaussian likelihood is often an appropriate choice. The integral in eq. (3) is in general intractable. Various techniques for inference of p(W|X, Y) have been proposed in the literature, including expectation propagation (Hernández-Lobato and Adams 2015), variational inference (Gal and Ghahramani 2016; Sun, Chen, and Carin 2017; Louizos and Welling 2017), and Markov Chain Monte Carlo methods (Neal 1993; Korattikara et al.
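In practice the intractable integral in eq. (3) is typically approximated by averaging the likelihood over samples of W obtained from one of these inference schemes. A minimal sketch of that Monte Carlo step for a classifier follows; the function names, and the assumption that some sampler has already produced weight_samples, are ours rather than the paper's.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predictive_distribution(x, weight_samples, forward):
    # Monte Carlo approximation of eq. (3):
    #   p(y | x, X, Y) ~= (1/S) * sum_s p(y | x, W_s),  W_s ~ p(W | X, Y).
    # `forward(x, W)` returns the network's logits for weights W; the
    # softmax converts them into the categorical likelihood p(y | x, W).
    probs = [softmax(forward(x, W_s)) for W_s in weight_samples]
    return np.mean(probs, axis=0)
```

Variational methods such as Monte Carlo dropout (Gal and Ghahramani 2016) would realize weight_samples by drawing from an approximate posterior, while MCMC methods sample from the posterior directly.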