Comparing Graph Spectra of Adjacency and Laplacian Matrices
J. F. Lutzeyer & A. T. Walden

arXiv:1712.03769v1 [stat.ME] 11 Dec 2017

Abstract—Typically, graph structures are represented by one of three different matrices: the adjacency matrix, the unnormalised and the normalised graph Laplacian matrices. The spectral (eigenvalue) properties of these different matrices are compared. For each pair, the comparison is made by applying an affine transformation to one of them, which enables comparison whilst preserving certain key properties such as normalised eigengaps. Bounds are given on the eigenvalue differences thus found, which depend on the minimum and maximum degree of the graph. The monotonicity of the bounds and the structure of the graphs are related. The bounds on a real social network graph, and on three model graphs, are illustrated and analysed. The methodology is extended to provide bounds on normalised eigengap differences which again turn out to be in terms of the graph's degree extremes. It is found that if the degree extreme difference is large, different choices of representation matrix may give rise to disparate inference drawn from graph signal processing algorithms; smaller degree extreme differences result in consistent inference, whatever the choice of representation matrix. The different inference drawn from signal processing algorithms is visualised using the spectral clustering algorithm on the three representation matrices corresponding to a model graph and a real social network graph.

Index Terms—adjacency matrix, affine transformation, graph Laplacian, spectrum, graph signal processing, degree difference

Copyright (c) 2017 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [email protected]. Johannes P Lutzeyer and Andrew Walden are both at the Department of Mathematics, Imperial College London, 180 Queen's Gate, London SW7 2AZ, UK. (e-mail: [email protected] and [email protected]).

I. INTRODUCTION

Graph structures are usually represented by one of three different matrices: the adjacency matrix, and the unnormalised and normalised graph Laplacian matrices. We will refer to these three matrices as representation matrices. In recent years there has been an increasing interest in the use of graph structures for modelling purposes and their analysis. An overview of the numerous advances in the area of signal processing on graphs is provided in [14]. [7] contains a vast bibliography and overview of the applications of graph spectra in engineering, computer science, mathematics and other areas. [13] address the big data topic in the context of graph structures by introducing different product graphs and showing how discrete signal processing is sped up by utilising the product graph structure.

In a particular analysis, given a choice of one representation matrix, the spectral (eigenvalue) properties of the matrix are often utilised as in, for example, the spectral clustering algorithm [17] and graph wavelets [15]. Spectral clustering has had a big impact in the recent analysis of graphs, with active research on the theoretical guarantees which can be given for a clustering determined in this way ([11], [12]). [15] use their construction of graph wavelets, utilising both the spectrum and the eigenvectors in their design, to obtain clusters on several different scales.

The different representation matrices are well established in the literature. A shifted version of the symmetric normalised graph Laplacian is used in the analysis by [12]. [14] work with the unnormalised graph Laplacian L and remark on both normalised graph Laplacians Lrw and Lsym, while [13] use the adjacency matrix and remark on the unnormalised graph Laplacian L. [15] use the symmetric normalised graph Laplacian Lsym in the construction of their graph wavelets. [11] perform their analysis using the adjacency matrix A and mention the extension of their results to the graph Laplacian as future work. The question of what would have happened if another representation had been chosen instead arises naturally when reading the networks and graph literature. The purpose of this paper is to compare the spectral properties of these different matrices, to try to gain some insight into this issue.

Our approach is to firstly apply an affine transformation to one of the matrices, before making the comparison. The transformation preserves certain key properties, such as normalised eigengaps (differences between successive eigenvalues) and the ordering of eigenvalues.

Consider a graph G = (V, E) with vertex set V, where |V| = n, and edge set E. We will assume our graphs to be undirected and simple. An edge between vertices vi and vj has edge weight wij, which is defined to be 0 if there exists no edge between vi and vj. If the edge weights are binary, then the graph is said to be unweighted; otherwise it is a weighted graph. The matrix holding the edge weights is the adjacency matrix A ∈ [0, 1]^{n×n} of the graph, defined to have entries Aij = wij (the maximal weight of an edge, unity, can be achieved by normalising by the maximal edge weight if necessary). The adjacency matrix is one of the standard graph representation matrices considered here.

The degree di of the ith vertex is defined to be the sum of the weights of all edges connecting this vertex with others, i.e., di = Σ_{j=1}^{n} wij, and the degree matrix D is defined to be the diagonal matrix D = diag(d1, ..., dn). The degree sequence is the set of vertex degrees {d1, ..., dn}. The minimal and maximal degrees in the degree sequence are denoted dmin and dmax, respectively. (An important special case is that of d-regular graphs, in which each vertex is of degree d, so that dmax = dmin = d, and the degree matrix is a scalar multiple of the identity matrix, D = dI, where I is the identity matrix.)

For a general degree matrix D, the unnormalised graph Laplacian, L, is defined as

L = D − A.

In the literature two normalised Laplacians are considered [17]; these follow from the unnormalised graph Laplacian by use of the inverse of the degree matrix as follows:

Lrw = D^{−1} L;  Lsym = D^{−1/2} L D^{−1/2}.  (1)

Since they are related by a similarity transform, the sets of eigenvalues of these two normalised Laplacians are identical, and we only need to concern ourselves with one: we work with Lrw. The three representation matrices we utilise in this paper are thus A, L and Lrw.

We use the common nomenclature that the set of eigenvalues of a matrix is called its spectrum. In order to distinguish eigenvalues of A, L and Lrw we use µ for an eigenvalue of A, λ for an eigenvalue of L and η for one of Lrw. We write λ(L̂) to denote an eigenvalue of a perturbation L̂ of L, and likewise for the other matrices.

A. Motivation of the affine transformations

For d-regular graphs, since D = dI, the spectra of the three graph representation matrices are exactly related via known affine transformations, see for example [16, p. 71]. For general graphs, the relation of the representation spectra is non-linear. The transformation between representation spectra can be shown to be either non-existent, if repeated eigenvalues in one spectrum correspond to unequal eigenvalues in another spectrum, or to be polynomial of degree smaller than or equal to n, see Appendix A. In practice, finding the polynomial mapping between spectra is subject to significant numerical issues, which are also outlined in Appendix A; furthermore, crucial spectral properties such as eigengap size and eigenvalue ordering are not preserved by these non-linear transformations. Importantly, in the spectral clustering algorithm, eigengaps are used to determine the number of clusters present in the network, while the eigenvalue ordering, an even more fundamental property, is used to identify which […] Section II). Therefore, a reflection of one of the spectra using a multiplicative constant is a sensible preprocessing step before comparing spectra. Furthermore, using an affine transformation has the advantage that we are able to map the spectral supports of the three representation matrices onto each other to further increase comparability of the spectra. This mapping of spectral supports requires scaling.

Let X and Y be any possible pair of the three representation matrices to be compared. Our general approach is to define an affine transform of one of them, say F(X). We then study bounds on the eigenvalue error

|λi(F(X)) − λi(Y)|,

where λi(Y) is the ith eigenvalue of Y.

Remark 1: For d-regular graphs our transformations recover the exact relations between the spectra of the three matrices. This is a crucial baseline to test when considering maps between the representation spectra.

B. Contributions

The contributions of this paper are:
1) A framework under which the representation matrices can be compared. Central to this framework is an affine transformation, mapping one matrix as closely as possible on to the other in spectral norm. This procedure is rather general and is likely to be applicable in other matrix comparisons. Properties of these transforms, such as normalised eigengap preservation, are discussed.
2) The quantification of the spectral difference between the representation matrices at an individual eigenvalue level, and at an eigengap level, via closed-form bounds.
3) A partition of graphs, according to their degree extremes, derived from the bounds, which enables an interesting analysis of our results.
4) The recognition that if the degree extreme difference is large, different choices of representation matrix may give rise to disparate inference drawn from graph signal processing algorithms; smaller degree extreme differences will result in consistent inference, whatever the choice of representation matrix.
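The definitions and claims above can be checked numerically. The following minimal sketch (not part of the paper; the 4-cycle graph is an arbitrary illustrative choice) constructs A, D, L, Lrw and Lsym with numpy, confirms that Lrw and Lsym share a spectrum because they are similar matrices, and verifies the exact affine relations between the spectra for a d-regular graph, here with d = 2.

```python
import numpy as np

# Adjacency matrix of a small undirected, unweighted graph
# (a 4-cycle: every vertex has degree 2, so the graph is 2-regular).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

d = A.sum(axis=1)                      # degree sequence d_1, ..., d_n
D = np.diag(d)                         # degree matrix
L = D - A                              # unnormalised graph Laplacian
L_rw = np.diag(1.0 / d) @ L            # random-walk normalised Laplacian, D^{-1} L
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = D_inv_sqrt @ L @ D_inv_sqrt    # symmetric normalised Laplacian, D^{-1/2} L D^{-1/2}

mu = np.sort(np.linalg.eigvals(A).real)        # eigenvalues mu of A
lam = np.sort(np.linalg.eigvals(L).real)       # eigenvalues lambda of L
eta = np.sort(np.linalg.eigvals(L_rw).real)    # eigenvalues eta of L_rw
eta_sym = np.sort(np.linalg.eigvals(L_sym).real)

# L_rw and L_sym are related by a similarity transform, so their spectra coincide.
assert np.allclose(eta, eta_sym)

# For a d-regular graph (here d = 2) the spectra are affinely related:
# lambda_i = d - mu_{n+1-i} (order reverses) and eta_i = lambda_i / d.
assert np.allclose(lam, 2 - mu[::-1])
assert np.allclose(eta, lam / 2)
print("mu:", mu, "lambda:", lam, "eta:", eta)
```

For a non-regular graph the two assertions on the affine relations fail, which is precisely the situation the paper's bounds are designed to quantify.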