Graph Drawing in Spectral Layout∗

Maureen Gallagher, Colleen Tygh, John Urschel, Ludmil Zikatanov

Beginning: July 18, 2013; Today is: October 12, 2013

1 Introduction

Our research focuses on the use of spectral graph drawing techniques (see [5] and [3]) and other mathematical tools to create a two-dimensional (2D) representation of the graph of a sparse matrix. The actual graph drawing then amounts to displaying a particular arrangement of the vertices and edges of a given graph in two dimensions. We use a spectral layout that provides coordinates using the second and third eigenvectors of the associated graph Laplacian. To compute the eigenvectors and eigenvalues, we implement a subspace correction method proposed in [4] and further analyzed in [1].

1.1 Some graph drawing applications

Several important applications of graph drawing emerge from the analysis of networks, including those evident in social media sites. In particular, drawing graphs of networks can aid the following tasks:

• Exploration of data: Displaying nodes and ties in various layouts and attributing color, size, and other properties to nodes allows for better visualization and analysis of data.

• Analysis of collaboration graphs:
  – Vertices represent participants or people.
  – Edges join two participants if a collaborative relationship exists between them.
  – Edge weights denote positive and negative relationships.

• Analysis of sociograms: Digraphs that diagram the structures and patterns of group interactions facilitate the study of choices or preferences within a group.

With the rapid recent expansion of social networking, graph drawing is gaining a great deal of attention. Some of the currently available graph-drawing software includes the following:

• Wolfram Alpha: This online service can connect to Facebook and make charts of friend connections and relationships.

• Weighted Spectral Distribution: This MATLAB tool applies to undirected graphs.
∗These notes are based on the 2013 summer research program Computational Mathematics for Undergraduate Students in the Department of Mathematics, Penn State, University Park, PA 16802, USA; see http://sites.psu.edu/cmus2013/ for details.

• LTAS Technologies, Inc.: Among many other applications, this web identity search tool for Facebook finds the degree of separation between any two Facebook users and displays it in a graph.

2 Graph Theory: Preliminaries

We consider a general combinatorial graph G = (V, E), where

• V is the set of vertices: a set of discrete objects (usually identified with {1, ..., n});
• E is the set of edges: ordered or unordered pairs of vertices.

We will consider two different types of graphs: undirected graphs and directed graphs. In an undirected graph, for an edge (c, m) we always have (c, m) = (m, c), while in a directed graph (c, m) ≠ (m, c). In addition, multigraphs allow multiple edges between vertices, and loop graphs may contain an edge that connects a vertex to itself. If the graph is weighted, one associates a weight with every edge of the graph. Finally, a simple graph is an undirected graph that contains no loops or parallel edges. We will focus on weighted and unweighted simple graphs.

2.1 Characteristics of graphs

All graphs have the following characteristics:

• order: V(G) = |V| (the number of vertices);
• size: E(G) = |E| (the number of edges);
• degree of a vertex: the number of edges incident to the vertex (gives an idea of how directly connected it is);
• connectedness: a graph is connected if for any two vertices v_1, v_2 ∈ V there exists a path from v_1 to v_2; a path being a sequence of vertices v_1 = w_1, w_2, ..., w_k = v_2 such that (w_{j-1}, w_j) ∈ E for all j = 2, ..., k;
• partition of a graph: splitting V into two disjoint subsets, namely V = V_1 ∪ V_2 with V_1 ∩ V_2 = ∅. A graph is connected if for every partition there exists an edge connecting the two sets of vertices.
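The connectedness characteristic above can also be checked algorithmically. A minimal Python sketch (the function name and the edge-list representation are our own illustration, not part of the notes): the graph is connected exactly when a breadth-first search from any vertex reaches every vertex.

```python
from collections import deque

def is_connected(n, edges):
    """Decide connectedness of an undirected graph on vertices 0..n-1
    by breadth-first search from vertex 0: the graph is connected
    exactly when the search reaches every vertex."""
    adj = {v: [] for v in range(n)}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    seen = {0}
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n

print(is_connected(4, [(0, 1), (1, 2), (2, 3)]))  # True: a path on 4 vertices
print(is_connected(4, [(0, 1), (2, 3)]))          # False: two components
```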
2.2 Matrices and graphs

We will now introduce several matrix representations of graphs, such as the adjacency and graph Laplacian matrices. In this and the following sections, we shall consider simple, undirected graphs (possibly weighted).

• Adjacency matrix A ∈ ℝ^{n×n}, n = |V|, where A_{ij} equals the number of edges between vertex i and vertex j.

• Graph Laplacian matrix L = D − A, where D = diag(d(v_1), d(v_2), ..., d(v_n)) is the degree matrix of the graph and A is the adjacency matrix. Here, d(v_k) for a vertex v_k ∈ V is the number of edges incident with v_k.

Homework. Show that for a simple undirected graph, L_{ij} = −1 for i ≠ j with (i, j) ∈ E, and L_{ii} = d(i) (the degree of vertex i).

3 Graph drawing and eigenvalues

The graph Laplacian L of a simple graph is a symmetric matrix. If σ(L) is the set of eigenvalues of the symmetric matrix L, we can conclude that all eigenvalues of L are real, that is, σ(L) ⊂ ℝ. Moreover, since the row and column sums of a graph Laplacian are zero, we immediately obtain that L1 = 0, where 1 = (1, ..., 1)^T is the constant vector. In other words, 0 ∈ σ(L).

On ℝ^n we have an inner (or scalar) product

    (u, v) = u_1 v_1 + u_2 v_2 + ... + u_n v_n = Σ_{i=1}^n u_i v_i.   (1)

Using the inner product, with every matrix, such as L, we associate a bilinear form:

    (Lu, v) = Σ_{i=1}^n Σ_{j=1}^n L_{ij} u_j v_i,   for all u ∈ ℝ^n, v ∈ ℝ^n.

The fact that L1 = 0 implies that

    Σ_{j≠i} L_{ij} = −L_{ii},   for all i = 1, ..., n.

The sum on the left side of the above identity is over all j ∈ {1, ..., n} such that j ≠ i.

Homework. Given that L1 = 0, show that the above bilinear form can be written as:

    (Lu, v) = (1/2) Σ_{i=1}^n Σ_{j=1}^n (−L_{ij})(u_i − u_j)(v_i − v_j).

Further show that if all L_{ij} ≤ 0 for i ≠ j, then (Lu, u) ≥ 0 for all u ∈ ℝ^n. Such an L is called symmetric positive semidefinite.
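The definitions and the homework above can be checked numerically. A minimal NumPy sketch (the helper name `graph_laplacian` and the edge-list input are our own illustration): we assemble L = D − A for a small graph and verify that L1 = 0 and that L is positive semidefinite.

```python
import numpy as np

def graph_laplacian(n, edges):
    """Assemble the adjacency matrix A and the graph Laplacian L = D - A
    of a simple undirected graph on vertices 0..n-1."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    D = np.diag(A.sum(axis=1))            # degree matrix
    return A, D - A

# A 4-cycle: every vertex has degree 2.
A, L = graph_laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(L @ np.ones(4))                                 # zero vector: L1 = 0
print(bool(np.all(np.linalg.eigvalsh(L) >= -1e-12)))  # True: L is positive semidefinite
```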
The extreme eigenvalues of L are extreme values of the Rayleigh quotient: if σ(L) = {λ_1, λ_2, ..., λ_n} with λ_1 ≤ λ_2 ≤ ... ≤ λ_n, then

    λ_1 = min_{v ∈ ℝ^n} (Lv, v)/(v, v),   λ_n = max_{v ∈ ℝ^n} (Lv, v)/(v, v).

As we checked in the Homework, (Lv, v) ≥ 0 and (L1, 1) = 0. We conclude that λ_1 = 0 and the corresponding eigenvector is 1. For a connected, undirected graph with positive edge weights it can be shown that 0 is a simple eigenvalue, namely, 0 = λ_1 < λ_2 ≤ λ_3 ≤ ... ≤ λ_n. Furthermore, it can be shown that if (λ_2, v_2) and (λ_3, v_3) are eigenvalue-eigenvector pairs of L, then

    λ_2 = min_{v ∈ V_2} (Lv, v)/(v, v),   λ_3 = min_{v ∈ V_3} (Lv, v)/(v, v),   (2)

where V_2 is the subspace of vectors orthogonal to 1, and V_3 is the subspace of vectors orthogonal to both 1 and v_2. More precisely,

    V_2 = { v ∈ ℝ^n : (v, 1) = 0 },
    V_3 = { v ∈ ℝ^n : (v, 1) = 0, (v, v_2) = 0 }.

Homework. Show this by proving the min-max principle (you will find this in [2]).

The spectral layout graph drawing uses v_2 and v_3 as coordinates of the vertices to display the graph in two dimensions. In other words, the vertex i will have coordinates (v_{2,i}, v_{3,i}) ∈ ℝ^2, and an edge is drawn between i and j if (i, j) ∈ E. Thus, to utilize a spectral layout drawing, the problem is to find (λ_2, v_2) and (λ_3, v_3) which solve

    L v_2 = λ_2 v_2,   L v_3 = λ_3 v_3,   (v_2, 1) = (v_3, 1) = 0,   (v_2, v_3) = 0.

This is equivalent to minimizing the Rayleigh quotient, written as λ = min_{(v,1)=0} (Lv, v)/(v, v).

We would like to emphasize that, instead of the scalar product given in (1), in all of the Rayleigh quotients we use (Dv, v) in place of (v, v). In such a case we will be solving the so-called generalized eigenvalue problem

    L v_2 = λ_2 D v_2,   L v_3 = λ_3 D v_3,   (D v_2, 1) = (D v_3, 1) = 0,   (D v_2, v_3) = 0.   (3)

Indeed, it is easy to verify that all considerations below are valid with this scalar product.

Homework. Using the definition of a scalar product (see [2]), show that

    (Dv, w) = Σ_{i=1}^n d(i) v_i w_i

is indeed a scalar product between v and w.
For n = 2 (two dimensions), draw the unit sphere, i.e., the tips of all vectors v ∈ ℝ^2 satisfying (Dv, v) = 1, if: (a) d(1) = d(2) = 1; (b) d(1) = 1 and d(2) = 10.

3.1 Subspace correction method for the eigenvalue problem

We consider the subspace correction method for eigenvalue problems (see [4], [1]). We begin by splitting ℝ^n into the sum of subspaces,

    ℝ^n = W_1 + W_2 + ... + W_{n_c}.

For each of the subspaces W_k we assume

    W_k = Range(P_k),   P_k ∈ ℝ^{n × n_k},   n_k = dim W_k.

The algorithm visits each subspace and improves the current eigenvalue approximation from each of these subspaces. To explain this in more detail, let y be a given approximation to the eigenvector v_2, with (y, 1) = 0. We want to improve the approximation y by finding y_1 ∈ W_1 such that

    y_1 = argmin_{z ∈ W_1} (L(y + z), y + z) / (y + z, y + z).   (4)

The notation argmin means that y_1 is the argument z which minimizes the right side of the equation above. We now show how we can find y_1 by solving an (n_1 + 1) × (n_1 + 1) problem. We first compute the numerator on the right hand side of (4):

    (L(y + z), y + z) = (Ly + Lz, y + z) = (Ly, y) + (Ly, z) + (Lz, y) + (Lz, z).

We compute the denominator in a similar fashion.
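For a small graph, the generalized eigenvalue problem (3) can be solved directly with a dense solver, which gives the spectral layout coordinates immediately. A sketch using SciPy (the 5-cycle example is our own; for the large sparse matrices targeted by these notes one would use the subspace correction method instead):

```python
import numpy as np
from scipy.linalg import eigh

# Adjacency matrix, degree matrix, and Laplacian of a cycle on 5 vertices.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A

# Solve the generalized eigenvalue problem L v = lambda D v of (3);
# eigh returns eigenvalues in increasing order, so columns 1 and 2 of
# the eigenvector matrix are v_2 and v_3.
vals, vecs = eigh(L, D)
v2, v3 = vecs[:, 1], vecs[:, 2]

# Vertex i is drawn at (v2[i], v3[i]); for a cycle the layout is a
# regular pentagon, up to rotation and reflection.
coords = np.column_stack([v2, v3])
print(abs(vals[0]) < 1e-10)    # True: lambda_1 = 0 with eigenvector 1
```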
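The small problem derived above can be sketched in code: collecting y and a basis of W_1 into B = [y, P_1], minimizing the Rayleigh quotient over span(B) (which contains every y + z up to scaling, since the quotient is scale invariant) amounts to an (n_1 + 1) × (n_1 + 1) generalized eigenvalue problem. A hypothetical Python sketch, assuming y and the columns of P are already orthogonal to the constant vector; the function and variable names are our own:

```python
import numpy as np
from scipy.linalg import eigh

def correction_step(L, y, P):
    """Minimize the Rayleigh quotient (Lv, v)/(v, v) over the trial
    space span{y} + Range(P) by solving a small (n_1 + 1) x (n_1 + 1)
    generalized eigenvalue problem in the basis B = [y, P]."""
    B = np.column_stack([y, P])           # trial basis [y, P_1]
    small_L = B.T @ L @ B                 # (n_1 + 1) x (n_1 + 1)
    small_M = B.T @ B
    vals, vecs = eigh(small_L, small_M)   # eigenvalues in increasing order
    return B @ vecs[:, 0]                 # minimizer of the quotient

# Laplacian of the path graph on 4 vertices.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
one = np.ones(4)
rng = np.random.default_rng(0)
y = rng.standard_normal(4)
y -= (y @ one) / 4 * one                  # enforce (y, 1) = 0
P = rng.standard_normal((4, 2))
P -= np.outer(one, one @ P) / 4           # columns of P orthogonal to 1

rq = lambda v: (v @ L @ v) / (v @ v)
y_new = correction_step(L, y, P)
print(rq(y_new) <= rq(y))                 # the quotient does not increase
```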
