CME 305: Discrete Mathematics and Algorithms
Instructor: Professor Aaron Sidford ([email protected])
February 8, 2018

Lecture 10 - Spectral Graph Theory

In the last lecture we began our study of spectral graph theory, the study of connections between combinatorial properties of graphs and linear algebraic properties of the fundamental matrices associated with them. In this lecture we continue this study, making connections between the eigenvalues and eigenvectors of the Laplacian matrix associated with a simple, undirected, positively weighted graph and combinatorial properties of these graphs.

1 Recap

In the remainder of this lecture our focus will be on a simple, undirected, positively weighted graph $G = (V, E, w)$ with $w \in \mathbb{R}^{E}_{>0}$. As we noted, spectral graph theory for broader classes of matrices is possible, but this will provide a good taste of the field. Also note that while we might define things with $G$ explicit in the definition, e.g. $L(G)$ or $\deg_G$, we may drop the $G$ when it is clear from context.

Last class we introduced several fundamental matrices associated with a graph: the (weighted) adjacency matrix $A(G) \in \mathbb{R}^{V \times V}$, the (weighted) degree matrix $D(G) \in \mathbb{R}^{V \times V}$, and the Laplacian matrix $L(G) \in \mathbb{R}^{V \times V}$, each defined for all $i, j \in V$ by
\[
A(G)_{ij} \stackrel{\mathrm{def}}{=} \begin{cases} w_{\{i,j\}} & \{i,j\} \in E \\ 0 & \text{otherwise} \end{cases}, \qquad
D(G)_{ij} \stackrel{\mathrm{def}}{=} \begin{cases} \deg(i) & i = j \\ 0 & \text{otherwise} \end{cases}, \qquad
L(G)_{ij} \stackrel{\mathrm{def}}{=} \begin{cases} \deg(i) & i = j \\ -w_{\{i,j\}} & \{i,j\} \in E \\ 0 & \text{otherwise} \end{cases}.
\]

Also, as we pointed out, a matrix $M$ is the adjacency matrix of some such $G$ if and only if $M_{ij} = M_{ji} \geq 0$ for all $i, j \in V$ and $M_{ii} = 0$ for all $i \in V$, and a matrix $L$ is the Laplacian matrix of some such $G$ if and only if $L_{ij} = L_{ji} \leq 0$ for all $i \neq j$ and $L\vec{1} = \vec{0}$. Furthermore, each characterization gives a bijection between these matrices and simple, undirected, weighted graphs with positive edge weights.

2 Laplacian Quadratic Form

Now why do we focus on Laplacian matrices? As we will show, the quadratic form of the Laplacian has a number of nice properties.
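To make these definitions concrete, here is a minimal numerical sketch (assuming NumPy; the graph and its weights are an arbitrary example, not one from the notes):

```python
import numpy as np

# An arbitrary example graph on vertices {0, 1, 2, 3} with positive weights.
edges = {(0, 1): 2.0, (1, 2): 3.0, (0, 2): 1.0, (2, 3): 4.0}
n = 4

A = np.zeros((n, n))
for (i, j), w in edges.items():
    A[i, j] = A[j, i] = w          # adjacency: symmetric, zero diagonal

D = np.diag(A.sum(axis=1))         # degree matrix: weighted degrees on diagonal
L = D - A                          # Laplacian, matching the entrywise definition

# Row sums of L vanish, i.e. L @ 1 = 0, as the characterization requires.
print(np.allclose(L @ np.ones(n), 0.0))  # → True
```

Note that the off-diagonal entries of `L` are exactly the negated edge weights, so the characterization in the recap can be read off directly from the array.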
2.1 Laplacians are PSD

One nice property of Laplacians is that they are positive semidefinite (PSD).

Definition 1 (Positive Semidefinite (PSD)). A symmetric matrix $M = M^\top$ is positive semidefinite (PSD) if and only if for all $x$ we have $x^\top M x \geq 0$.

We refer to $x^\top M x$ as the quadratic form of $M$, and this definition says a symmetric matrix is PSD if and only if its quadratic form is non-negative. As we will see, this is equivalent to saying that all eigenvalues of $M$ are non-negative.^1

Here we will prove that $L(G)$ is PSD, proving several interesting properties of Laplacians and their quadratic form along the way. We begin by defining the sum of two graphs as follows:

Definition 2 (Graph Sums). For simple, undirected, weighted graphs $G_1 = (V, E_1, w_1)$ and $G_2 = (V, E_2, w_2)$ we define the graph sum $G_1 + G_2 \stackrel{\mathrm{def}}{=} (V, E_+, w_+)$ by $E_+ \stackrel{\mathrm{def}}{=} E_1 \cup E_2$ and, for $\{i,j\} \in E_+$,
\[
w_+(i,j) = \begin{cases} w_1(i,j) + w_2(i,j) & \{i,j\} \in E_1 \cap E_2 \\ w_1(i,j) & \{i,j\} \in E_1 \setminus E_2 \\ w_2(i,j) & \{i,j\} \in E_2 \setminus E_1 \\ 0 & \text{otherwise} \end{cases}.
\]

In other words, the sum of two graphs on the same vertex set is the graph on those vertices whose edge set is the union of their edge sets and whose edge weights are the sums of the weights in each graph (where an edge absent from one graph is summed with weight 0). One nice property of this definition is that it directly corresponds to summing graph Laplacians.

^1 These lecture notes are a work in progress and there may be typos, awkward language, omitted proof details, etc. Moreover, content from lectures might be missing. These notes are intended to converge to a polished superset of the material covered in class, so if you would like anything clarified, please do not hesitate to post to Piazza and ask.
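This correspondence is easy to check numerically. A small sketch, assuming NumPy; the two example graphs are arbitrary:

```python
import numpy as np

def laplacian(n, edges):
    """Laplacian of a simple, undirected graph given as {(i, j): weight}."""
    L = np.zeros((n, n))
    for (i, j), w in edges.items():
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return L

def graph_sum(e1, e2):
    """Graph sum: union the edge sets, adding weights on shared edges."""
    e = dict(e1)
    for key, w in e2.items():
        e[key] = e.get(key, 0.0) + w
    return e

# Arbitrary example graphs on the same vertex set {0, 1, 2}.
G1 = {(0, 1): 1.0, (1, 2): 2.0}
G2 = {(1, 2): 3.0, (0, 2): 5.0}

# L(G1 + G2) = L(G1) + L(G2)
print(np.allclose(laplacian(3, graph_sum(G1, G2)),
                  laplacian(3, G1) + laplacian(3, G2)))  # → True
```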
Indeed, it is easy to check that
\[ L(G_1 + G_2) = L(G_1) + L(G_2). \]
Consequently, if for $e = \{i,j\} \in E$ we define $L(e)$ to be the Laplacian of the graph with vertex set $V$ and a single edge of weight 1 between $i$ and $j$, then we see that
\[ L(G) = \sum_{e \in E} w_e L(e) \]
and therefore
\[ x^\top L x = \sum_{e \in E} w_e \cdot x^\top L(e) x. \]
Thus, to reason about the quadratic form of $L$ it suffices to reason about the quadratic form of a single edge.

Now $L(e)$ for $e = \{i,j\}$ simply has a 1 in coordinates $ii$ and $jj$ and a $-1$ in coordinates $ij$ and $ji$; all other coordinates are 0. Consequently, if we let $\vec{1}_i$ be the vector that is 0 in all coordinates except coordinate $i$, where it is 1, i.e.
\[ \vec{1}_i(j) = \begin{cases} 1 & i = j \\ 0 & \text{otherwise} \end{cases}, \]
then we see that
\[ L(e) = (\vec{1}_i - \vec{1}_j)(\vec{1}_i - \vec{1}_j)^\top = \vec{\delta}_{ij}\vec{\delta}_{ij}^\top, \]
where we let $\vec{\delta}_{ij} \stackrel{\mathrm{def}}{=} \vec{1}_i - \vec{1}_j$ for shorthand. Consequently, we have that
\[ x^\top L(e) x = (x_i - x_j)^2 \geq 0 \]
and
\[ x^\top L(G) x = \sum_{\{i,j\} \in E} w_{\{i,j\}} (x_i - x_j)^2 \geq 0, \]
and therefore $L(G)$ is PSD as claimed.

2.2 Laplacian Quadratic Form

Note that the quadratic form of the Laplacian has a nice physical interpretation. Imagine each edge $\{i,j\}$ is a spring with stiffness $w_{\{i,j\}}$ and that the vertices designate which springs are linked together. Given a vector $x \in \mathbb{R}^V$, imagine placing one end of spring $\{i,j\}$ at position $x_i$ on a line and the other end at position $x_j$, for each edge. If we do this, then $x^\top L(G) x = \sum_{\{i,j\} \in E} w_{\{i,j\}}(x_i - x_j)^2$ is the total potential energy of the springs. In other words, $x^\top L(G) x$ corresponds to the energy required to pin each connection point $i \in V$ at position $x_i$ on the line, i.e. the energy to lay out the graph along a line.

There are several nice implications of this analysis. First, suppose that for some $S \subseteq V$ we have $x = \vec{1}_S$, the indicator vector of the set $S$, where for all $i \in V$
\[ \vec{1}_S(i) = \begin{cases} 1 & i \in S \\ 0 & i \notin S \end{cases}. \]
What is $x^\top L x$ in this case? We have
\[ x^\top L x = \vec{1}_S^\top L \vec{1}_S = \sum_{\{i,j\} \in E} w_{\{i,j\}}(x_i - x_j)^2 = \sum_{\{i,j\} \in \partial(S)} w_{\{i,j\}} = w(\partial(S)); \]
in other words, it is the total weight of the edges cut by $S$.
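Both identities, the edge-by-edge expansion of the quadratic form and its cut-weight specialization, can be checked numerically. A sketch assuming NumPy, with an arbitrary example graph:

```python
import numpy as np

# Arbitrary example graph on vertices {0, 1, 2, 3}.
edges = {(0, 1): 2.0, (1, 2): 3.0, (0, 2): 1.0, (2, 3): 4.0}
n = 4
L = np.zeros((n, n))
for (i, j), w in edges.items():
    L[i, i] += w
    L[j, j] += w
    L[i, j] -= w
    L[j, i] -= w

# x^T L x equals the weighted sum of squared differences across edges.
x = np.array([0.5, -1.0, 2.0, 0.0])
by_edges = sum(w * (x[i] - x[j]) ** 2 for (i, j), w in edges.items())
print(np.isclose(x @ L @ x, by_edges))  # → True

# For an indicator vector 1_S, the quadratic form is the cut weight w(∂(S)).
S = {0, 1}
one_S = np.array([1.0 if v in S else 0.0 for v in range(n)])
cut_weight = sum(w for (i, j), w in edges.items() if (i in S) != (j in S))
print(np.isclose(one_S @ L @ one_S, cut_weight))  # → True
```

Here the cut $\partial(S)$ for $S = \{0, 1\}$ consists of edges $\{1,2\}$ and $\{0,2\}$, of total weight 4.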
Consequently, the quadratic form of $L$ specialized to boolean vectors, i.e. $x \in \{0,1\}^V$, gives the weights of all cuts in the graph.

2.3 Incidence Matrix Decomposition

Another nice implication of our analysis in deriving the quadratic form of a Laplacian is that it gives us another way of decomposing Laplacian matrices. Let $e_1, \ldots, e_m$ denote the edges of the graph and suppose we pick an arbitrary orientation of the edges, so that $e_i = (u_i, v_i)$ for all $i \in [m]$. Now, we define the incidence matrix of $G$ to be $B(G) \in \mathbb{R}^{E \times V}$ where
\[
B(G) = \begin{pmatrix} \vec{\delta}_{e_1}^\top \\ \vec{\delta}_{e_2}^\top \\ \vdots \\ \vec{\delta}_{e_m}^\top \end{pmatrix} = \begin{pmatrix} (\vec{1}_{u_1} - \vec{1}_{v_1})^\top \\ (\vec{1}_{u_2} - \vec{1}_{v_2})^\top \\ \vdots \\ (\vec{1}_{u_m} - \vec{1}_{v_m})^\top \end{pmatrix},
\]
and we define the weight matrix of $G$ to be $W(G) \in \mathbb{R}^{E \times E}$, the diagonal matrix associated with $w$, i.e.
\[
W(G)_{e_1 e_2} = \begin{cases} w_{e_1} & e_1 = e_2 \\ 0 & \text{otherwise} \end{cases}.
\]
Now, with these definitions established, we have
\[ B^\top W B = \sum_{e \in E} w_e \vec{\delta}_e \vec{\delta}_e^\top = \sum_{e \in E} w_e L(e) = L(G). \]
Note that if we define $W^{1/2}$ to be $W$ with the square root taken of each diagonal entry, then we have
\[ L = (W^{1/2} B)^\top (W^{1/2} B) \]
and therefore $L$ is clearly PSD, as $x^\top L x = \|W^{1/2} B x\|_2^2$. Consequently, we can prove that $L$ is PSD by efficiently providing an explicit factorization $L = M^\top M$ for some matrix $M$. It can be shown that all PSD matrices have such a factorization; however, that for Laplacians it can be computed so easily is a particularly nice property.

3 Linear Algebra Review

Now that we have a basic understanding of the Laplacian matrix and its quadratic form, our next step is to take a closer look at the eigenvectors and eigenvalues of this matrix. Before we do so, we review some linear algebra regarding eigenvalues and eigenvectors of symmetric matrices. First, recall the definition of an eigenvector and an eigenvalue.

Definition 3 (Eigenvalues and Eigenvectors). A vector $v \neq 0$ is an eigenvector of a matrix $M$ with eigenvalue $\lambda$ if and only if $Mv = \lambda v$.
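As an illustration of Definition 3, and of the earlier claim that PSD-ness corresponds to non-negative eigenvalues, one can build a Laplacian via the incidence factorization $L = B^\top W B$ and inspect its spectrum. A sketch assuming NumPy; the oriented edges are an arbitrary example:

```python
import numpy as np

# Arbitrary oriented edges (u, v) with weights, on vertices {0, 1, 2, 3}.
oriented = [(0, 1, 2.0), (1, 2, 3.0), (0, 2, 1.0), (2, 3, 4.0)]
n, m = 4, len(oriented)

B = np.zeros((m, n))               # incidence matrix: row e is (1_u - 1_v)^T
W = np.zeros((m, m))               # diagonal weight matrix
for e, (u, v, w) in enumerate(oriented):
    B[e, u], B[e, v] = 1.0, -1.0
    W[e, e] = w

L = B.T @ W @ B                    # the factorization L = B^T W B

# The quadratic form is a squared norm: x^T L x = ||W^{1/2} B x||^2.
x = np.array([1.0, -2.0, 0.5, 3.0])
print(np.isclose(x @ L @ x, np.linalg.norm(np.sqrt(W) @ B @ x) ** 2))  # → True

# Each (eigenvalue, eigenvector) pair satisfies L v = lambda v (Definition 3),
# and since L is PSD every eigenvalue is non-negative.
lams, V = np.linalg.eigh(L)
print(np.allclose(L @ V, V @ np.diag(lams)))  # → True
print(bool(np.all(lams >= -1e-9)))            # → True
```

Since $L\vec{1} = \vec{0}$, the all-ones vector is an eigenvector with eigenvalue 0, so the smallest eigenvalue returned is (numerically) zero.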
Next, we recall some basic properties of eigenvectors of symmetric matrices.

Lemma 4. Let $M \in \mathbb{R}^{n \times n}$ be a symmetric matrix and let $v_1, v_2 \in \mathbb{R}^n$ be eigenvectors of $M$ with eigenvalues $\lambda_1$ and $\lambda_2$ respectively. If $\lambda_1 \neq \lambda_2$ then $v_1 \perp v_2$, and if $\lambda_1 = \lambda_2$ then for all $\alpha, \beta \in \mathbb{R}$ we have that $\alpha v_1 + \beta v_2$ is an eigenvector of $M$ with eigenvalue $\lambda_1$.

Proof. Note that $v_1^\top M v_2 = \lambda_2 v_1^\top v_2$ by assumption. Furthermore, since $M$ is symmetric we have $M = M^\top$ and thus $v_1^\top M v_2 = (M v_1)^\top v_2 = \lambda_1 v_1^\top v_2$.
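Lemma 4 is easy to see in action numerically. A sketch assuming NumPy; the symmetric matrix below is an arbitrary example chosen to have a repeated eigenvalue:

```python
import numpy as np

# An arbitrary symmetric matrix with eigenvalues 1, 3, 3 (3 is repeated).
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

lams, V = np.linalg.eigh(M)   # eigenvalues ascending; columns are eigenvectors

# Eigenvectors with distinct eigenvalues are orthogonal.
print(np.isclose(V[:, 0] @ V[:, 2], 0.0))  # → True  (eigenvalues 1 and 3)

# A combination of eigenvectors sharing an eigenvalue is again an eigenvector.
u = 2.0 * V[:, 1] + 5.0 * V[:, 2]          # both columns have eigenvalue 3
print(np.allclose(M @ u, 3.0 * u))         # → True
```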