Chapter 3: Spectral graph theory and random walks on graphs

Algebraic graph theory is a major area within graph theory. One of its main themes comes from the following question: what do matrices and linear algebra tell us about graphs? One of the most useful invariants of a matrix in linear algebra is its set of eigenvalues and eigenspaces. This is also true in graph theory, and this aspect of graph theory is known as spectral graph theory.

Given a graph $G$, the most obvious matrix to look at is its adjacency matrix $A$; however, there are others. Two other common choices are the Laplacian matrix, motivated by differential geometry, and what we will call the transition matrix, motivated by dynamical systems, specifically random walks on graphs. The Laplacian and transition matrices are closely related (the eigenvalues of one determine the eigenvalues of the other), and which to look at just depends upon one's point of view. We will first examine some aspects of the adjacency matrix, and then discuss random walks on graphs and transition matrices.

On one hand, eigenvalues can be used to measure how good the network flow is and to give bounds on the diameter of a graph. This will help us answer some of the questions we raised in the first chapter about communication and transportation networks. On the other hand, eigenvectors will tell us more precise information about the flow of a network. One application is using eigenvectors to give a different kind of centrality ranking, and we will use this idea when we explain Google PageRank.

Here are some other references on these topics.

• Algebraic Graph Theory, by Norman Biggs.
• Algebraic Graph Theory, by Chris Godsil and Gordon Royle.
• Modern Graph Theory, by Béla Bollobás.
• Spectra of Graphs, by Andries Brouwer and Willem Haemers.
• Spectral Graph Theory, by Fan Chung.

The first two books are "classical graph theory" books in the sense that they do not discuss random walks on graphs, and they cover more than just spectral theory. Bollobás's book covers many topics, including some spectral theory and random walks on graphs (and random graphs). The latter two books focus on spectral theory. Brouwer–Haemers cover the adjacency and Laplacian spectra but do not really discuss random walks, whereas Chung's book discusses random walks but focuses entirely on the (normalized) Laplacian matrix.

3.1 A review of some linear algebra

Before we get started on applying some linear algebra to graph theory, we will review some standard facts from linear algebra. This is just meant to be a quick reminder of the relevant theory, as well as a way to fix some notation, but you should refer to a linear algebra text to make sense of what I'm saying.

Let $M_n(\mathbb{R})$ denote the set of real $n \times n$ matrices, and $M_n(\mathbb{C})$ the set of complex $n \times n$ matrices. In what follows, we mainly work with real matrices, though most of the theory is the same for complex matrices.

Let $A \in M_n(\mathbb{R})$. We may view $A$ as a linear operator
$$A : \mathbb{R}^n \to \mathbb{R}^n, \qquad v \mapsto Av,$$
where we view $v \in \mathbb{R}^n$ as an $n \times 1$ matrix, i.e., a column vector. We may sometimes write a column vector $v = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$ as $v = (x_1, \ldots, x_n)$ out of typographical convenience.

Let $A^t$ denote the transpose of $A$. Then $(A^t)^t = A$. For $A, B \in M_n(\mathbb{R})$, we have $(AB)^t = B^t A^t$. Recall that $AB$ does not usually equal $BA$, though it happens from time to time. Denote by $I = I_n$ the $n \times n$ identity matrix. This means $AI = IA = A$ for any $A \in M_n(\mathbb{R})$.
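As a quick numerical illustration of these basic facts, here is a minimal sketch in Python with NumPy (the notes themselves do not assume any software; the adjacency matrix of a 4-cycle is an arbitrary choice of example):

```python
import numpy as np

# Adjacency matrix of a 4-cycle, chosen only as a concrete example
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
B = np.arange(16).reshape(4, 4)    # an arbitrary second matrix
I = np.eye(4)                      # the identity matrix I_4

# (AB)^t = B^t A^t, while AB and BA generally differ
assert np.allclose((A @ B).T, B.T @ A.T)
print("AB == BA?", np.allclose(A @ B, B @ A))   # False for this example

# AI = IA = A
assert np.allclose(A @ I, A) and np.allclose(I @ A, A)
```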
We use $0$ for the number $0$ as well as for zero matrices. Denote by $\mathrm{diag}(d_1, \ldots, d_n)$ the diagonal matrix $A = (a_{ij})$ with $a_{ii} = d_i$ and $a_{ij} = 0$ if $i \ne j$. Note $I = \mathrm{diag}(1, 1, \ldots, 1)$.

Recall $A$ is invertible if there is a matrix $B \in M_n(\mathbb{R})$ such that $AB = I$. If there is such a matrix $B$, it must be unique, and we say $B$ is the inverse of $A$ and write $B = A^{-1}$. It is true that $AA^{-1} = A^{-1}A = I$, i.e., the inverse of $A^{-1}$ is just $A$, i.e., $(A^{-1})^{-1} = A$. For $A, B \in M_n(\mathbb{R})$ both invertible, we have that $AB$ is also invertible and $(AB)^{-1} = B^{-1}A^{-1}$.

There is a function $\det : M_n(\mathbb{R}) \to \mathbb{R}$ called the determinant. It satisfies $\det(AB) = \det(A)\det(B)$, and $A$ is invertible if and only if $\det A \ne 0$. We can define it inductively on $n$ as follows. For $n = 1$, put $\det[a] = a$. Now let $A = (a_{ij}) \in M_n(\mathbb{R})$ with $n > 1$. Let $A_{ij}$ be the $ij$ cofactor matrix, i.e., the matrix obtained by deleting the $i$-th row and $j$-th column. Then
$$\det(A) = a_{11}\det A_{11} - a_{12}\det A_{12} + \cdots + (-1)^{n-1} a_{1n}\det A_{1n}. \tag{3.1}$$
Hence the determinant can be computed by using a cofactor expansion along the top row. It can be computed similarly with a cofactor expansion along any row, and, since $\det(A) = \det(A^t)$, it can also be computed using a cofactor expansion along any column. (Just remember to be careful of signs: the first sign is $-1$ if you start along an even numbered row or column.) For $n = 2$, we have
$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc.$$

Example 3.1.1. Let $A = \begin{pmatrix} 2 & -1 & 0 \\ 0 & 2 & -1 \\ -1 & 0 & 2 \end{pmatrix}$. Then
$$\det A = 2\det\begin{pmatrix} 2 & -1 \\ 0 & 2 \end{pmatrix} - (-1)\det\begin{pmatrix} 0 & -1 \\ -1 & 2 \end{pmatrix} + 0\det\begin{pmatrix} 0 & 2 \\ -1 & 0 \end{pmatrix} = 2 \cdot 4 + 1 \cdot (-1) + 0 = 7.$$

For larger $n$, determinants are unpleasant to compute by hand in general, but some special cases can be worked out. For instance, if $D = \mathrm{diag}(d_1, \ldots, d_n)$ is diagonal, then $\det D = d_1 d_2 \cdots d_n$. In particular, $\det I = 1$, and $\det kI_n = k^n$ for a scalar $k \in \mathbb{R}$. Also note that for invertible $A$, since $\det(A)\det(A^{-1}) = \det(AA^{-1}) = \det I = 1$, we have $\det(A^{-1}) = (\det A)^{-1}$. If $S$ is invertible, then $\det(SAS^{-1}) = \det S \det A \det S^{-1} = \det A$. Hence similar matrices have the same determinant. Thus it makes sense to define the determinant $\det T$ of a linear transformation (i.e., this determinant does not depend upon a choice of basis).

Given any linear transformation $T : \mathbb{R}^n \to \mathbb{R}^n$ and any basis $\mathcal{B} = \{v_1, \ldots, v_n\}$, we can associate a matrix $[T]_{\mathcal{B}}$ in $M_n(\mathbb{R})$ such that if
$$T(x_1 v_1 + \cdots + x_n v_n) = y_1 v_1 + \cdots + y_n v_n$$
then
$$[T]_{\mathcal{B}} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}.$$
In particular, let $\mathcal{B}_0 = \{e_1, \ldots, e_n\}$ denote the standard basis for $\mathbb{R}^n$, i.e., $e_i$ is the column vector with a $1$ in the $i$-th position and $0$'s elsewhere. Then, if $Te_i = u_i$, the $i$-th column of $[T]_{\mathcal{B}_0}$ is $u_i$, i.e., $[T]_{\mathcal{B}_0} = [u_1 \,|\, u_2 \,|\, \cdots \,|\, u_n]$.

Given any bases $\mathcal{B} = \{v_1, \ldots, v_n\}$ and $\mathcal{B}' = \{v_1', \ldots, v_n'\}$, there is an invertible change of basis matrix $S$, determined by requiring $Sv_i = v_i'$ for all $i$, such that
$$[T]_{\mathcal{B}'} = S[T]_{\mathcal{B}} S^{-1}. \tag{3.2}$$
For any matrices $A, B \in M_n(\mathbb{R})$, we say they are similar if $A = SBS^{-1}$ for an invertible $S \in M_n(\mathbb{R})$. This means $A$ and $B$ can be viewed as representing the same linear transformation $T$, just with respect to different bases.

We say a nonzero vector $v \in \mathbb{R}^n$ is a (real) eigenvector for $A$ with (real) eigenvalue $\lambda \in \mathbb{R}$ if $Av = \lambda v$. If $\lambda$ is an eigenvalue for $A$, the set of $v \in \mathbb{R}^n$ (including the zero vector) for which $Av = \lambda v$ is called the $\lambda$-eigenspace for $A$, and it is a linear subspace of $\mathbb{R}^n$. Even if $A \in M_n(\mathbb{R})$ has no real eigenvalues/eigenvectors, it may have complex ones, so we will sometimes consider those.
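To make the determinant and eigenvector definitions concrete, here is a minimal NumPy sketch (purely illustrative) that checks the determinant from Example 3.1.1 and verifies the equation $Av = \lambda v$ numerically:

```python
import numpy as np

# The matrix from Example 3.1.1
A = np.array([[ 2., -1.,  0.],
              [ 0.,  2., -1.],
              [-1.,  0.,  2.]])

# det A should be 7, matching the cofactor expansion above
print(np.linalg.det(A))          # approximately 7.0

# Eigenvalues and eigenvectors: the columns of V are eigenvectors
eigvals, V = np.linalg.eig(A)
for lam, v in zip(eigvals, V.T):
    # Each pair satisfies A v = lambda v (up to floating-point error);
    # note that two of the eigenvalues of this matrix are complex.
    assert np.allclose(A @ v, lam * v)
```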
The dimension of the $\lambda$-eigenspace is called the geometric multiplicity of $\lambda$.

Eigenvalues and eigenvectors are crucial to understanding the geometry of a linear transformation, as $A$ having real eigenvalue $\lambda$ means $A$ simply acts as scaling by $\lambda$ on the $\lambda$-eigenspace. Any complex eigenvalues must occur in pairs $re^{\pm i\theta}$, and such a pair of complex eigenvalues means that $A$ acts by scaling by $r$ together with rotation by $\theta$ on an appropriate 2-dimensional subspace.

Eigenvalues can be computed as follows. If $Av = \lambda v = \lambda I v$ for a nonzero $v$, this means $\lambda I - A$ is not invertible, so it has determinant $0$. The characteristic polynomial of $A$ is
$$p_A(\lambda) = \det(\lambda I - A). \tag{3.3}$$
This is a polynomial of degree $n$ in $\lambda$, and its roots are the eigenvalues. By the fundamental theorem of algebra, over $\mathbb{C}$ it always factors into linear terms
$$p_A(\lambda) = (\lambda - \lambda_1)^{m_1} (\lambda - \lambda_2)^{m_2} \cdots (\lambda - \lambda_k)^{m_k}. \tag{3.4}$$
Here $\lambda_1, \ldots, \lambda_k$ denote the distinct (possibly complex) eigenvalues. The number $m_i$ is called the (algebraic) multiplicity of $\lambda_i$. It is always greater than or equal to the (complex) dimension of the (complex) $\lambda_i$-eigenspace (the geometric multiplicity).
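A final illustrative NumPy sketch: the roots of the characteristic polynomial recover the eigenvalues, and a matrix can have algebraic multiplicity strictly larger than geometric multiplicity (the $2 \times 2$ shear matrix below is a standard example, chosen here only for illustration):

```python
import numpy as np

# Characteristic polynomial p_A(lambda) = det(lambda I - A):
# np.poly(A) returns its coefficients, highest degree first.
A = np.array([[ 2., -1.,  0.],
              [ 0.,  2., -1.],
              [-1.,  0.,  2.]])
coeffs = np.poly(A)
print(np.roots(coeffs))          # same values as the eigenvalues below
print(np.linalg.eigvals(A))

# Algebraic vs. geometric multiplicity: this shear matrix has
# eigenvalue 1 with algebraic multiplicity 2, but its 1-eigenspace
# is only 1-dimensional.
S = np.array([[1., 1.],
              [0., 1.]])
print(np.poly(S))                            # [1, -2, 1], i.e. (lambda - 1)^2
rank = np.linalg.matrix_rank(S - np.eye(2))
print(2 - rank)                              # dimension of the 1-eigenspace: 1
```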