Graph Spectra and Signal Processing on Graphs

Aristotle University of Thessaloniki

Karalias Nikolaos

October 2015

Acknowledgements

I want to thank my advisor, Professor Ioannis Pitas, for his patience and suggestions throughout the course of this project. I would also like to thank my parents and my brother for the support and understanding they showed.

Contents

1 Introduction
2 Graph Theory
3 Linear Algebra and Discrete Transforms
  3.1 Linear Algebra
  3.2 Discrete Transforms
  3.3 Circulant Matrices
4 Spectral Graph Theory
  4.1 Basic Properties
  4.2 Graph Spectra
    4.2.1 Circulant Graphs
    4.2.2 The Path Graph
    4.2.3 The Hypercube Graph
    4.2.4 The Grid Graph
  4.3 Additional Remarks
5 Digital Signal Processing on Graphs
  5.1 The Graph Fourier Transform and Graph Filtering
    5.1.1 Graph Products
    5.1.2 The Bilateral Graph Filter
  5.2 Experiments
    5.2.1 Graph Image Filtering
    5.2.2 Spectral clustering
    5.2.3 Graph Filtering
    5.2.4 Concluding remarks
Appendices
A Eigenvalue and Eigenvector plots

Abstract

We analyze the spectra of some basic graphs and explore their relationship with known discrete transforms. We then investigate the field of signal processing on graphs, which has emerged in recent years, and conduct graph filtering experiments on specific graph topologies.

Chapter 1

Introduction

Graphs are a form of representing data and their structure. Vertices of a graph correspond to data points, while edges describe the relationships between them. Weighted graphs are often a natural and intuitive description for data in various applications; there, the edge weights are used to represent some notion of (dis)similarity. The increasing need to model and analyze complex networks and large datasets has inevitably led to a drastic boost in the popularity of graph-theoretic methods. [2] Apart from modern applications in social media and networking [13] [14] [15], graphs have also been useful in a variety of other areas, ranging from image processing [16] and graphics [17] to biology [18]. Spectral methods have often been at the core of those developments in a plethora of fields. [27] [19] [20] [41]

The popularity of spectral methods can be largely attributed to their relationship with linear algebra, which lends itself very well to computation. Linear algebra algorithms are often suitable for parallel distributed processing or vectorization, which makes them even more appealing in the face of the ever-increasing demand to deal with big volumes of data.

More recently, an interesting subject of study has been explored, namely the area of signal processing on graphs. [21] [22] There, the vertices of the graph index a signal, which is then studied using tools from spectral graph theory. As we will see later, the core concepts of signal processing, like the Fourier transform, emerge naturally in this framework. This approach can be successfully utilized for image processing, where various filters are implemented in order to enhance certain aspects of an image (denoising, edge detection) [32] [31], as well as for big data analysis [29].

Here, we will first introduce the basic concepts from the relevant areas of linear algebra and graph theory, then move on to analyze some common graph spectra, and finally take a look at signal processing on graphs and some of its applications.

Chapter 2

Graph Theory

A graph G = {V,E} consists of a vertex set V (G) and an edge set E(G) where an edge is an unordered pair of distinct vertices of G. Let (x, y) denote an edge. The vertices x and y are called adjacent, which we denote by x ∼ y. A vertex is incident with an edge if it is one of the two vertices of the edge.

Definition 1 Two graphs G1, G2 are equal if and only if they have the same vertex set and the same edge set.

Definition 2 Two graphs G1 and G2 are isomorphic if there is a bijection φ from V(G1) to V(G2) such that x ∼ y in G1 if and only if φ(x) ∼ φ(y) in G2. We say that φ is an isomorphism from G1 to G2.

Relabelling the vertex set of a graph G is an isomorphism.

Definition 3 A directed graph G has an edge set whose elements are ordered pairs of distinct vertices.

Definition 4 The degree of a vertex in a graph G is the number of edges incident on that vertex.

Next, we state some core definitions that are going to be useful for the spectral analysis of the graphs.

Definition 5 The adjacency matrix A_{N×N}(G) of a graph G is the integer matrix with rows and columns indexed by the vertices of G, such that the ij-entry of A(G) is equal to the number of arcs from i to j (which is 0 or 1).

Theorem 1 Let G be a graph with adjacency matrix A(G). The number of walks of length r from u to v in G is (A^r)_{uv}.

Definition 6 The degree matrix D(G) of a graph G is the diagonal matrix where di is the degree of the vertex in the ith position.

Definition 7 The Laplacian matrix L(G) of a graph G is the N × N matrix indexed by the vertices of G, defined by L = D − A.

Definition 8 The incidence matrix of a graph G, denoted by B(G), is the matrix of binary entries with rows and columns indexed by the vertices and the edges of G respectively (some authors define it so that the rows correspond to edges and the columns to vertices). The ij-th entry of B(G) is 1 if and only if vertex i is incident with edge j.

Definition 9 The normalized Laplacian matrix is L̃ = D^{−1/2} L D^{−1/2}.

Definition 10 A k-regular graph is one whose vertices all have the same degree k.

Common examples of k-regular graphs are the cube, the ring, etc.

Definition 11 A complete graph Kn on n vertices is a graph in which every pair of vertices is connected by an edge.

Definition 12 Let G1, G2, ..., GN be graphs. Then the Cartesian Graph Product is the graph

G1 □ G2 □ ··· □ GN = □_{i=1}^{N} Gi    (2.1)

with vertex set {(x1, x2, ..., xN) | xi ∈ V(Gi)}, for which two vertices (x1, ..., xN) and (y1, ..., yN) are adjacent whenever xiyi ∈ E(Gi) for exactly one index 1 ≤ i ≤ N and xj = yj for each index j ≠ i.

The Cartesian graph product is commutative in the sense that G □ H ≅ H □ G, i.e., the two products are isomorphic. The Cartesian graph product is also associative.

Definition 13 Let G1, G2, ..., GN be graphs. Then the Strong Graph Product is the graph

G1 ⊠ G2 ⊠ ··· ⊠ GN = ⊠_{i=1}^{N} Gi    (2.2)

with vertex set {(x1, x2, ..., xN) | xi ∈ V(Gi)}, for which two distinct vertices (x1, ..., xN) and (y1, ..., yN) are adjacent whenever xiyi ∈ E(Gi) or xi = yi for each 1 ≤ i ≤ N.

Definition 14 Let G1, G2, ..., GN be graphs. Then the Direct Graph Product is the graph

G1 × G2 × ··· × GN = ×_{i=1}^{N} Gi    (2.3)

with vertex set {(x1, x2, ..., xN) | xi ∈ V(Gi)}, for which two vertices (x1, ..., xN) and (y1, ..., yN) are adjacent whenever xiyi ∈ E(Gi) for each 1 ≤ i ≤ N.

Figure 2.1: The P2 path graph.

Figure 2.2: The C3 cycle graph.

For a graph G1 on N vertices and G2 on M vertices, the adjacency matrix of the direct product will be

A(G1 × G2) = A(G1) ⊗ A(G2),    (2.4)

the adjacency matrix of the Cartesian product will be

A(G1 □ G2) = (A(G1) ⊗ IM) + (IN ⊗ A(G2)),    (2.5)

and the adjacency matrix of the strong product will be

A(G1 ⊠ G2) = A(G1) ⊗ A(G2) + (A(G1) ⊗ IM) + (IN ⊗ A(G2)).    (2.6) [10]

For example, consider the following graphs: the path graph on two vertices, P2, and the cycle graph on three vertices, C3. Their graph products are shown in the figures below.

Figure 2.3: The Cartesian product G = C3 □ P2

Figure 2.4: The strong product G = C3 ⊠ P2

Figure 2.5: The direct product G = C3 × P2
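The three product adjacency matrices can be sanity-checked numerically. The following is a minimal Matlab sketch (the variable names are our own) assembling (2.4)-(2.6) for C3 and P2:

    % Sketch: adjacency matrices of the products of C3 and P2, via (2.4)-(2.6).
    A1 = [0 1 1; 1 0 1; 1 1 0];   % C3 cycle graph
    A2 = [0 1; 1 0];              % P2 path graph
    N = size(A1,1); M = size(A2,1);
    Adirect    = kron(A1, A2);                          % direct product (2.4)
    Acartesian = kron(A1, eye(M)) + kron(eye(N), A2);   % Cartesian product (2.5)
    Astrong    = Adirect + Acartesian;                  % strong product (2.6)

The resulting 6 × 6 matrices are exactly the adjacency matrices of the graphs in Figures 2.3-2.5, up to the chosen vertex ordering.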

Chapter 3

Linear Algebra and Discrete Transforms

3.1 Linear Algebra

Here we are going to present some of the required tools and concepts from linear algebra and the theory of discrete transforms that are going to be useful later on.

Definition 15 A matrix A is symmetric when A = A^T.

Definition 16 A matrix A is called singular, when it is not invertible.

Definition 17 Matrices B_{N×N} and C_{N×N} are said to be similar matrices whenever there exists a nonsingular matrix Q such that B = Q^{−1}CQ.

The product P^{−1}AP is called a similarity transformation on A. If Q = P where P is a permutation matrix, then B and C are said to be permutation-similar. The set of distinct eigenvalues of A is called the spectrum of A.

Definition 18 A matrix A is called normal whenever AA^T = A^T A.

From this definition, we can observe that symmetric matrices are normal.

Definition 19 For an N × N matrix A, the scalars λ and nonzero vectors v_{N×1} that satisfy

Av = λv    (3.1)

are called eigenvalues and eigenvectors of A, respectively.

Definition 20 A matrix A is said to be diagonalizable whenever A is similar to a diagonal matrix, i.e. P^{−1}AP = D.

A is diagonalizable if and only if A possesses a complete set of eigenvectors. Furthermore, P^{−1}AP = diag(λ1, λ2, ..., λN) if and only if the columns of P form a complete set of eigenvectors, each one having an associated eigenvalue. Similar matrices have the same eigenvalues with the same multiplicities, because they have the same characteristic polynomial. However, this does not hold for the eigenvectors.

Theorem 2 (Spectral Theorem) Let A be an N × N symmetric real matrix. Then there are N pairwise orthogonal eigenvectors associated with the real eigenvalues of A.

Definition 21 The trace of an N × N matrix is defined as the sum of its diagonal elements,

trace(A) = Σ_{i=1}^{N} aii.    (3.2)

One of the properties of the trace is that

trace(A) = Σ_{i=1}^{N} λi.    (3.3)

Definition 22 A real symmetric matrix A is positive definite when

x^T A x > 0  ∀x ∈ R^n, x ≠ 0.    (3.4)

The following are equivalent definitions for positive definite matrices.

Definition 23 A is positive definite when all the eigenvalues of A are positive.

Definition 24 A is positive definite when A = B^T B for some nonsingular B.

Definition 25 For a vector x ∈ R^n and a matrix A ∈ R^{n×n}, the scalar function defined by

f(x) = x^T A x = Σ_{i=1}^{n} Σ_{j=1}^{n} aij xi xj    (3.5)

is called a quadratic form. A quadratic form is positive definite when A is positive definite.

Definition 26 A real symmetric matrix A_{n×n} is positive semidefinite when

x^T A x ≥ 0  ∀x ∈ R^n.    (3.6)

The following definitions are equivalent as well.

Definition 27 The eigenvalues of positive semidefinite matrices are nonnegative.

Definition 28 All symmetric positive semidefinite matrices are covariance matrices, i.e. A = B^T B for some B with the same rank as A.

A very important fact that will be used in our analysis is that a matrix A is real symmetric if and only if A is orthogonally similar to a real diagonal matrix D; essentially, P^T A P = D for some orthogonal P (i.e., A has an orthogonal set of eigenvectors, the columns of P, with the elements of the diagonal D being the corresponding eigenvalues).

Definition 29 A matrix A is said to be involutory, or an involution, when it satisfies A^2 = I. A matrix P that satisfies P^2 = P is called a projector or an idempotent matrix.

Definition 30 The Kronecker product of two matrices A_{M×N} and B_{P×Q} is defined as

A ⊗ B = [ a11B a12B ··· a1NB
          a21B a22B ··· a2NB
           ⋮     ⋮   ⋱    ⋮
          aM1B aM2B ··· aMNB ]    (3.7)

where A ⊗ B is an MP × NQ matrix.

Some of its properties are:

• (A + B) ⊗ C = A ⊗ C + B ⊗ C
• (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C)
• a(A ⊗ B) = (aA) ⊗ B = A ⊗ (aB), a ∈ R
• (A ⊗ B)^T = A^T ⊗ B^T
• (A ⊗ B)^{−1} = A^{−1} ⊗ B^{−1} if A and B are square and nonsingular
• (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
• A ⊗ B = (A ⊗ I)(I ⊗ B)
• ∏_{k=1}^{r} (Ak ⊗ Bk) = (∏_{k=1}^{r} Ak) ⊗ (∏_{k=1}^{r} Bk) if the Ak and Bk are square
• det(A ⊗ B) = (det(A))^n (det(B))^m, for A of size m × m and B of size n × n
• trace(A ⊗ B) = trace(A) trace(B)
• If A and B are unitary, then A ⊗ B is also unitary.

Theorem 3 If λ is an eigenvalue of a matrix A with eigenvector v, and µ an eigenvalue of a matrix B with eigenvector u, then v ⊗ u is an eigenvector of A ⊗ B corresponding to the eigenvalue λµ.
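Theorem 3 is easy to verify numerically; a small Matlab sketch with two arbitrary symmetric matrices of our own choosing:

    % Sketch: an eigenpair of A (x) B is the Kronecker product of eigenpairs (Theorem 3).
    A = [2 1; 1 2];  B = [0 1; 1 0];
    [Va, Da] = eig(A);  [Vb, Db] = eig(B);
    v = Va(:,1);  u = Vb(:,1);
    lhs = kron(A, B) * kron(v, u);
    rhs = Da(1,1) * Db(1,1) * kron(v, u);   % eigenvalue of the product is lambda*mu
    disp(norm(lhs - rhs));                  % ~ 0 up to rounding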

For A ∈ C^{N×N} and 0 ≠ b ∈ C^{N×1} we have the following.

Definition 31 {b, Ab, A^2 b, ..., A^{j−1} b} is called the Krylov sequence.

Definition 32

Kj = span{b, Ab, A^2 b, ..., A^{j−1} b}    (3.8)

is called a Krylov subspace.

Definition 33

K_{N×j} = [b, Ab, A^2 b, ..., A^{j−1} b]    (3.9)

is called a Krylov matrix.

Krylov subspaces have a variety of applications in efficient methods for computational linear algebra. [34] [35]

3.2 Discrete Transforms

The DFT

Here we are going to present some of the important definitions and properties related to the discrete Fourier transform. The role of the discrete Fourier transform is, given a signal in the time domain, to provide us with the signal's representation in the frequency domain. First we present the basic definitions.

For a positive integer N, the complex numbers {1, ω, ω^2, ..., ω^{N−1}}, where

ω = e^{2πi/N},

are the Nth roots of unity, i.e. they are the solutions to z^N = 1. The vectors

vk = [1, ω^{−k}, ω^{−2k}, ..., ω^{−(N−1)k}]^T,  k = 0, 1, ..., N − 1

are the columns of the Fourier matrix F and form an orthogonal basis.

Definition 34 The N × N matrix F with entries ξ^{jk} = ω^{−jk} for 0 ≤ j, k ≤ N − 1 is the Fourier matrix. Writing ξ = ω^{−1}, it has the form

F = [ 1  1        ···  1
      1  ξ        ···  ξ^{N−1}
      1  ξ^2      ···  ξ^{2(N−1)}
      ⋮  ⋮        ⋱    ⋮
      1  ξ^{N−1}  ···  ξ^{(N−1)(N−1)} ]_{N×N}

The inverse Fourier matrix is given by

F^{−1} = (1/N) [ 1  1        ···  1
                 1  ω        ···  ω^{N−1}
                 1  ω^2      ···  ω^{2(N−1)}
                 ⋮  ⋮        ⋱    ⋮
                 1  ω^{N−1}  ···  ω^{(N−1)(N−1)} ]_{N×N}

Definition 35 Given an N-dimensional vector x, the product Fx is called the discrete Fourier transform of x, and F^{−1}x is called the inverse transform of x. The kth entries are

[Fx]_k = Σ_{j=0}^{N−1} xj ξ^{jk}    (3.10)

and

[F^{−1}x]_k = (1/N) Σ_{j=0}^{N−1} xj ω^{jk}.    (3.11)

The DCT

The discrete cosine transform only uses cosines to define the basis vectors. First we define some necessary scaling factors for orthonormality:

γl = 1/√2 for l = 0 or N − 1,
σl = 1/√2 for l = 0, and
εl = 1/√2 for l = N − 1.

In every other case these factors are equal to 1. The following are the four standard types of DCT, described by the components C_{lk} of their respective Nth order normalized DCT matrix C:

DCT-1: C_{lk} = √(2/(N−1)) γk γl cos(lkπ/(N−1)),
DCT-2: C_{lk} = √(2/N) σk cos((l + 1/2)kπ/N),
DCT-3: C_{lk} = √(2/N) σl cos((k + 1/2)lπ/N),
DCT-4: C_{lk} = √(2/N) cos((l + 1/2)(k + 1/2)π/N),

for k, l = 0, 1, ..., N − 1.

The DST

The discrete sine transform only uses sines to define the basis vectors. The components of the Nth order normalized DST matrix for each type are

DST-1: S_{lk} = √(2/(N+1)) sin(lkπ/(N+1)) for l, k = 1, 2, ..., N,
DST-2: S_{lk} = √(2/N) εk sin((l + 1/2)(k + 1)π/N) for l, k = 0, 1, ..., N − 1,
DST-3: S_{lk} = √(2/N) εl sin((l + 1)(k + 1/2)π/N) for l, k = 0, 1, ..., N − 1,
DST-4: S_{lk} = √(2/N) sin((l + 1/2)(k + 1/2)π/N) for l, k = 0, 1, ..., N − 1.

It should be noted that the Nth order DST-1 and DCT-1 transforms are involutory. For more information on the DFT, DCT, and DST transforms see [23][26][35].

Multidimensional Fourier Transforms

The multidimensional finite Fourier transform can be written as a tensor product of 1D finite Fourier transforms. For example, for the N × M 2D Fourier transform of an N × M matrix A, F_{N×M} A = B, we have

b_{lk} = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} a_{nm} ω^{ln} w^{mk},  l = 0, 1, ..., N − 1,  k = 0, 1, ..., M − 1,

where ω = e^{−2πi/N} and w = e^{−2πi/M}, or in matrix form

B = FN AFM . (3.12)

Essentially, the 1D transform first acts on the columns of A and then, by right multiplication, on the rows. Now we consider the K-dimensional case. Suppose we have a K-dimensional array A. The lexicographic ordering on the N1 × N2 × ··· × NK array A is the ordering defined on the elements of A by having N1 as the fastest-running parameter, followed in order by N2, ..., NK. Let a be the vector in C^N, where N = N1 N2 ··· NK, determined by the lexicographic ordering on A. The N1 × N2 × ··· × NK K-dimensional Fourier transform of A

FN1...NK A = B

is defined by

b_{l1...lK} = Σ_{nK=0}^{NK−1} ··· Σ_{n1=0}^{N1−1} a_{n1...nK} w1^{l1 n1} ··· wK^{lK nK},    (3.13)

where 0 ≤ lk < Nk and wk = e^{−2πi/Nk}. This can be rewritten as a multidimensional tensor product

b = (F(NK ) ⊗ · · · ⊗ F(N1)) a. (3.14)

Consider an L × M 2D array A and the vector a ∈ C^N, N = LM, corresponding to A (by lexicographic ordering), and let ZA denote the matrix corresponding to the vector Za ∈ C^N. In general, if Z = X ⊗ Y, where X is an M × M matrix and Y is an L × L matrix, then

ZA = Y A X^T.    (3.15)
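Identity (3.15) is easy to check in Matlab, whose column-major A(:) realizes the lexicographic ordering used here; a sketch with random data:

    % Sketch: check ZA = Y*A*X^T for Z = X (x) Y acting on the vectorized array (3.15).
    L = 3; M = 4;
    A = randn(L, M);
    X = randn(M, M);  Y = randn(L, L);
    Z = kron(X, Y);
    lhs = Z * A(:);            % Z applied to the lexicographically ordered A
    rhs = Y * A * X.';         % matrix form of the same operator
    disp(norm(lhs - rhs(:)));  % ~ 0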

3.3 Circulant Matrices

A Toeplitz matrix is an N × N matrix T_N = [t_{k,j}; k, j = 0, 1, ..., N − 1] whose entries depend only on the difference of the indices, t_{k,j} = t_{k−j}, i.e. the matrix

T_N = [ t_0      t_{−1}  t_{−2}  ···  t_{−(N−1)}
        t_1      t_0     t_{−1}  ···   ⋮
        t_2      t_1     t_0     ···   ⋮
        ⋮        ⋱       ⋱       ⋱     ⋮
        t_{N−1}  ···     ···     ···  t_0 ]    (3.16)

Toeplitz matrices have interesting properties and appear in various problems and applications.[36][37] A tridiagonal Toeplitz matrix has the form [11]:

T_N = [ δ  τ  0  ···  0
        σ  δ  τ       ⋮
        0  σ  δ  ⋱
        ⋮     ⋱  ⋱    τ
        0        σ    δ ]

The eigenvalues are given by:

λ_l(T) = δ + 2√(στ) cos(lπ/(N + 1)),  l = 1, ..., N    (3.17)

and for στ ≠ 0 the eigenvector v_l = [v_{l,1}, v_{l,2}, ..., v_{l,N}]^T associated with λ_l(T) will be

v_{l,k} = (σ/τ)^{k/2} sin(lkπ/(N + 1)),  for k = 1, ..., N, l = 1, ..., N.    (3.18)

A circulant matrix is a special case of a Toeplitz matrix. It is an N × N matrix C defined completely by its first column C_{j1} = [c0, c1, c2, ..., c_{N−1}]^T, with the remaining columns being cyclic permutations of C_{j1}:

C = [ c0       c_{N−1}  ······  c1
      c1       c0       ······  c2
      c2       c1       ······  c3
      c3       c2       ······  c4
      ⋮        ⋮        ⋱       ⋮
      c_{N−1}  c_{N−2}  ······  c0 ]    (3.19)

Also, let

p(x) = c0 + c1 x + c2 x^2 + ··· + c_{N−1} x^{N−1}

be the associated polynomial of the matrix C.

We are going to show how every N × N circulant matrix is diagonalized by the corresponding N × N Fourier matrix F. First we define the cyclic permutation matrix Q:

Q = [ 0  0  ···  0  1
      1  0  ···  0  0
      0  1  ···  0  0
      ⋮     ⋱       ⋮
      0  0  ···  1  0 ]    (3.20)

A nice property is that Q^M equals the identity Q^0 with its columns cyclically shifted M times, so that Q^N = I. Now, keeping the above property in mind, we observe that (I being the identity matrix):

p(Q) = c0 I + c1 Q^1 + c2 Q^2 + ··· + c_{N−1} Q^{N−1} = C    (3.21)

Additionally, for the Nth roots of unity we have

ξ^k (1 + ξ^k + ξ^{2k} + ··· + ξ^{(N−1)k}) = ξ^k + ξ^{2k} + ξ^{3k} + ··· + ξ^{(N−1)k} + 1
⟹ (1 + ξ^k + ξ^{2k} + ··· + ξ^{(N−1)k})(1 − ξ^k) = 0
⟹ 1 + ξ^k + ξ^{2k} + ··· + ξ^{(N−1)k} = 0  when ξ^k ≠ 1.    (3.22)

Having established the above, we turn our attention to the matrix product F Q^M F^{−1}, where F is the Fourier matrix. Recall that F^{−1} is just F̄/N. The product F Q^M is an M-column left circular shift of the DFT matrix: for example, for M = 1, the first column of FQ is [1, ξ, ξ^2, ..., ξ^{N−1}]^T and its last column is the all-ones vector.

More generally, we examine the elements d_{xy} of D_M = F Q^M F^{−1}. For the diagonal elements (x = y):

D_{xx} = (1/N) Σ_{k=0}^{N−1} ξ^{(k + (M mod N))x} ω^{kx}
       = (1/N) Σ_{k=0}^{N−1} ξ^{(k + (M mod N))x} ξ^{−kx}
       = (1/N) Σ_{k=0}^{N−1} ξ^{x(k + (M mod N) − k)}
       = (1/N) Σ_{k=0}^{N−1} ξ^{x(M mod N)}
       = ξ^{x(M mod N)}.    (3.23)

Whereas, for the off-diagonal elements (x ≠ y):

D_{xy} = (1/N) Σ_{k=0}^{N−1} ξ^{(k + (M mod N))x} ξ^{−ky}
       = (ξ^{(M mod N)x}/N) Σ_{k=0}^{N−1} ξ^{k(x−y)}.

However, ξ^{(x−y)} ≠ 1, which by (3.22) implies that Σ_{k=0}^{N−1} ξ^{k(x−y)} = 0, and therefore

D_{xy} = (ξ^{(M mod N)x}/N) Σ_{k=0}^{N−1} ξ^{k(x−y)} = 0.    (3.24)

(3.23) and (3.24) imply:

F Q^M F^{−1} = diag(1, ξ^{(M mod N)}, ξ^{2(M mod N)}, ..., ξ^{(N−1)(M mod N)}) = D_M

Having shown that, we use p(Q) and form the product F p(Q) F^{−1}:

F p(Q) F^{−1} = F (c0 I + c1 Q + c2 Q^2 + ··· + c_{N−1} Q^{N−1}) F^{−1}
             = c0 F I F^{−1} + c1 F Q F^{−1} + c2 F Q^2 F^{−1} + ··· + c_{N−1} F Q^{N−1} F^{−1}
             = c0 I + c1 D1 + c2 D2 + ··· + c_{N−1} D_{N−1}
             = diag(p(1), p(ξ), p(ξ^2), ..., p(ξ^{N−1})) = F C F^{−1},    (3.25)

since 0 ≤ M ≤ N − 1 in every term, so each D_M is diagonal with entries ξ^{xM}.

This completes the proof of the useful property that the DFT matrix diagonalizes any circulant matrix C. Circulant matrices are convolution operators: the convolution operation becomes multiplication in the frequency domain, and we can draw the parallel with circulant matrices because when the DFT matrix is applied they become diagonal matrices, which are the equivalent of multiplication. For a more detailed treatment of Toeplitz and circulant matrices, as well as an alternative proof of the above property via difference equations, you can read Robert Gray's review. [7]
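The property is immediate to test numerically: building a circulant from a random first column, the similarity transform F C F^{−1} comes out diagonal, with the DFT of the first column on the diagonal. A minimal Matlab sketch (names our own):

    % Sketch: the DFT matrix diagonalizes any circulant matrix, cf. (3.25).
    N = 8;
    c = randn(N, 1);                 % first column of the circulant C
    C = zeros(N);
    for k = 0:N-1
        C(:, k+1) = circshift(c, k); % each column is a cyclic shift of the first
    end
    F = dftmtx(N);                   % (unnormalized) Fourier matrix
    D = F * C / F;                   % F*C*inv(F)
    disp(norm(D - diag(fft(c))));    % ~ 0: diagonal holds p(1), p(xi), ...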

Chapter 4

Spectral Graph Theory

4.1 Basic Properties

The eigenvalues of the adjacency matrix are often referred to as the ordinary spectrum, whereas the Laplace eigenvalues are the Laplace spectrum. The Laplacian and the adjacency matrices are both dependent on the ordering of the vertices. However, graphs with reordered vertices are isomorphic and their adjacency/Laplacian matrices are permutation-similar. In fact, two undirected graphs G1, G2 are isomorphic if and only if there exists a permutation matrix P such that P A(G1) P^{−1} = A(G2). [10] Similar matrices have the same eigenvalues, therefore the ordinary and the Laplace spectrum are invariant under relabeling. [9]

Two graphs which have the same spectrum are called isospectral. However, the spectrum does not uniquely characterize a graph: there can be non-isomorphic graphs that have the same spectrum, i.e. that are isospectral.

First, we list some properties of the adjacency and the Laplacian matrices. The adjacency matrix has zeros on the diagonal, therefore

trace(A) = Σ_{i=1}^{N} aii = Σ_{i=1}^{N} λi = 0.    (4.1)

In general we assume the following ordering of eigenvalues:

λ1 ≤ λ2 ≤ ··· ≤ λN.    (4.2)

If G is a k-regular graph, then

λN = k    (4.3)

is the greatest adjacency eigenvalue, with eigenvector [1, 1, ..., 1]^T. The Laplacian matrix produces a natural quadratic form on graphs:

x^T L x = Σ_{(i,j)∈E} (xi − xj)^2 ≥ 0.    (4.4)

With that considered, from the definition in (3.6) we can conclude that the Laplacian matrix is positive semidefinite. The normalized Laplacian is also positive semidefinite. Another fact for the Laplacian is that

L1 = 0,    (4.5)

where 1 is the all-ones vector and 0 is the all-zeros vector. This means that 0 is always an eigenvalue of the Laplacian, associated with the eigenvector 1. So for the eigenvalues µ of the Laplacian the following holds:

0 = µ1 ≤ µ2 ≤ ··· ≤ µN.    (4.6)

For the normalized Laplacian, the eigenvector corresponding to the 0 eigenvalue is D^{1/2} 1. Let G be a graph on N vertices with normalized Laplacian eigenvalues 0 = µ1 ≤ µ2 ≤ ··· ≤ µN. Then

0 ≤ µi ≤ 2    (4.7)

and

µN ≥ N/(N − 1).    (4.8)

The second smallest eigenvalue of the Laplacian, µ2, is called the algebraic connectivity of the graph. The eigenvector associated with µ2 is called the Fiedler vector, in honor of Miroslav Fiedler who first introduced the concept. [44] Exploiting its relationship with the connectivity of the graph, we can use the Fiedler vector for applications such as graph partitioning.[45] Finally, we state a theorem by Fiedler, about the Cartesian graph product that is going to be instrumental in the study of various graph spectra later on.
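As an illustration of such a partitioning (a toy example of our own: two triangles joined by a single edge), the sign pattern of the Fiedler vector recovers the two natural clusters; a minimal Matlab sketch:

    % Sketch: bisecting a small graph by the sign of its Fiedler vector.
    A = [0 1 1 0 0 0;                     % two triangles {1,2,3} and {4,5,6}
         1 0 1 0 0 0;                     % joined by the single edge (3,4)
         1 1 0 1 0 0;
         0 0 1 0 1 1;
         0 0 0 1 0 1;
         0 0 0 1 1 0];
    L = diag(sum(A,2)) - A;
    [U, M] = eig(L);
    [~, idx] = sort(diag(M));
    fiedler = U(:, idx(2));               % eigenvector of the second-smallest eigenvalue
    disp(fiedler' > 0);                   % sign pattern separates the two triangles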

Theorem 4 The Laplacian eigenvalues of G □ H are of the form λ + h, where λ and h are Laplacian eigenvalues of G and H, respectively. If v and u are the respective eigenvectors of λ and h, then

w = v ⊗ u    (4.9)

is a Laplacian eigenvector of G □ H corresponding to the eigenvalue

µ = λ + h,    (4.10)

where ⊗ denotes the Kronecker product.

This result holds for the adjacency matrix as well. [10] Note that the Kronecker product between matrices is not commutative in general; therefore A ⊗ B ≠ B ⊗ A. However, when A and B are both square matrices, A ⊗ B is permutationally similar to B ⊗ A. [12]

4.2 Graph Spectra

We are now going to review some basic graph spectra, along with some of their interesting properties. For each case, the spectrum is going to be experimentally verified and compared to the results of Matlab's eigenanalysis routine eig. It should be noted that while some of the formulas for the eigenvectors contain the necessary normalization factors, that is not the case for all of them; consequently the eigenvector matrices V, U for the adjacency and the Laplacian matrices of the graph should be normalized before forming the products V^{−1}AV and U^{−1}LU for the purposes of diagonalization.

4.2.1 Circulant Graphs

A circulant graph on N vertices, G_N^k, is a graph whose every vertex i = 0, 1, 2, ..., N − 1 is adjacent to the vertices (i + k) mod N and (i − k) mod N for each jump k. Every complete graph is also a circulant graph. Every circulant graph has a circulant adjacency matrix. Because of that, there is a set of eigenvectors that works for the adjacency matrix of every graph in the category of circulant graphs. As we have also demonstrated earlier, every circulant matrix has the same set of eigenvectors, namely [7]

vm = (1/√N) [1, e^{−2πim/N}, ..., e^{−2πim(N−1)/N}]^T,    (4.11)

which are the basis vectors of the discrete Fourier transform matrix. The eigenvalues for these circulant topologies can easily be computed by [7]

λm = Σ_{k=0}^{N−1} ck e^{−2πimk/N},  0 ≤ m ≤ N − 1,    (4.12)

where ck are the entries of the first column of the respective circulant matrix. We can express this in matrix notation as well: let c1 be the first column of the circulant matrix; then the eigenvalues of C are going to be

λ = F c1.    (4.13)

The above is easily extended to the Laplacian matrix of these graphs as well, because circulant graphs are regular: every node has the same number of jumps (degree). Therefore the Laplacian matrix will also be circulant and have the aforementioned set of properties.

In general, for a circulant graph on N vertices with jumps k = 1, 2, ..., K, the eigenvalues of the adjacency matrix, using (4.12), can be computed by

λm = Σ_{k=1}^{K} (e^{−2πim(N−k)/N} + e^{−2πimk/N}),    (4.14)

except for the case when N is even and K = N/2, where the formula is

λm = Σ_{k=1}^{N−1} e^{−2πimk/N},    (4.15)

and for the Laplacian (recall that circulant graphs are regular)

µm = d − Σ_{k=1}^{K} (e^{−2πim(N−k)/N} + e^{−2πimk/N}),    (4.16)

and for N even and K = N/2,

µm = d − Σ_{k=1}^{N−1} e^{−2πimk/N},    (4.17)

where d is the degree of the graph.
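Formulas (4.14)-(4.17) amount to taking the FFT of the first column of the circulant; a hedged Matlab sketch for the C8^{1,2} graph used later in this section:

    % Sketch: spectra of a circulant graph from the FFT of its first column, cf. (4.13).
    N = 8;  jumps = [1 2];                      % e.g. the C8^{1,2} graph below
    c = zeros(N, 1);
    c(1 + jumps) = 1;  c(1 + N - jumps) = 1;    % first column of the circulant A
    A = toeplitz(c, [c(1); flipud(c(2:end))]);  % circulant built from its first column
    lambda = fft(c);                            % adjacency eigenvalues, (4.12)
    mu = sum(c) - lambda;                       % Laplacian eigenvalues, (4.16)
    disp(norm(sort(real(lambda)) - sort(eig(A))));  % ~ 0 (spectrum is real here)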

The Ring Graph

Now we are going to take a look at a special case of a circulant graph, the ring graph. The adjacency matrix of a ring graph is circulant. In the case of the ring adjacency matrix A, the only nonzero entries in the first column are those for k = 1 and k = N − 1, with c1 = c_{N−1} = 1. So (4.12) reduces to

λm = e^{−2πim/N} + e^{−2πim(N−1)/N}.    (4.18)

Each eigenvalue corresponds to a different DFT eigenvector. Since the DFT eigenvectors form an orthogonal basis, in the case of symmetric circulants this implies that every eigenvalue λi has an eigenspace with µA(λi) DFT eigenvectors forming a basis for said eigenspace, where µA(λi) is the algebraic multiplicity of each eigenvalue.

The Laplacian eigenvectors

U = [x0, ..., x_{⌊N/2⌋}, y0, ..., y_{⌊N/2⌋}],  x, y ∈ R^N

of a ring graph RN with edge set {(u, u + 1) : 0 < u < N} ∪ {(1, N)} are

xk(u) = cos(2πk(u − 1)/N)    (4.19)
yk(u) = sin(2πk(u − 1)/N)    (4.20)

for 0 ≤ k ≤ ⌊N/2⌋. The y0 is the all-zero vector, so it can be ignored, and x0 is the all-ones vector. When N is even, y_{N/2} is also an all-zero vector that is dropped, so we only have x_{N/2}. The corresponding eigenvalue for the eigenvectors xk, yk is

µk = 2 − 2 cos(2πk/N).    (4.21)

It can be verified that these are indeed the invariant directions by plotting the pairs (xk(u), yk(u)). Alternatively, we can find the eigenvalues and eigenvectors

Figure 4.1: The R8 ring graph. The numbering here begins from 1.

for the Laplacian matrix using the following method. Let v be an eigenvector of the adjacency matrix and λ its eigenvalue. Then

Av = λv
(D − L)v = λv
Lv = Dv − λv.

In a k-regular graph every vertex has k adjacent vertices, so the degree matrix will be D = kI. Therefore:

Lv = kIv − λv
Lv = (k − λ)v.    (4.22)

In the case of the ring k = 2, so the Laplacian will have the same eigenvectors as the adjacency matrix, but with eigenvalues µ = 2 − λ, i.e.

µm = 2 − (e^{−2πim/N} + e^{−2πim(N−1)/N}).    (4.23)

Experimental Verification

The ring graph has the adjacency matrix

A = [ 0 1 0 0 0 0 0 1
      1 0 1 0 0 0 0 0
      0 1 0 1 0 0 0 0
      0 0 1 0 1 0 0 0
      0 0 0 1 0 1 0 0
      0 0 0 0 1 0 1 0
      0 0 0 0 0 1 0 1
      1 0 0 0 0 0 1 0 ].

We can work out the numbers in the case of the R8 graph. The normalized DFT matrix F = [f1, f2, ..., f8], which can be obtained in Matlab with the command dftmtx(8)/sqrt(8), is:

F = [ 0.3536  0.3536       0.3536    0.3536       0.3536   0.3536       0.3536    0.3536
      0.3536  0.25−0.25i  −0.3536i  −0.25−0.25i  −0.3536  −0.25+0.25i   0.3536i   0.25+0.25i
      0.3536 −0.3536i     −0.3536    0.3536i      0.3536  −0.3536i     −0.3536    0.3536i
      0.3536 −0.25−0.25i   0.3536i   0.25−0.25i  −0.3536   0.25+0.25i  −0.3536i  −0.25+0.25i
      0.3536 −0.3536       0.3536   −0.3536       0.3536  −0.3536       0.3536   −0.3536
      0.3536 −0.25+0.25i  −0.3536i   0.25+0.25i  −0.3536   0.25−0.25i   0.3536i  −0.25−0.25i
      0.3536  0.3536i     −0.3536   −0.3536i      0.3536   0.3536i     −0.3536   −0.3536i
      0.3536  0.25+0.25i   0.3536i  −0.25+0.25i  −0.3536  −0.25−0.25i  −0.3536i   0.25−0.25i ]

while the Matlab adjacency matrix eigenvectors from the software's own eigenanalysis routine, Ṽ8 = [ṽ1, ṽ2, ..., ṽ8], for the ring are:

Ṽ = [ −0.3536  0.1072   0.4884   0.08455  −0.4928   0.3536      0.3536      0.3536
       0.3536  0.2695  −0.4211  −0.4928   −0.08455 −1.046e−16   0.5         0.3536
      −0.3536 −0.4884   0.1072  −0.08455   0.4928  −0.3536      0.3536      0.3536
       0.3536  0.4211   0.2695   0.4928    0.08455 −0.5        −6.104e−17   0.3536
      −0.3536 −0.1072  −0.4884   0.08455  −0.4928  −0.3536     −0.3536      0.3536
       0.3536 −0.2695   0.4211  −0.4928   −0.08455  6.446e−18  −0.5         0.3536
      −0.3536  0.4884  −0.1072  −0.08455   0.4928   0.3536     −0.3536      0.3536
       0.3536 −0.4211  −0.2695   0.4928    0.08455  0.5         7.492e−17   0.3536 ]

 −2.0  −1.4142   −1.4142    0  The eigenvalues for these sets of eigenvectors are λ˜ =   from  0     1.4142     1.4142  2.0  2.0   1.4142     0    −1.4142 Matlab’s decomposition and λ =   from the DFT diagonalization. 1  −2.0    −1.4142    0  1.4142 Now let’s consider the eigenvectors for each of the eigenvalues in both the DFT and the Matlab cases. ˜ 0 • λ1 = λ5 = −2.0. Eigenvectors:

˜v1 = −f5

1It is easily verified that (4.12) yields the same eigenvalues.

24 ˜ 0 • λ8 = λ1 = 2.0. Eigenvectors:

˜v8 = f1

• λ̃2 = λ̃3 = λ′4 = λ′6 = −1.4142. Eigenvectors:

ṽ2 = (0.1516 + 0.6907i)f4 + (0.1516 − 0.6907i)f6
ṽ3 = (0.6907 − 0.1516i)f4 + (0.6907 + 0.1516i)f6

• λ̃4 = λ̃5 = λ′3 = λ′7 = 0. Eigenvectors:

ṽ4 = (0.1196 − 0.6969i)f3 + (0.1196 + 0.6969i)f7
ṽ5 = −(0.6969 + 0.1196i)f3 + (−0.6969 + 0.1196i)f7

• λ̃6 = λ̃7 = λ′2 = λ′8 = 1.4142. Eigenvectors:

ṽ6 = (0.5000 − 0.5000i)f2 + (0.5000 + 0.5000i)f8
ṽ7 = (0.5000 + 0.5000i)f2 + (0.5000 − 0.5000i)f8

A more compact way of representing those linear combinations is the use of matrix notation. To get the column vectors of a matrix Y via linear combinations of the column vectors of a matrix X (assuming these combinations exist), we right-multiply X with a coefficient matrix T_{XY}:

Y = X T_{XY}.    (4.24)

We can get to Ṽ through F by right-multiplication:

Ṽ = F T_{FṼ}.
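The coefficient matrices below need not be read off by hand; since F is invertible, T_{FṼ} can be recovered by solving F T = Ṽ. A sketch (the exact numbers depend on eig's ordering and sign conventions):

    % Sketch: recovering the coefficient matrix T of (4.24) for the R8 ring.
    N = 8;
    A = circshift(eye(N), 1) + circshift(eye(N), -1);  % ring adjacency
    [Vt, ~] = eig(A);                                  % Matlab's (real) eigenvectors
    F = dftmtx(N) / sqrt(N);                           % normalized DFT basis
    T = F \ Vt;                                        % solves F*T = Vt
    disp(norm(F*T - Vt));                              % ~ 0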

The matrix T_{FṼ} for this one is:

 0 0 0 0 0 0 0 1.0  0 0 0 0 0 0.5 − 0.5i 0.5 + 0.5i 0     0 0 0 0.1196 − 0.6969i −0.6969 − 0.1196i 0 0 0     0 0.1516 + 0.6907i 0.6907 − 0.1516i 0 0 0 0 0  T ˜ =   FV −1.0 0 0 0 0 0 0 0     0 0.1516 − 0.6907i 0.6907 + 0.1516i 0 0 0 0 0     0 0 0 0.1196 + 0.6969i −0.6969 + 0.1196i 0 0 0  0 0 0 0 0 0.5 + 0.5i 0.5 − 0.5i 0

We follow the same procedure for the graph Laplacian.

 2.0 −1.0 0 0 0 0 0 −1.0 −1.0 2.0 −1.0 0 0 0 0 0     0 −1.0 2.0 −1.0 0 0 0 0     0 0 −1.0 2.0 −1.0 0 0 0  L =   .  0 0 0 −1.0 2.0 −1.0 0 0     0 0 0 0 −1.0 2.0 −1.0 0     0 0 0 0 0 −1.0 2.0 −1.0 −1.0 0 0 0 0 0 −1.0 2.0

The eigenvector matrix of the Laplacian matrix is calculated:

0.3536 0.2478 −0.4343 −0.276 0.4169 0.3211 0.3833 −0.3536 0.3536 0.4823 −0.1319 0.4169 0.276 −0.4981 −0.04401 0.3536    0.3536 0.4343 0.2478 0.276 −0.4169 0.3833 −0.3211 −0.3536   0.3536 0.1319 0.4823 −0.4169 −0.276 −0.04401 0.4981 0.3536  U˜ =   0.3536 −0.2478 0.4343 −0.276 0.4169 −0.3211 −0.3833 −0.3536   0.3536 −0.4823 0.1319 0.4169 0.276 0.4981 0.04401 0.3536    0.3536 −0.4343 −0.2478 0.276 −0.4169 −0.3833 0.3211 −0.3536 0.3536 −0.1319 −0.4823 −0.4169 −0.276 0.04401 −0.4981 0.3536

 0  0.5858   0.5858    2.0  with eigenvalues: µ˜ =    2.0     3.414     3.414  4.0 and its respective linear combination matrix is

TFU˜ = 1.0 0 0 0 0 0 0 0   0 0.3504 + 0.6142i −0.6142 + 0.3504i 0 0 0 0 0     0 0 0 −0.3903 + 0.5896i 0.5896 + 0.3903i 0 0 0     0 0 0 0 0 0.4541 − 0.5421i 0.5421 + 0.4541i 0     0 0 0 0 0 0 0 −1.0    0 0 0 0 0 0.4541 + 0.5421i 0.5421 − 0.4541i 0     0 0 0 −0.3903 − 0.5896i 0.5896 − 0.3903i 0 0 0  0 0.3504 − 0.6142i −0.6142 − 0.3504i 0 0 0 0 0

such that Ũ = F T_{FŨ} holds.

We will use two more examples of circulant graphs. First, the C8^{1,2} graph:

Figure 4.2: The C8^{1,2} circulant graph

 0 1.0 1.0 0 0 0 1.0 1.0 1.0 0 1.0 1.0 0 0 0 1.0   1.0 1.0 0 1.0 1.0 0 0 0     0 1.0 1.0 0 1.0 1.0 0 0  A =    0 0 1.0 1.0 0 1.0 1.0 0     0 0 0 1.0 1.0 0 1.0 1.0   1.0 0 0 0 1.0 1.0 0 1.0 1.0 1.0 0 0 0 1.0 1.0 0

For the C8^{1,2} circulant graph, the nonzero ck elements are c1 = c2 = c_{N−1} = c_{N−2} = 1, and therefore the eigenvalues are

λm = e^{−2πim(1)/8} + e^{−2πim(7)/8} + e^{−2πim(2)/8} + e^{−2πim(6)/8}.    (4.25)

The eigenvectors of the adjacency matrix are computed:

 0.5 0.0002 −0.3487 −0.3584 −0.3536 −0.3098 0.3924 −0.3536  0.0002 −0.5 −0.0069 0.5 0.3536 −0.4966 0.0584 −0.3536    −0.5 −0.0002 0.3584 −0.3487 −0.3536 −0.3924 −0.3098 −0.3536   −0.0002 0.5 −0.5 −0.0069 0.3536 −0.0584 −0.4966 −0.3536 V˜ =    0.5 0.0002 0.3487 0.3584 −0.3536 0.3098 −0.3924 −0.3536    0.0002 −0.5 0.0069 −0.5 0.3536 0.4966 −0.0584 −0.3536    −0.5 −0.0002 −0.3584 0.3487 −0.3536 0.3924 0.3098 −0.3536 −0.0002 0.5 0.5 0.0069 0.3536 0.0584 0.4966 −0.3536

with eigenvalues λ̃ = [−2.0, −2.0, −1.414, −1.414, 0, 1.414, 1.414, 4.0]^T. The linear combination matrix is

TFV˜ =  0 0 0 0 0 0 0 −1.0  0 0 0 0 0 −0.4382 − 0.555i 0.555 − 0.4382i 0    0.7071 + 0.0003i 0.0003 − 0.7071i 0 0 0 0 0 0     0 0 −0.4931 − 0.5068i −0.5068 + 0.4931i 0 0 0 0     0 0 0 0 −1.0 0 0 0     0 0 −0.4931 + 0.5068i −0.5068 − 0.4931i 0 0 0 0    0.7071 − 0.0003i 0.0003 + 0.7071i 0 0 0 0 0 0  0 0 0 0 0 −0.4382 + 0.555i 0.555 + 0.4382i 0

such that Ṽ = F T_{FṼ} holds. Same for the Laplacian. For the eigenvalues we have

µm = 4 − (e^{−2πim(1)/8} + e^{−2πim(7)/8} + e^{−2πim(2)/8} + e^{−2πim(6)/8}).    (4.26)

 4.0 −1.0 −1.0 0 0 0 −1.0 −1.0 −1.0 4.0 −1.0 −1.0 0 0 0 −1.0   −1.0 −1.0 4.0 −1.0 −1.0 0 0 0     0 −1.0 −1.0 4.0 −1.0 −1.0 0 0  L =    0 0 −1.0 −1.0 4.0 −1.0 −1.0 0     0 0 0 −1.0 −1.0 4.0 −1.0 −1.0   −1.0 0 0 0 −1.0 −1.0 4.0 −1.0 −1.0 −1.0 0 0 0 −1.0 −1.0 4.0

The eigenvectors of the Laplacian matrix are:

0.3536 0.4587 0.199 0.3536 −0.3673 0.3393 −0.3816 0.3231  0.3536 0.1837 0.465 −0.3536 0.0198 −0.4996 −0.3231 −0.3816   0.3536 −0.199 0.4587 0.3536 0.3393 0.3673 0.3816 −0.3231   0.3536 −0.465 0.1837 −0.3536 −0.4996 −0.0198 0.3231 0.3816  U˜ =   0.3536 −0.4587 −0.199 0.3536 0.3673 −0.3393 −0.3816 0.3231    0.3536 −0.1837 −0.465 −0.3536 −0.0198 0.4996 −0.3231 −0.3816   0.3536 0.199 −0.4587 0.3536 −0.3393 −0.3673 0.3816 −0.3231 0.3536 0.465 −0.1837 −0.3536 0.4996 0.0198 0.3231 0.3816

Figure 4.3: The C8^{1,2,3,4} circulant graph

 0  2.586   2.586    4.0  with eigenvalues µ˜ =  . 5.414   5.414    6.0  6.0 Again, using (4.24), we can get the eigenvector matrix U˜ of the Laplacian matrix from F with the linear combination matrix

TFU˜ = 1.0 0 0 0 0 0 0 0   0 0.6487 − 0.2814i 0.2814 + 0.6487i 0 0 0 0 0     0 0 0 0 0 0 −0.5396 − 0.457i 0.457 − 0.5396i     0 0 0 0 −0.5194 − 0.4798i 0.4798 − 0.5194i 0 0     0 0 0 1.0 0 0 0 0     0 0 0 0 −0.5194 + 0.4798i 0.4798 + 0.5194i 0 0     0 0 0 0 0 0 −0.5396 + 0.457i 0.457 + 0.5396i  0 0.6487 + 0.2814i 0.2814 − 0.6487i 0 0 0 0 0

such that Ũ = F T_{FŨ} holds.

Next is the circulant graph C8^{1,2,3,4}. Since every vertex is joined to all seven others, this is the complete graph K8. Its adjacency eigenvalues are

λm = e^{−2πim(1)/8} + e^{−2πim(2)/8} + ··· + e^{−2πim(7)/8} = Σ_{l=1}^{7} e^{−2πiml/8}.    (4.27)

 0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0 1.0 1.0 1.0 1.0 1.0 1.0   1.0 1.0 0 1.0 1.0 1.0 1.0 1.0   1.0 1.0 1.0 0 1.0 1.0 1.0 1.0 A =   1.0 1.0 1.0 1.0 0 1.0 1.0 1.0   1.0 1.0 1.0 1.0 1.0 0 1.0 1.0   1.0 1.0 1.0 1.0 1.0 1.0 0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0

The eigenvectors for the adjacency matrix, Ṽ, are:

 0.044727 −0.19557 −0.12271 0.70711 0.48471 −0.22524 −0.18444 0.35355  0.044727 −0.19557 −0.12271 −0.70711 0.48471 −0.22524 −0.18444 0.35355    0.044727 −0.19557 −0.56277 −8.7561 · 10−17 −0.65825 −0.22524 −0.18444 0.35355    0.044727 −0.19557 0.80819 0 −0.31117 −0.22524 −0.18444 0.35355 V˜ =    0.55156 0.74302 9.0827 · 10−17 0 0 0.097718 −0.095708 0.35355   −0.018675 −0.011572 1.0491 · 10−17 0 0 −0.18872 0.91592 0.35355    −0.82179 0.4085 3.8246 · 10−17 0 0 0.13548 −0.12016 0.35355 0.11 −0.35767 0 0 0 0.85648 0.037719 0.35355

−1.0 −1.0   −1.0   −1.0 with respective eigenvalues: λ˜ =   −1.0   −1.0   −1.0 7.0

Then we have the transition matrix T_{FṼ} =

 0 0 0 0 0 0 0 1.0 −0.147 + 0.306 i −0.418 − 0.219 i −0.276 − 0.0276 i 0.0732 − 0.177 i 0.37 − 0.189 i 0.147 − 0.407 i −0.251 − 0.353 i 0     0.486 − 0.0455 i 0.118 + 0.122 i 0.156 − 0.329 i 0.25 − 0.25 i 0.404 + 0.281 i −0.0133 − 0.369 i 0.0086 + 0.31 i 0    −0.211 − 0.307 i −0.245 + 0.208 i 0.189 + 0.37 i 0.427 − 0.177 i −0.0276 + 0.276 i −0.376 − 0.152 i 0.188 − 0.308 i 0     −0.128 0.538 −0.485 0.5 −0.123 −0.154 −0.413 0    −0.211 + 0.307 i −0.245 − 0.208 i 0.189 − 0.37 i 0.427 + 0.177 i −0.0276 − 0.276 i −0.376 + 0.152 i 0.188 + 0.308 i 0     0.486 + 0.0455 i 0.118 − 0.122 i 0.156 + 0.329 i 0.25 + 0.25 i 0.404 − 0.281 i −0.0133 + 0.369 i 0.0086 − 0.31 i 0  −0.147 − 0.306 i −0.418 + 0.219 i −0.276 + 0.0276 i 0.0732 + 0.177 i 0.37 + 0.189 i 0.147 + 0.407 i −0.251 + 0.353 i 0

so that Ṽ = F T_{FṼ} holds. Same process for the Laplacian. Its eigenvalues will be

µm = 7 − Σ_{l=1}^{7} e^{−2πiml/8}.    (4.28)

 7.0 −1.0 −1.0 −1.0 −1.0 −1.0 −1.0 −1.0 −1.0 7.0 −1.0 −1.0 −1.0 −1.0 −1.0 −1.0   −1.0 −1.0 7.0 −1.0 −1.0 −1.0 −1.0 −1.0   −1.0 −1.0 −1.0 7.0 −1.0 −1.0 −1.0 −1.0 L =   −1.0 −1.0 −1.0 −1.0 7.0 −1.0 −1.0 −1.0   −1.0 −1.0 −1.0 −1.0 −1.0 7.0 −1.0 −1.0   −1.0 −1.0 −1.0 −1.0 −1.0 −1.0 7.0 −1.0 −1.0 −1.0 −1.0 −1.0 −1.0 −1.0 −1.0 7.0

−0.3536 0.0117 −0.2809 0.8041 −0.1786 0.3064 −0.1185 0.0981  −0.3536 0.0117 −0.2809 −0.5653 −0.1786 0.5562 −0.1185 0.3479    −0.3536 0.0117 −0.2809 −0.1714 −0.1786 −0.0962 −0.1185 −0.8434   −0.3536 0.0117 −0.2809 −0.0673 −0.1786 −0.7665 −0.1185 0.3975  U˜ =   −0.3536 −0.364 0.6918 0 −0.075 0 −0.5082 0    −0.3536 −0.4805 0.1374 0 −0.0704 0 0.7876 0    −0.3536 0.7975 0.4151 0 −0.0661 0 0.2496 0  −0.3536 0 −0.1206 0 0.926 0 −0.0549 0

31 1.0 9.0   9.0   9.0 with corresponding eigenvalues µ˜ =   and we have the matrix T ˜ = 9.0 FU   9.0   9.0 9.0

−1.0 0 0 0 0 0 0 0   0 −0.246 + 0.187 i −0.391 + 0.0423 i −0.395 − 0.191 i 0.202 − 0.3 i 0.0732 − 0.177 i 0.196 − 0.235 i −0.214 − 0.5 i     0 −0.433 − 0.144 i −0.408 i 0.316 −0.308 − 0.286 i 0.25 − 0.25 i 0.0765 − 0.278 i −0.222 + 0.302 i    0 −0.0423 − 0.391 i 0.187 + 0.246 i −0.395 − 0.0327 i −0.202 + 0.316 i 0.427 − 0.177 i −0.196 − 0.388 i 0.214 − 0.0572 i     0 0.289 −0.408 0.316 0.32 0.5 −0.518 −0.173     0 −0.0423 + 0.391 i 0.187 − 0.246 i −0.395 + 0.0327 i −0.202 − 0.316 i 0.427 + 0.177 i −0.196 + 0.388 i 0.214 + 0.0572 i     0 −0.433 + 0.144 i 0.408 i 0.316 −0.308 + 0.286 i 0.25 + 0.25 i 0.0765 + 0.278 i −0.222 − 0.302 i 0 −0.246 − 0.187 i −0.391 − 0.0423 i −0.395 + 0.191 i 0.202 + 0.3 i 0.0732 + 0.177 i 0.196 + 0.235 i −0.214 + 0.5 i

such that Ũ = F T_{FŨ} holds.

4.2.2 The Path Graph

The path graph PN, where V = {1, ..., N} and E = {(v, v + 1) : 1 ≤ v ≤ N − 1}, has a tridiagonal Toeplitz adjacency matrix; therefore, substituting δ = 0, τ = 1, σ = 1 in (3.17) and (3.18), the eigenvalues will be

λl = 2 cos(lπ/(N + 1))    (4.29)

and the eigenvectors

v_{l,k} = sin(lkπ/(N + 1)),  k = 1, ..., N, l = 1, ..., N.    (4.30)

Notice that (4.30) is the Discrete Sine Transform of type I. The eigenvectors can be normalized by multiplying each one with a factor √(2/(N + 1)). One interesting property of the DST-I matrix is that it is involutory. So for the path graph adjacency matrix we have

A = V Λ V^T = V Λ V.

For the Laplacian spectrum we have the following. The path has the same Laplacian eigenvalues as R2N ,

µk = 2(1 − cos(πk/N)) (4.31)

and Laplacian eigenvectors:

Figure 4.4: The P8 path graph

uk(v) = √(2/N) cos((2πkv − πk)/(2N))  for 1 ≤ k < N,
u0(v) = √(1/N).    (4.32)

These equations describe the basis vectors of the Discrete Cosine Transform; this is the formula for the DCT of type II. [22] In fact, forming the matrix product U^T L U corresponds to the 2D DCT [24]. That is because the transform is separable, which means that it can be applied separately to the rows and columns of the matrix. So by applying the 2D DCT to the Laplacian of the path graph, we directly diagonalize it.
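Before the experimental run, the claim can be checked in a few lines of Matlab; a sketch for P8 (variable names our own):

    % Sketch: the DCT-II basis diagonalizes the path Laplacian, cf. (4.31)-(4.32).
    N = 8;
    A = diag(ones(N-1,1), 1) + diag(ones(N-1,1), -1);  % P8 path adjacency
    L = diag(sum(A,2)) - A;                            % Laplacian L = D - A
    U = dctmtx(N)';                                    % DCT-II vectors as columns
    mu = diag(U' * L * U);                             % recovered eigenvalues
    disp(norm(sort(mu) - 2*(1 - cos(pi*(0:N-1)'/N)))); % ~ 0, matches (4.31)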

Experimental Verification

For the eigenstructure of the path's adjacency matrix, which can be seen below, a potential approach is to notice that the adjacency matrix is tridiagonal Toeplitz.

0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0   0 1 0 1 0 0 0 0   0 0 1 0 1 0 0 0 A =   0 0 0 1 0 1 0 0   0 0 0 0 1 0 1 0   0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0

We compute the adjacency matrix eigenvectors using Matlab’s routine:

−0.1612 0.303 0.4082 0.4642 0.4642 0.4082 −0.303 0.1612  0.303 −0.4642 −0.4082 −0.1612 0.1612 0.4082 −0.4642 0.303    −0.4082 0.4082 0 −0.4082 −0.4082 0 −0.4082 0.4082    0.4642 −0.1612 0.4082 0.303 −0.303 −0.4082 −0.1612 0.4642 V˜ =   −0.4642 −0.1612 −0.4082 0.303 0.303 −0.4082 0.1612 0.4642    0.4082 0.4082 0 −0.4082 0.4082 0 0.4082 0.4082    −0.303 −0.4642 0.4082 −0.1612 −0.1612 0.4082 0.4642 0.303  0.1612 0.303 −0.4082 0.4642 −0.4642 0.4082 0.303 0.1612

with eigenvalues λ̃ = [−1.879, −1.532, −1.0, −0.3473, 0.3473, 1.0, 1.532, 1.879]^T. The eigenvectors of the adjacency matrix, computed with the theoretical formula (4.30), that produce the same eigenvalues are:

0.1612 0.303 0.4082 0.4642 0.4642 0.4082 0.303 0.1612   0.303 0.4642 0.4082 0.1612 −0.1612 −0.4082 −0.4642 −0.303    0.4082 0.4082 0 −0.4082 −0.4082 0 0.4082 0.4082    0.4642 0.1612 −0.4082 −0.303 0.303 0.4082 −0.1612 −0.4642 V =   0.4642 −0.1612 −0.4082 0.303 0.303 −0.4082 −0.1612 0.4642    0.4082 −0.4082 0 0.4082 −0.4082 0 0.4082 −0.4082    0.303 −0.4642 0.4082 −0.1612 −0.1612 0.4082 −0.4642 0.303  0.1612 −0.303 0.4082 −0.4642 0.4642 −0.4082 0.303 −0.1612

We can get to V˜ with the linear combination matrix

 0 0 0 0 0 0 0 1.0  0 0 0 0 0 0 −1.0 0     0 0 0 0 0 1.0 0 0     0 0 0 0 1.0 0 0 0  T ˜ =   VV  0 0 0 1.0 0 0 0 0     0 0 1.0 0 0 0 0 0     0 1.0 0 0 0 0 0 0  −1.0 0 0 0 0 0 0 0

such that Ṽ = V T_{VṼ} holds.

We can directly compute the eigenvectors using Matlab's dctmtx(8)' command (the transpose is necessary). Currently, this command computes the DCT type II matrix by default. First we compute the eigenvectors using Matlab's eigenanalysis routine.

 1.0 −1.0 0 0 0 0 0 0  −1.0 2.0 −1.0 0 0 0 0 0     0 −1.0 2.0 −1.0 0 0 0 0     0 0 −1.0 2.0 −1.0 0 0 0  L =    0 0 0 −1.0 2.0 −1.0 0 0     0 0 0 0 −1.0 2.0 −1.0 0     0 0 0 0 0 −1.0 2.0 −1.0 0 0 0 0 0 0 −1.0 1.0

34 0.3536 −0.4904 −0.4619 −0.4157 −0.3536 −0.2778 0.1913 −0.0975 0.3536 −0.4157 −0.1913 0.0975 0.3536 0.4904 −0.4619 0.2778    0.3536 −0.2778 0.1913 0.4904 0.3536 −0.0975 0.4619 −0.4157   0.3536 −0.0975 0.4619 0.2778 −0.3536 −0.4157 −0.1913 0.4904  U˜ =   0.3536 0.0975 0.4619 −0.2778 −0.3536 0.4157 −0.1913 −0.4904   0.3536 0.2778 0.1913 −0.4904 0.3536 0.0975 0.4619 0.4157    0.3536 0.4157 −0.1913 −0.0975 0.3536 −0.4904 −0.4619 −0.2778 0.3536 0.4904 −0.4619 0.4157 −0.3536 0.2778 0.1913 0.0975

 0  0.1522   0.5858    1.235  with eigenvalues µ˜ =  . A simpler way –in terms of code– to com-  2.0     2.765     3.414  3.848 pute the eigenvalues would be to feed the Laplacian into the 2D DCT routine of Matlab using dct2(L), where L is the Laplacian. The formula for the theoretical eigenvectors (4.32), that produces the same eigenvalues gives:

0.3536 0.4904 0.4619 0.4157 0.3536 0.2778 0.1913 0.0975  0.3536 0.4157 0.1913 −0.0975 −0.3536 −0.4904 −0.4619 −0.2778   0.3536 0.2778 −0.1913 −0.4904 −0.3536 0.0975 0.4619 0.4157    0.3536 0.0975 −0.4619 −0.2778 0.3536 0.4157 −0.1913 −0.4904 U =   0.3536 −0.0975 −0.4619 0.2778 0.3536 −0.4157 −0.1913 0.4904    0.3536 −0.2778 −0.1913 0.4904 −0.3536 −0.0975 0.4619 −0.4157   0.3536 −0.4157 0.1913 0.0975 −0.3536 0.4904 −0.4619 0.2778  0.3536 −0.4904 0.4619 −0.4157 0.3536 −0.2778 0.1913 −0.0975

We can get to Ũ with the linear combination matrix

1.0 0 0 0 0 0 0 0   0 −1.0 0 0 0 0 0 0     0 0 −1.0 0 0 0 0 0     0 0 0 −1.0 0 0 0 0  T ˜ =   UU  0 0 0 0 −1.0 0 0 0     0 0 0 0 0 −1.0 0 0     0 0 0 0 0 0 1.0 0  0 0 0 0 0 0 0 −1.0

so that we have Ũ = U T_{UŨ}.

Figure 4.5: The K2 complete graph

4.2.3 The Hypercube Graph

The hypercube Qd, d ≥ 1, is the graph whose vertex set is the set of all d-tuples of 0s and 1s, where two tuples are adjacent if they differ in precisely one coordinate. The hypercube is a regular graph, i.e. every vertex has the same degree. The Qd hypercube can be expressed as the Cartesian graph product of d K2 graphs. Making use of Theorem 4 and (4.9), we can conclude that the eigenvectors (both adjacency and Laplacian, since it is a regular graph of degree d) are

V_{Qd} = V_{K2} ⊗ V_{K2} ⊗ ··· ⊗ V_{K2}  (d factors)  = U_{Qd}.    (4.33)

However, V_{K2} are the DFT eigenvectors, because A_{K2} is circulant. So this gives us

V_{Qd} = F2 ⊗ F2 ⊗ ··· ⊗ F2  (d factors),    (4.34)

where

F2 = [ 0.7071  0.7071
       0.7071 −0.7071 ],    (4.35)

which is a basis matrix for the d-dimensional Discrete Fourier Transform. λ_{K2} = [−1, 1]^T are the eigenvalues of the K2 graph. According to Theorem 4, the adjacency eigenvalues of the cube will be

λk = λ_{iK2} + λ_{jK2} + ··· + λ_{nK2},  i, j, ..., n = 1, 2,    (4.36)

where λ_{xK2} are the eigenvalues corresponding to each eigenvector of the different K2 graphs. For the Laplacian eigenvalues, since the cube is a regular graph, we have

µk = d − (λ_{iK2} + λ_{jK2} + ··· + λ_{nK2}),  i, j, ..., n = 1, 2.    (4.37)
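A quick numerical confirmation of (4.33)-(4.37) for Q3, with the cube assembled from its K2 factors (a sketch; names our own):

    % Sketch: the Q3 cube assembled from three K2 factors and diagonalized by (4.34).
    A2 = [0 1; 1 0];  I2 = eye(2);
    A  = kron(kron(A2,I2),I2) + kron(kron(I2,A2),I2) + kron(kron(I2,I2),A2);
    F2 = [1 1; 1 -1] / sqrt(2);           % 2-point DFT basis
    V  = kron(kron(F2,F2),F2);            % shared eigenvector basis, (4.34)
    D  = V' * A * V;                      % V is orthogonal, so V' = inv(V)
    disp(norm(D - diag(diag(D))));        % ~ 0: V diagonalizes the cube
    disp(sort(diag(D))');                 % adjacency eigenvalues in {-3,-1,1,3}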

Experimental Verification

We now test the aforementioned properties on the Q3 cube.

Figure 4.6: The Q3 hypercube graph

 0 1.0 1.0 0 1.0 0 0 0  1.0 0 0 1.0 0 1.0 0 0    1.0 0 0 1.0 0 0 1.0 0     0 1.0 1.0 0 0 0 0 1.0 A =   1.0 0 0 0 0 1.0 1.0 0     0 1.0 0 0 1.0 0 0 1.0    0 0 1.0 0 1.0 0 0 1.0 0 0 0 1.0 0 1.0 1.0 0

The eigenvectors of the adjacency matrix of the cube are numerically calculated:

Ṽ = [ −0.3536  0.3525 −0.5006 −0.0111 −0.342  −0.3818  0.335  −0.3536
       0.3536 −0.5468 −0.1299 −0.2432 −0.0984 −0.5158 −0.3151 −0.3536
       0.3536  0.2673  0.4451 −0.3247  0.2927 −0.1321  0.5214 −0.3536
      −0.3536 −0.073   0.1855  0.579   0.5363 −0.266  −0.1288 −0.3536
       0.3536 −0.073   0.1855  0.579  −0.5363  0.266   0.1288 −0.3536
      −0.3536  0.2673  0.4451 −0.3247 −0.2927  0.1321 −0.5214 −0.3536
      −0.3536 −0.5468 −0.1299 −0.2432  0.0984  0.5158  0.3151 −0.3536
       0.3536  0.3525 −0.5006 −0.0111  0.342   0.3818 −0.335  −0.3536 ]

−3.0 −1.0   −1.0   −1.0 with eigenvalues λ˜ =   .  1.0     1.0     1.0  3.0 0 1 A = K2 1 0

The eigenvectors of the adjacency (and Laplacian) matrix of a K2 graph are:

V_{K2} = [ −1 1
            1 1 ]

This means that for the eigenvectors of the cube’s adjacency matrix we have:

V = (1/√8) [ −1 1; 1 1 ] ⊗ [ −1 1; 1 1 ] ⊗ [ −1 1; 1 1 ]

  = [ −0.3536  0.3536  0.3536 −0.3536  0.3536 −0.3536 −0.3536  0.3536
       0.3536  0.3536 −0.3536 −0.3536 −0.3536 −0.3536  0.3536  0.3536
       0.3536 −0.3536  0.3536 −0.3536 −0.3536  0.3536 −0.3536  0.3536
      −0.3536 −0.3536 −0.3536 −0.3536  0.3536  0.3536  0.3536  0.3536
       0.3536 −0.3536 −0.3536  0.3536  0.3536 −0.3536 −0.3536  0.3536
      −0.3536 −0.3536  0.3536  0.3536 −0.3536 −0.3536  0.3536  0.3536
      −0.3536  0.3536 −0.3536  0.3536 −0.3536  0.3536 −0.3536  0.3536
       0.3536  0.3536  0.3536  0.3536  0.3536  0.3536  0.3536  0.3536 ]

The factor 1/√8 here is just a column normalization factor. We can again get from V to Ṽ with a linear combination matrix

1.0 0 0 0 0 0 0 0   0 −0.2748 −0.8917 −0.3596 0 0 0 0     0 0.8765 −0.0786 −0.4749 0 0 0 0     0 0 0 0 −0.2748 0.9162 −0.2917 0  T ˜ =   VV  0 0.3953 −0.4457 0.8032 0 0 0 0     0 0 0 0 0.8977 0.3532 0.2636 0     0 0 0 0 0.3445 −0.1894 −0.9195 0  0 0 0 0 0 0 0 −1.0

by Ṽ = V T_{VṼ}.

Similarly for the Laplacian matrix.

 3.0 −1.0 −1.0 0 −1.0 0 0 0  −1.0 3.0 0 −1.0 0 −1.0 0 0    −1.0 0 3.0 −1.0 0 0 −1.0 0     0 −1.0 −1.0 3.0 0 0 0 −1.0 L =   −1.0 0 0 0 3.0 −1.0 −1.0 0     0 −1.0 0 0 −1.0 3.0 0 −1.0    0 0 −1.0 0 −1.0 0 3.0 −1.0 0 0 0 −1.0 0 −1.0 −1.0 3.0

Its eigenvectors are computed:

Ũ = [ 0.3536 −0.0013 −0.3208 −0.5216 −0.1588 −0.0732  0.5869 −0.3536
      0.3536  0.5733 −0.0526 −0.2087  0.5715  0.2177 −0.0312  0.3536
      0.3536 −0.3432  0.2892 −0.4166 −0.029  −0.5396 −0.2882  0.3536
      0.3536  0.2314  0.5574 −0.1038 −0.3838  0.3951 −0.2675 −0.3536
      0.3536 −0.2314 −0.5574  0.1038 −0.3838  0.3951 −0.2675  0.3536
      0.3536  0.3432 −0.2892  0.4166 −0.029  −0.5396 −0.2882 −0.3536
      0.3536 −0.5733  0.0526  0.2087  0.5715  0.2177 −0.0312 −0.3536
      0.3536  0.0013  0.3208  0.5216 −0.1588 −0.0732  0.5869  0.3536 ]

with eigenvalues µ̃ = [0, 2.0, 2.0, 2.0, 4.0, 4.0, 4.0, 6.0]^T. The cube is a 3-regular graph, so the adjacency and Laplacian matrices will have the same eigenvectors. We can get to Ũ through V with a linear combination matrix

 0 0 0 0 0 0 0 1.0  0 0 0 0 0.5837 0.2043 0.7858 0     0 0 0 0 −0.2655 −0.8666 0.4225 0     0 −0.3254 −0.3345 0.8844 0 0 0 0  T ˜ =   VU  0 0 0 0 −0.7673 0.4552 0.4516 0     0 −0.4835 0.8627 0.1484 0 0 0 0     0 0.8126 0.3793 0.4425 0 0 0 0  1.0 0 0 0 0 0 0 0

such that Ũ = V T_{VŨ} holds. Since the hypercube is a k-regular graph, by (4.22) the eigenvectors are the same for the Laplacian and the adjacency matrices.

4.2.4 The Grid Graph

Finally, we will be looking at the grid graph. For the M × N grid, we have G_{M×N} = PM □ PN. Again, using Theorem 4 and (4.29), the adjacency matrix eigenvalues for this graph will be

λ_{l,k} = 2 cos(lπ/(M + 1)) + 2 cos(kπ/(N + 1)),  for l = 1, ..., M, k = 1, ..., N,    (4.38)

and the eigenvectors, as established in (4.30), are computed using the DST-I matrices:

V_G = V_{PM} ⊗ V_{PN},    (4.39)

where V_{PM} and V_{PN} are the DST-I matrices with entries sin(lkπ/(M + 1)) and sin(lkπ/(N + 1)), respectively.

V_G is a matrix consisting of blocks that form the basis of the 2D Discrete Sine Transform (Type I). As an example, Figure 4.7 shows the 256 × 256 image produced by the grid adjacency eigenvectors with the 16 × 16 DST basis images.

Figure 4.7: The 16x16 DST basis images

Recall the property of the Kronecker product that

(A ⊗ B)^{−1} = A^{−1} ⊗ B^{−1};

therefore, from the involutory DST-I matrices,

V_G^{−1} = (V_{PM} ⊗ V_{PN})^{−1} = V_{PM}^{−1} ⊗ V_{PN}^{−1} = V_{PM} ⊗ V_{PN} = V_G,    (4.40)

so the grid eigenvector matrix is involutory as well, which implies that the eigendecomposition of the adjacency matrix is

A_G = V_G Λ V_G.    (4.41)

Similarly, the Laplacian eigenvalues, using (4.31), are

µ_{l,k} = 2(1 − cos(πl/M)) + 2(1 − cos(πk/N)),  for 0 ≤ l < M, 0 ≤ k < N,    (4.42)

and the Laplacian eigenvectors, using the DCT-II:

U_G = U_{PM} ⊗ U_{PN},    (4.43)

where U_{PM} and U_{PN} are the normalized DCT-II matrices of orders M and N given by (4.32).

U_G is a matrix consisting of blocks that form the basis of the 2D Discrete Cosine Transform [24]. As an example, Figure 4.8 shows an image of the grid Laplacian eigenvectors with the 16 × 16 DCT basis images.

Figure 4.8: The 16x16 DCT basis images

Similar to the cube, the grid eigenvector formula can be extended to any N-dimensional grid. For the adjacency eigenvectors

V_G = V_{PM1} ⊗ V_{PM2} ⊗ ··· ⊗ V_{PMN},    (4.44)

which is the N-dimensional Discrete Sine Transform basis,

Figure 4.9: The G3,3 grid graph

and likewise for the Laplacian:

U_G = U_{PM1} ⊗ U_{PM2} ⊗ ··· ⊗ U_{PMN},    (4.45)

which is the N-dimensional Discrete Cosine Transform basis.
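The same kind of check works for the grid; a sketch for G3,3 that reproduces the Laplacian spectrum found in the experiment below (names our own):

    % Sketch: grid Laplacian eigenvectors as a Kronecker product of path DCT-II bases.
    M = 3;  N = 3;
    P  = @(n) diag(ones(n-1,1),1) + diag(ones(n-1,1),-1);   % path adjacency
    A  = kron(P(M), eye(N)) + kron(eye(M), P(N));           % 3x3 grid as Cartesian product
    Lg = diag(sum(A,2)) - A;
    U  = kron(dctmtx(M)', dctmtx(N)');                      % cf. (4.43)/(4.45)
    D  = U' * Lg * U;
    disp(norm(D - diag(diag(D))));                          % ~ 0
    disp(sort(diag(D))');                                   % 0, 1, 1, 2, 3, 3, 4, 4, 6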

Experimental Verification

The G3,3 grid graph is going to be used as an example.

 0 1.0 0 1.0 0 0 0 0 0  1.0 0 1.0 0 1.0 0 0 0 0     0 1.0 0 0 0 1.0 0 0 0    1.0 0 0 0 1.0 0 1.0 0 0    A =  0 1.0 0 1.0 0 1.0 0 1.0 0     0 0 1.0 0 1.0 0 0 0 1.0    0 0 0 1.0 0 0 0 1.0 0     0 0 0 0 1.0 0 1.0 0 1.0 0 0 0 0 0 1.0 0 1.0 0

We compute the eigenvectors of the adjacency matrix first:

42 −0.25 0 −0.5 −0.375 0.3062 0.375 −0.4944 −0.0746 0.25  0.3536 −0.3536 0.3536 −0.3536 0 −0.3536 −0.2968 −0.4024 0.3536   −0.25 0.5 0 0.3291 0.3979 −0.3291 0.0746 −0.4944 0.25    0.3536 0.3536 0.3536 0.3536 0 0.3536 −0.4024 0.2968 0.3536   V˜ =  −0.5 0 0 0.0459 −0.7041 −0.0459 0 0 0.5    0.3536 −0.3536 −0.3536 0.3536 0 0.3536 0.4024 −0.2968 0.3536   −0.25 −0.5 0 0.3291 0.3979 −0.3291 −0.0746 0.4944 0.25    0.3536 0.3536 −0.3536 −0.3536 0 −0.3536 0.2968 0.4024 0.3536 −0.25 0 0.5 −0.375 0.3062 0.375 0.4944 0.0746 0.25

−2.828 −1.414   −1.414    0    with eigenvalues λ˜ =  0     0     1.414     1.414  2.828 This grid is the cartesian graph product of two P3 path graphs. So we can get the theoretical V through the Kronecker product of the eigenvectors of the adjacency matrices of these path graphs :

 0.5 0.7071 0.5   0.5 0.7071 0.5  V = 0.7071 0 −0.7071 ⊗ 0.7071 0 −0.7071 0.5 −0.7071 0.5 0.5 −0.7071 0.5

 0.25 0.3536 0.25 0.3536 0.5 0.3536 0.25 0.3536 0.25  0.3536 0 −0.3536 0.5 0 −0.5 0.3536 0 −0.3536    0.25 −0.3536 0.25 0.3536 −0.5 0.3536 0.25 −0.3536 0.25    0.3536 0.5 0.3536 0 0 0 −0.3536 −0.5 −0.3536   =  0.5 0 −0.5 0 0 0 −0.5 0 0.5  .   0.3536 −0.5 0.3536 0 0 0 −0.3536 0.5 −0.3536    0.25 0.3536 0.25 −0.3536 −0.5 −0.3536 0.25 0.3536 0.25    0.3536 0 −0.3536 −0.5 0 0.5 0.3536 0 −0.3536 0.25 −0.3536 0.25 −0.3536 0.5 −0.3536 0.25 −0.3536 0.25

We can get to V˜ through V with the linear combination matrix

T_{VṼ} = [  0       0        0        0        0        0        0        0       1.0
            0       0        0        0        0        0       −0.8047   0.5936  0
            0       0        0        0.4541   0.7041   0.5459   0        0       0
            0       0        0        0        0        0       −0.5936  −0.8047  0
            0       0        0       −0.7041  −0.0918   0.7041   0        0       0
            0       0.7071  −0.7071   0        0        0        0        0       0
            0       0        0       −0.5459   0.7041  −0.4541   0        0       0
            0      −0.7071  −0.7071   0        0        0        0        0       0
           −1.0     0        0        0        0        0        0        0       0 ]

such that Ṽ = V T_{VṼ} holds. We do the same for the Laplacian matrix.

 2.0 −1.0 0 −1.0 0 0 0 0 0  −1.0 3.0 −1.0 0 −1.0 0 0 0 0     0 −1.0 2.0 0 0 −1.0 0 0 0    −1.0 0 0 3.0 −1.0 0 −1.0 0 0    L =  0 −1.0 0 −1.0 4.0 −1.0 0 −1.0 0     0 0 −1.0 0 −1.0 3.0 0 0 −1.0    0 0 0 −1.0 0 0 2.0 −1.0 0     0 0 0 0 −1.0 0 −1.0 3.0 −1.0 0 0 0 0 0 −1.0 0 −1.0 2.0

We compute the eigenvectors:

−0.3333 −0.5734 −0.0677 0.5 0.3164 −0.1049 0.1964 0.3579 −0.1667 −0.3333 −0.2528 −0.3205 0 −0.0008 0.527 −0.5543 −0.1615 0.3333    −0.3333 0.0677 −0.5734 −0.5 0.3164 −0.1049 0.3579 −0.1964 −0.1667   −0.3333 −0.3205 0.2528 0 −0.3156 −0.4221 0.1615 −0.5543 0.3333    U˜ = −0.3333 0 0 0 −0.6328 0.2099 0 0 −0.6667   −0.3333 0.3205 −0.2528 0 −0.3156 −0.4221 −0.1615 0.5543 0.3333    −0.3333 −0.0677 0.5734 −0.5 0.3164 −0.1049 −0.3579 0.1964 −0.1667   −0.3333 0.2528 0.3205 0 −0.0008 0.527 0.5543 0.1615 0.3333  −0.3333 0.5734 0.0677 0.5 0.3164 −0.1049 −0.1964 −0.3579 −0.1667

 0  1.0   1.0   2.0   with eigenvalues µ˜ = 3.0.   3.0   4.0   4.0 6.0

For the theoretical eigenvectors we follow the same Kronecker product procedure as before, this time with the eigenvectors of the Laplacians of the P3 paths:

0.5774 0.7071 0.4082  0.5774 0.7071 0.4082  U = 0.5774 0 −0.8165 ⊗ 0.5774 0 −0.8165 0.5774 −0.7071 0.4082 0.5774 −0.7071 0.4082

0.3333 0.4082 0.2357 0.4082 0.5 0.2887 0.2357 0.2887 0.1667  0.3333 0 −0.4714 0.4082 0 −0.5774 0.2357 0 −0.3333   0.3333 −0.4082 0.2357 0.4082 −0.5 0.2887 0.2357 −0.2887 0.1667    0.3333 0.4082 0.2357 0 0 0 −0.4714 −0.5774 −0.3333   = 0.3333 0 −0.4714 0 0 0 −0.4714 0 0.6667  .   0.3333 −0.4082 0.2357 0 0 0 −0.4714 0.5774 −0.3333   0.3333 0.4082 0.2357 −0.4082 −0.5 −0.2887 0.2357 0.2887 0.1667    0.3333 0 −0.4714 −0.4082 0 0.5774 0.2357 0 −0.3333 0.3333 −0.4082 0.2357 −0.4082 0.5 −0.2887 0.2357 −0.2887 0.1667

Again, we can obtain $\tilde{U}$ from $U$ through the linear combination matrix:

−1.0 0 0 0 0 0 0 0 0   0 −0.7851 0.6193 0 0 0 0 0 0     0 0 0 0 0.4485 −0.8938 0 0 0     0 −0.6193 −0.7851 0 0 0 0 0 0    T ˜ =  0 0 0 1.0 0 0 0 0 0  . UU    0 0 0 0 0 0 0.9601 0.2797 0     0 0 0 0 0.8938 0.4485 0 0 0     0 0 0 0 0 0 −0.2797 0.9601 0  0 0 0 0 0 0 0 0 −1.0

such that $\tilde{U} = U T_{U\tilde{U}}$ holds.
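As a numerical sanity check of these Kronecker constructions, the following sketch (assuming NumPy; the helper path_adjacency is ours, not part of the thesis code) builds the grid adjacency as a Cartesian product and verifies that the Kronecker product of the path eigenvectors diagonalizes it:

    import numpy as np

    def path_adjacency(n):
        # Adjacency matrix of the path graph P_n (ours, for illustration).
        A = np.zeros((n, n))
        for i in range(n - 1):
            A[i, i + 1] = A[i + 1, i] = 1
        return A

    # Cartesian product P3 x P3: A_grid = A (x) I + I (x) A
    A = path_adjacency(3)
    I = np.eye(3)
    A_grid = np.kron(A, I) + np.kron(I, A)

    # Kronecker product of the P3 eigenvectors, as in the text
    lam, V3 = np.linalg.eigh(A)
    V = np.kron(V3, V3)

    # V diagonalizes the grid adjacency; the diagonal holds all pairwise
    # sums of path eigenvalues (-2.828, -1.414, ..., 2.828)
    D = V.T @ A_grid @ V
    print(np.allclose(D, np.diag(np.diag(D))))  # True
    print(np.sort(np.diag(D)).round(3))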

4.3 Additional Remarks

4.3 Additional Remarks

We have seen that forming the product $V^T A V$ diagonalizes the matrix $A$ when the columns of $V$ are its orthonormal eigenvectors. When $V$ is related to a known $K$-dimensional transform, forming this product essentially amounts to applying the two-dimensional equivalent of that transform. This was especially easy to observe in the case of the path graph Laplacian matrix, where Matlab's dct2 command produced the Laplacian eigenvalues directly. Recall from (3.15) that when $Z = X \otimes Y$, applying $Z$ to the vectorized $A$ corresponds to forming $Y A X^T$. In the case of the eigenvectors, $X = V^T$ and $Y = V^T$, so we have

\[
Z = V^T \otimes V^T. \qquad (4.46)
\]

We can compute the action of such a $Z$ on an $N \times N$ matrix $A$ by sorting $A$ in lexicographical order into a vector $a \in \mathbb{C}^{N^2}$ and then applying $Z$ to $a$. Testing this indeed produces a vector whose components are zero except for the desired eigenvalues, and by sorting it back into the original $N \times N$ form we get the diagonalized $A$.
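A minimal sketch of this check in NumPy (our own illustration; the row-major reshape plays the role of the lexicographical ordering):

    import numpy as np

    np.random.seed(0)
    N = 4
    A = np.random.rand(N, N)
    A = (A + A.T) / 2                    # symmetric, so V is orthonormal
    lam, V = np.linalg.eigh(A)

    Z = np.kron(V.T, V.T)                # Z = V^T (x) V^T, as in (4.46)
    a = A.reshape(-1)                    # lexicographic ordering of A
    d = (Z @ a).reshape(N, N)            # sort back into N x N form

    # d is diagonal and carries the eigenvalues of A
    print(np.allclose(d, np.diag(lam)))  # True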

Chapter 5

Digital Signal Processing on Graphs

The idea of signal processing on a graph seems to date back to at least 1995, when the approach was used to filter signals defined on the vertices of surfaces. There, the eigenvectors were interpreted as the natural vibration modes of the surface and the corresponding eigenvalues as the natural frequencies [30]. More than a decade later we can see an example of graph based anisotropic diffusion in [33], where the implementation was a form of graph filtering. Furthermore, Püschel and Moura [46] [47] [48] introduced a framework for linear signal processing that is closely related to the publications which, a few years later, formalized the idea of signal processing on graphs [8].

5.1 The Graph Fourier Transform and Graph Filtering

Consider a graph $G$ whose (weighted) adjacency matrix $A$ is diagonalizable, with eigendecomposition

\[
A = V \Lambda V^T \qquad (5.1)
\]

where $V$ contains the eigenvectors and $\Lambda$ is the diagonal matrix of their respective eigenvalues. If we define a graph signal $s = [s_0, \ldots, s_{N-1}]^T$ on the vertices of $G$, then the Graph Fourier Transform is

\[
\hat{s} = [\hat{s}_0, \ldots, \hat{s}_{N-1}]^T = V^{-1} s = F s \qquad (5.2)
\]

where $F = V^{-1}$ is the graph Fourier transform matrix. The eigenvalues represent graph frequencies and the eigenvectors the graph frequency components [29]. The inverse Graph Fourier Transform is then defined as

\[
s = F^{-1} \hat{s} = V \hat{s}. \qquad (5.3)
\]
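As a small sketch of (5.1)-(5.3), assuming NumPy and a symmetric adjacency matrix (so that $V^{-1} = V^T$); the graph and signal here are toy examples of ours:

    import numpy as np

    # Ring graph on 4 vertices (symmetric adjacency)
    A = np.array([[0., 1., 0., 1.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [1., 0., 1., 0.]])
    s = np.array([1., -1., 1., -1.])   # a graph signal on the vertices

    lam, V = np.linalg.eigh(A)         # A = V Lambda V^T (5.1)
    s_hat = V.T @ s                    # forward GFT, F = V^{-1} = V^T (5.2)
    s_rec = V @ s_hat                  # inverse GFT (5.3)
    print(np.allclose(s_rec, s))       # True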

If $A$ is not diagonalizable, the Jordan decomposition can be used [29].

In DSP, a basic operation is the signal shift, which is a time delay of the signal. On a time series of length $N$ the time shift is $\tilde{s}_n = s_{(n-1) \bmod N}$. In vector notation the shifted signal is

\[
\tilde{s} = [\tilde{s}_0, \ldots, \tilde{s}_{N-1}]^T = Q s \qquad (5.4)
\]

where $Q$ is the permutation matrix defined in (3.20),

\[
Q = \begin{bmatrix}
0 & 0 & \cdots & 1 \\
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 0
\end{bmatrix}.
\]

In signal processing on graphs, the graph shift is defined as a local operation by the linear combination

\[
\tilde{s}_n = \sum_{m \in N_n} A_{n,m} s_m. \qquad (5.5)
\]

In matrix notation the above can be written as

\[
\tilde{s} = [\tilde{s}_0, \ldots, \tilde{s}_{N-1}]^T = A s. \qquad (5.6)
\]

Next, we look at the filtering operation on a graph signal. To compute the filtered signal $\tilde{s}$,

\[
\tilde{s} = h(A) s = h(F^{-1} \Lambda F) s = F^{-1} h(\Lambda) F s. \qquad (5.7)
\]

We have

\[
F \tilde{s} = h(\Lambda) \hat{s} \qquad (5.8)
\]

which means that $h(\lambda)$ modifies the spectrum of $A$ and is therefore the graph frequency response. The term spectral response may be used interchangeably for the rest of the analysis.

It should be noted, however, that other graph structures (Laplacian, normalized Laplacian) can be used for spectral analysis [33] [32] [31], and it is not always clear which is the optimal choice in every situation [21]. For notions of global smoothness, the discrete $p$-Dirichlet form of a signal $s$ is defined as

\[
S_p(s) = \frac{1}{p} \sum_{i \in V} \|\nabla_i s\|_2^p = \frac{1}{p} \sum_{i \in V} \Big[ \sum_{j \in N_i} W_{i,j} (s_j - s_i)^2 \Big]^{p/2} \qquad (5.9)
\]

where $W$ is the weighted adjacency matrix and $N_i$ the neighborhood of vertex $i$, i.e. all the vertices connected to $i$ by an edge. For $p = 1$ we have the total variation of the signal with respect to the graph. For $p = 2$ we have

\[
S_2(s) = s^T L s \qquad (5.10)
\]

which, as we have already mentioned, is the known Laplacian quadratic form. $S_2(s)$ is small when the signal $s$ has similar values at neighboring vertices connected by an edge with a large weight, i.e. when it is smooth [21].
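To make the smoothness measure concrete, here is a toy comparison (NumPy assumed; the graph and signals are ours) of $S_2(s) = s^T L s$ for a smooth and an oscillating signal on a path graph:

    import numpy as np

    # Path graph P4 and its combinatorial Laplacian L = D - A
    A = np.array([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
    L = np.diag(A.sum(axis=1)) - A

    smooth = np.array([1.0, 1.1, 1.2, 1.3])    # neighbors nearly equal
    rough = np.array([1.0, -1.0, 1.0, -1.0])   # alternating signs

    print(smooth @ L @ smooth)   # 0.03  -> smooth signal
    print(rough @ L @ rough)     # 12.0  -> highly oscillating signal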

5.1.1 Graph Products

As we have already observed in the graph spectra section, the spectrum of a graph product has some interesting properties that allow us to relate it to the spectra of simpler graphs. This can be utilized to optimize the computation of the graph Fourier transform. For example, if a graph $G_1$ on $N_1$ vertices can be expressed as the Cartesian graph product of two simpler graphs $G_2, G_3$ on $N_2, N_3$ vertices respectively, then the graph Fourier transform can be expressed as

\[
F_{G_1} = V_{G_1}^{-1} = (V_{G_2} \otimes V_{G_3})^{-1} \qquad (5.11)
\]

The Kronecker product can be vectorized [42], which provides a nice speedup for the transform. Computing the eigendecomposition for the graph $G_1$ transform would require $O(N_1^3)$ computations; via the Kronecker product it is instead $O(N_2^3 + N_3^3)$, which in a large scale problem can yield a substantial improvement in speed.
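A sketch of this saving (NumPy assumed; ring_adjacency is our helper): the transform of a product-graph signal is applied through the factor eigenvectors via a reshape, never forming the full $N_2 N_3 \times N_2 N_3$ matrix:

    import numpy as np

    def ring_adjacency(n):
        # Adjacency matrix of the cycle graph on n vertices (ours).
        A = np.zeros((n, n))
        for i in range(n):
            A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
        return A

    N2, N3 = 4, 5
    _, V2 = np.linalg.eigh(ring_adjacency(N2))
    _, V3 = np.linalg.eigh(ring_adjacency(N3))

    s = np.random.rand(N2 * N3)        # signal on the product graph
    S = s.reshape(N2, N3)              # row-major reshape

    # GFT through the factors: O(N2*N3*(N2+N3)) per application
    # instead of O((N2*N3)^2) for the explicit Kronecker matrix
    s_hat = (V2.T @ S @ V3).reshape(-1)
    print(np.allclose(s_hat, np.kron(V2, V3).T @ s))  # True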

5.1.2 The Bilateral Graph Filter

The bilateral filter is an edge-preserving image smoothing filter proposed by Tomasi and Manduchi [49]. Let $W$ be the weighted adjacency matrix with

\[
w_{ij} = \exp\Big(-\frac{\|p_i - p_j\|^2}{2\sigma_d^2}\Big) \exp\Big(-\frac{(x_{in}[i] - x_{in}[j])^2}{2\sigma_r^2}\Big) \qquad (5.12)
\]

where $x_{in}$ are pixel intensities and $p_i$ pixel positions. This definition uses both the Euclidean and the photometric distance, which implies that pixels that are close and do not differ in intensity have a higher similarity. If we define the degree matrix

\[
D_{jj} = \sum_i w_{ij} \qquad (5.13)
\]

then the filtering operation in matrix notation is defined as

\[
x_{out} = D^{-1} W x_{in} \qquad (5.14)
\]
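A direct, dense sketch of (5.12)-(5.14) in NumPy (the function name and toy image are ours; the all-pairs weight matrix is only practical for small images):

    import numpy as np

    def bilateral_graph_filter(img, sigma_d=1.0, sigma_r=1.0):
        # Build W from spatial and photometric distances (5.12),
        # then apply x_out = D^{-1} W x_in (5.14).
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w]
        p = np.stack([ys.ravel(), xs.ravel()], 1).astype(float)
        x = img.ravel().astype(float)

        d2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(-1)
        r2 = (x[:, None] - x[None, :]) ** 2
        W = np.exp(-d2 / (2 * sigma_d**2)) * np.exp(-r2 / (2 * sigma_r**2))

        deg = W.sum(axis=1)                    # D_jj as in (5.13)
        return ((W @ x) / deg).reshape(h, w)   # D^{-1} W x_in

    img = np.random.rand(8, 8)                 # toy image
    print(bilateral_graph_filter(img).shape)   # (8, 8)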

In this framework we are going to use the normalized Laplacian, which is defined as

\[
\mathcal{L} = D^{-1/2} L D^{-1/2} \qquad (5.15)
\]

For the next few steps, $L$ will stand for the normalized Laplacian.

Substituting $L = D - W$ into (5.14) and then using the normalized Laplacian, we get

\[
D^{1/2} x_{out} = (I - L) D^{1/2} x_{in} \qquad (5.16)
\]

If we set

\[
\hat{x}_{in} = D^{1/2} x_{in}, \qquad \hat{x}_{out} = D^{1/2} x_{out}
\]

then, considering $L$'s eigendecomposition $L = U \Lambda U^T$, (5.16) gives us

\[
\hat{x}_{out} = \underbrace{U}_{\text{inverse GFT}} \; \underbrace{(I - \Lambda)}_{h(\Lambda)} \; \underbrace{U^T}_{\text{GFT}} \; \hat{x}_{in} \qquad (5.17)
\]

This essentially means that we have a filter with linear graph frequency response $h(\lambda) = 1 - \lambda$. Filters with polynomial graph frequency response can also be implemented. In terms of polynomial filtering, one of the proposed approaches is the use of Chebyshev polynomials [32]. A common feature of polynomial filters that makes them computationally attractive is that they can be written in terms of the Laplacian $L$ and therefore do not require explicit computation of the eigendecomposition of $L$. In general the filtering operation is denoted as

\[
x_{out} = h(L) x_{in}. \qquad (5.18)
\]

In cases such as graph based anisotropic diffusion, where the heat kernel $e^{tL}$ is required, computing the action of the kernel via eigendecomposition becomes intractable. The action can be computed without explicit computation of the eigendecomposition using numerical methods based on the Krylov subspace [33].
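Returning to the polynomial responses mentioned above, here is a minimal sketch (NumPy assumed; a plain power basis for brevity, rather than the Chebyshev basis of [32]) of filtering through matrix-vector products alone:

    import numpy as np

    def polynomial_filter(L, x, coeffs):
        # x_out = h(L) x for h(t) = c0 + c1*t + ... + ck*t^k,
        # using matrix-vector products only (no eigendecomposition).
        out = np.zeros_like(x)
        v = x.copy()
        for c in coeffs:
            out += c * v
            v = L @ v
        return out

    # Small random graph Laplacian and signal (toy example)
    rng = np.random.default_rng(0)
    A = np.triu((rng.random((6, 6)) > 0.5).astype(float), 1)
    A = A + A.T
    L = np.diag(A.sum(axis=1)) - A
    x = rng.random(6)
    print(polynomial_filter(L, x, [1.0, -1.0, 0.25]))  # h(t) = 1 - t + 0.25 t^2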

5.2 Experiments

5.2.1 Graph Image Filtering

Modifying (5.16) with extra factors $a, b$ it becomes

\[
D^{1/2} x_{out} = (bI - aL) D^{1/2} x_{in} \qquad (5.19)
\]

which gives a graph frequency response $h(\lambda) = b - a\lambda$. For positive values of $a$ this is a low pass filter; for negative values of $a$ we get a high pass filter. We are going to use formula (5.19) for image filtering, with (5.12) as the graph representation of our choice and $\sigma_d^2 = \sigma_r^2 = 1$.
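A sketch of (5.19) as code (NumPy assumed; graph_image_filter and the toy weight matrix are ours):

    import numpy as np

    def graph_image_filter(W, x_in, a, b):
        # D^{1/2} x_out = (b I - a L) D^{1/2} x_in, with L the
        # normalized Laplacian, i.e. frequency response h(l) = b - a*l.
        deg = W.sum(axis=1)
        D_is = np.diag(1.0 / np.sqrt(deg))
        L = np.eye(len(deg)) - D_is @ W @ D_is      # normalized Laplacian
        x_hat = np.sqrt(deg) * x_in                 # D^{1/2} x_in
        y_hat = b * x_hat - a * (L @ x_hat)         # (b I - a L) x_hat
        return y_hat / np.sqrt(deg)                 # recover x_out

    # Toy weights as (5.12) would produce for a 4-pixel image
    W = np.array([[0. , 0.8, 0.1, 0.1],
                  [0.8, 0. , 0.7, 0.1],
                  [0.1, 0.7, 0. , 0.9],
                  [0.1, 0.1, 0.9, 0. ]])
    x = np.array([0.2, 0.3, 0.9, 1.0])
    print(graph_image_filter(W, x, a=1.0, b=1.0))   # h(l) = 1 - l, low pass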

Figure 5.1: Original Image

Figure 5.2: (a) Low pass filtered image for a = -1 and b = 1. (b) Graph frequency response of the low pass filter.

Figure 5.3: (a) High pass filtered image for a = 10 and b = -0.5. (b) Graph frequency response of the high pass filter.

Results

Recall that the Laplacian quadratic form of a signal, $S_2(s) = s^T L s$, provides a measure of global smoothness; lower values imply a smoother signal on the graph. The table below lists the Laplacian quadratic forms before and after filtering the graph with a low pass and a high pass filter.

    Filter      S2(s)
    No Filter   350.0593
    Low Pass    162.3993
    High Pass   1645.1731

After the filtering, the quadratic form was calculated using the Laplacian of the new, filtered graph. As expected, the smoothing filter reduced the value of $S_2(s)$ while the high pass one increased it dramatically. The parameters of the high pass filter were chosen not to simply sharpen the image, but to make it function as an edge detector. A high pass filter will also enhance noise; this is reflected in the value of the quadratic form.

5.2.2 Spectral clustering

The relationship between graph signal filtering and spectral clustering has been pointed out in [33] in the context of graph based anisotropic diffusion. Here we perform experiments that focus on the effects of the Laplacian eigenvalues $0 = \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_N$ on the graph structure. In each experiment, a certain eigenvalue is selected and set to zero. The Laplacian is then reconstructed using this new set of eigenvalues and a graph is drawn based on this new Laplacian. This is achieved by thresholding the entries, setting $l_{ij} = 0$ if $l_{ij} > -1$, which implies that only edges whose reconstructed Laplacian entries are at most $-1$ (i.e. whose weight is at least one) are kept in the adjacency matrix. The adjacency matrix used is not weighted.
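The following sketch (NumPy assumed; the toy graph, helper name, and zero-based index are ours) mirrors one reconstruction step: zero an eigenvalue, rebuild the Laplacian, and threshold its entries as described above:

    import numpy as np

    def zero_eigenvalue_graph(L, k):
        # Zero the k-th smallest Laplacian eigenvalue (k is zero-based,
        # so k=1 targets lambda_2), rebuild L, and keep an edge only
        # where the reconstructed entry satisfies l_ij <= -1.
        lam, U = np.linalg.eigh(L)
        lam[k] = 0.0
        L_new = U @ np.diag(lam) @ U.T
        A_new = (L_new <= -1).astype(int)
        np.fill_diagonal(A_new, 0)
        return A_new

    # Two triangles joined by a bridge edge (a toy "bimodal" graph)
    A = np.array([[0, 1, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]])
    L = np.diag(A.sum(axis=1)) - A
    # Expect the bridge edge to drop, bipartitioning the graph
    print(zero_eigenvalue_graph(L, k=1))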


(a) Original Graph (b) Graph after λ2 = 0

Figure 5.4: Custom bimodal graph


Figure 5.5: Ring Graph


Figure 5.6: Ring graph after setting (a) λ2 = 0 and (b) λ4 = 0

53 20

18

16

14

12

10

8

6

4

2

0 0 2 4 6 8 10 12 14 16 18 20

Figure 5.7: 20 × 20 Grid graph


Figure 5.8: Grid graph after setting (a) λ2 = 0 and (b) λ10 = 0

Results

The second smallest Laplacian eigenvalue λ2 is related to the connectivity of the graph. In the case of the grid, as well as the custom graph of figure 5.4, reconstructing the Laplacian after λ2 has been set to zero results in a bipartition of the original graph, while the ring simply decays into a path graph. In general, setting larger eigenvalues to zero tends to result in k-partitions of the original graph with larger k, as can be seen in figures 5.8b and 5.6b.

5.2.3 Graph Filtering

The experimental setup for filtering was as follows. The nodes are initialized to the values 1 and −1. The graph is formed using (5.12) with $\sigma_d^2 = 2$ and $\sigma_r^2 = 1.5$. We then perform linear graph frequency response filtering, both high pass and low pass, using the eigenvalues of the Laplacian. In the plots, orange dots represent positive values and blue dots negative ones; the size of each dot is proportional to the absolute value at that node.

Linear Spectral Response

The filters were based on a linear spectral response

\[
h(\lambda) = b + a\lambda. \qquad (5.20)
\]
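A sketch of this experiment (NumPy assumed; the random point cloud and seed are ours) that filters a ±1 node signal with the linear response (5.20) through the Laplacian eigendecomposition:

    import numpy as np

    def linear_spectral_filter(L, s, a, b):
        # s_out = U h(Lambda) U^T s with h(l) = b + a*l  (5.20)
        lam, U = np.linalg.eigh(L)
        return U @ ((b + a * lam) * (U.T @ s))

    rng = np.random.default_rng(1)
    pts = rng.random((10, 2))                  # node positions
    s = np.sign(rng.random(10) - 0.5)          # nodes initialized to +-1

    # Weights as in (5.12), with sigma_d^2 = 2 and sigma_r^2 = 1.5
    d2 = ((pts[:, None] - pts[None, :]) ** 2).sum(-1)
    r2 = (s[:, None] - s[None, :]) ** 2
    W = np.exp(-d2 / (2 * 2.0)) * np.exp(-r2 / (2 * 1.5))
    np.fill_diagonal(W, 0)
    L = np.diag(W.sum(axis=1)) - W

    print(linear_spectral_filter(L, s, a=-2.75, b=1.0))  # low pass, cf. Fig. 5.10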


Figure 5.9: Original Graph


(a) Filtered graph (b) Spectral response

Figure 5.10: Low pass filter for a = −2.75, b = 1


(a) Filtered graph (b) Spectral response

Figure 5.11: Filter for a = −8, b = 1


(a) Filtered graph (b) Spectral response

Figure 5.12: High pass filter for a = 2, b = 0


(a) Filtered graph (b) Spectral response

Figure 5.13: Filter for a = 8, b = −1

Results

The nodes that are connected to both negatively and positively signed nodes are amplified in figure 5.12 and attenuated in figure 5.10, respectively. This suggests that these nodes behave like the high frequency components of the graph. On the other hand, the nodes connected only to nodes of the same sign behave like the low frequency components of the graph. Also interesting are the effects of the differently signed spectral responses. For example, mapping the low frequency components to negative values results in a sign flip of the low frequency nodes, as we can see in figure 5.13. Similarly, choosing a spectral response that maps the high frequency components to large negative values results in a sign flip of the high frequency nodes, as we can see in figure 5.11.

5.2.4 Concluding remarks

In the case of the graph image filtering experiments, it is significantly easier to arrive at a solid interpretation of the results: it was easier to identify the low and high frequency operations and achieve the desired filters. In the case of random graphs, however, finding a way to meaningfully interpret or visualize the local or global character of the filtering was a harder task. This is in part due to the inherent difficulty of selecting an appropriate graph structure to represent the data. For the bilateral filter, the graph spectral interpretation comes quite naturally and its effects on the image are easy to verify visually. The same cannot be said for the tests that were conducted on the random graphs.

There are important open questions that need to be addressed. Finding the right graph representation, so that the transform and filtering operations are meaningful and effective, is one of them. In terms of computation, finding ways to improve the speed of the transforms is a vital issue: in large scale graphs it is extremely demanding to compute the eigendecomposition of the corresponding matrix. Utilizing graph products is one of the potential directions for attacking that problem, and Krylov subspace methods have also been suggested for certain filtering operations. Nevertheless, it is clear that this framework can be applied to a large number of situations and allows a fair amount of flexibility in the choice of parameters and the angle from which any given problem can be tackled.

Bibliography

[1] Chen, Wen-Yen, et al. "Parallel spectral clustering in distributed systems." IEEE Transactions on Pattern Analysis and Machine Intelligence 33.3 (2011): 568-586.

[2] Boccaletti, Stefano, et al. "Complex networks: Structure and dynamics." Physics Reports 424.4 (2006): 175-308.

[3] Filippone, Maurizio, et al. "A survey of kernel and spectral methods for clustering." Pattern Recognition 41.1 (2008): 176-190.

[4] Newman, M. E. J. "Spectral methods for community detection and graph partitioning." Physical Review E 88.4 (2013): 042822.

[5] Brouwer, Andries E., and Willem H. Haemers. Spectra of Graphs. Springer Science and Business Media, 2011.

[6] Tee, Garry J. "Eigenvectors of block circulant and alternating circulant matrices." (2005).

[7] Gray, Robert M. "Toeplitz and circulant matrices: A review." Foundations and Trends in Communications and Information Theory 2.3 (2005): 155-239.

[8] Sandryhaila, Aliaksei, and José M. F. Moura. "Discrete signal processing on graphs." IEEE Transactions on Signal Processing 61 (2013): 1644-1656.

[9] Merris, Russell. "Laplacian matrices of graphs: a survey." Linear Algebra and its Applications 197 (1994): 143-176.

[10] Hammack, Richard, Wilfried Imrich, and Sandi Klavžar. Handbook of Product Graphs. CRC Press, 2011.

[11] Noschese, Silvia, Lionello Pasquini, and Lothar Reichel. "Tridiagonal Toeplitz matrices: properties and novel applications." Numerical Linear Algebra with Applications 20.2 (2013): 302-326.

[12] Horn, Roger A., and Charles R. Johnson. Topics in Matrix Analysis. Cambridge University Press, New York, 1991.

[13] Kwak, Haewoon, et al. "What is Twitter, a social network or a news media?" Proceedings of the 19th International Conference on World Wide Web. ACM, 2010.

[14] Agichtein, Eugene, et al. "Finding high-quality content in social media." Proceedings of the 2008 International Conference on Web Search and Data Mining. ACM, 2008.

[15] Ohtsuki, Hisashi, et al. "A simple rule for the evolution of cooperation on graphs and social networks." Nature 441.7092 (2006): 502-505.

[16] Tu, Zhuowen, et al. "Image parsing: Unifying segmentation, detection, and recognition." International Journal of Computer Vision 63.2 (2005): 113-140.

[17] Boykov, Yuri, and Olga Veksler. "Graph cuts in vision and graphics: Theories and applications." Handbook of Mathematical Models in Computer Vision. Springer US, 2006. 79-96.

[18] Mason, Oliver, and Mark Verwoerd. "Graph theory and networks in biology." IET Systems Biology 1.2 (2007): 89-119.

[19] Franceschet, Massimo. "PageRank: Standing on the shoulders of giants." Communications of the ACM 54.6 (2011): 92-101.

[20] White, Scott, and Padhraic Smyth. "A spectral clustering approach to finding communities in graphs." SDM. Vol. 5. 2005.

[21] Shuman, David I., et al. "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains." IEEE Signal Processing Magazine 30.3 (2013): 83-98.

[22] Shuman, David I., Benjamin Ricaud, and Pierre Vandergheynst. "Vertex-frequency analysis on graphs." Applied and Computational Harmonic Analysis (2015).

[23] Britanak, Vladimir, Patrick C. Yip, and Kamisetty Ramamohan Rao. Discrete Cosine and Sine Transforms: General Properties, Fast Algorithms and Integer Approximations. Academic Press, 2010.

[24] Watson, Andrew B. "Image compression using the discrete cosine transform." Mathematica Journal 4.1 (1994): 81.

[25] Xiaofei He (2010) Laplacianfaces. Scholarpedia, 5(8):9324.

[26] Strang, Gilbert. "The discrete cosine transform." SIAM Review 41.1 (1999): 135-147.

[27] Cvetković, Dragoš, and Slobodan Simić. "Graph spectra in computer science." Linear Algebra and its Applications 434.6 (2011): 1545-1562.

[28] Elspas, Bernard, and James Turner. "Graphs with circulant adjacency matrices." Journal of Combinatorial Theory 9.3 (1970): 297-307.

[29] Sandryhaila, Aliaksei, and José M. F. Moura. "Big data analysis with signal processing on graphs: Representation and processing of massive data sets with irregular structure." IEEE Signal Processing Magazine 31.5 (2014): 80-90.

[30] Taubin, Gabriel. "A signal processing approach to fair surface design." Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1995.

[31] Gadde, Akshay, Sunil K. Narang, and Antonio Ortega. "Bilateral filter: Graph spectral interpretation and extensions." Image Processing (ICIP), 2013 20th IEEE International Conference on. IEEE, 2013.

[32] Tian, Dong, et al. "Chebyshev and conjugate gradient filters for graph image denoising." Multimedia and Expo Workshops (ICMEW), 2014 IEEE International Conference on. IEEE, 2014.

[33] Zhang, Fan, and Edwin R. Hancock. "Graph spectral image smoothing using the heat kernel." Pattern Recognition 41.11 (2008): 3328-3342.

[34] Hochbruck, Marlis, and Christian Lubich. "On Krylov subspace approximations to the matrix exponential operator." SIAM Journal on Numerical Analysis 34.5 (1997): 1911-1925.

[35] Meyer, Carl D. Matrix Analysis and Applied Linear Algebra. SIAM, 2000.

[36] Bäckström, Tom. Linear Predictive Modelling of Speech: Constraints and Line Spectrum Pair Decomposition. Helsinki University of Technology, 2004.

[37] Ng, Michael K. Iterative Methods for Toeplitz Systems. Oxford University Press, 2004.

[38] An, Myoung, and Chao Lu. Mathematics of Multidimensional Fourier Transform Algorithms. Springer Science and Business Media, 2012.

[39] Kimitei, Symon Kipyagwai. "Algorithms for Toeplitz matrices with applications to image deblurring." (2008).

[40] Godsil, Chris, and Gordon F. Royle. Algebraic Graph Theory. Vol. 207. Springer Science and Business Media, 2013.

[41] Gkantsidis, Christos, Milena Mihail, and Ellen Zegura. "Spectral analysis of Internet topologies." INFOCOM 2003. Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies. Vol. 1. IEEE, 2003.

[42] Franchetti, Franz, et al. "Discrete Fourier transform on multicore." IEEE Signal Processing Magazine 26.6 (2009): 90-102.

[43] Zumstein, Philipp. Comparison of Spectral Methods Through the Adjacency Matrix and the Laplacian of a Graph. Diploma Thesis, ETH Zürich, 2005.

[44] Fiedler, Miroslav. "Algebraic connectivity of graphs." Czechoslovak Mathematical Journal 23.2 (1973): 298-305.

[45] Ding, Chris H. Q., et al. "A min-max cut algorithm for graph partitioning and data clustering." Data Mining, 2001. ICDM 2001, Proceedings IEEE International Conference on. IEEE, 2001.

[46] Püschel, Markus, and José M. F. Moura. "Algebraic signal processing theory: 1-D space." IEEE Transactions on Signal Processing 56.8 (2008): 3586-3599.

[47] Püschel, Markus, and José M. F. Moura. "Algebraic signal processing theory: Foundation and 1-D time." IEEE Transactions on Signal Processing 56.8 (2008): 3572-3585.

[48] Püschel, Markus, and José M. F. Moura. "Algebraic signal processing theory: Cooley-Tukey type algorithms for DCTs and DSTs." IEEE Transactions on Signal Processing 56.4 (2008): 1502-1521.

[49] Tomasi, Carlo, and Roberto Manduchi. "Bilateral filtering for gray and color images." Computer Vision, 1998. Sixth International Conference on. IEEE, 1998.

Appendices

Appendix A

Eigenvalue and Eigenvector plots

The eigenvectors are sorted according to their corresponding eigenvalues, from smallest to largest.

A.1 Ring R8

A.1.1 Adjacency Spectrum

Figure A.1: Adjacency eigenvalues of the ring.
Figures A.2-A.9: Adjacency eigenvectors 1-8 of the ring.

A.1.2 Laplacian Spectrum

Figure A.10: Laplacian eigenvalues of the ring.
Figures A.11-A.18: Laplacian eigenvectors 1-8 of the ring.

A.2 Circulant C8^{1,2}

A.2.1 Adjacency Spectrum

Figure A.19: Adjacency eigenvalues of C8^{1,2}.
Figures A.20-A.27: Adjacency eigenvectors 1-8 of C8^{1,2}.

A.2.2 Laplacian Spectrum

Figure A.28: Laplacian eigenvalues of C8^{1,2}.
Figures A.29-A.36: Laplacian eigenvectors 1-8 of C8^{1,2}.

A.3 Circulant C8^{1,2,3,4}

A.3.1 Adjacency Spectrum

Figure A.37: Adjacency eigenvalues of C8^{1,2,3,4}.
Figures A.38-A.45: Adjacency eigenvectors 1-8 of C8^{1,2,3,4}.

A.3.2 Laplacian Spectrum

Figure A.46: Laplacian eigenvalues of C8^{1,2,3,4}.
Figures A.47-A.54: Laplacian eigenvectors 1-8 of C8^{1,2,3,4}.

A.4 Path P8

A.4.1 Adjacency Spectrum

Figure A.55: Adjacency eigenvalues of P8.
Figures A.56-A.63: Adjacency eigenvectors 1-8 of P8.

A.4.2 Laplacian Spectrum

Figure A.64: Laplacian eigenvalues of P8.
Figures A.65-A.72: Laplacian eigenvectors 1-8 of P8.

A.5 Hypercube Q3

A.5.1 Adjacency Spectrum

Figure A.73: Adjacency eigenvalues of Q3.
Figures A.74-A.81: Adjacency eigenvectors 1-8 of Q3.

A.5.2 Laplacian Spectrum

Figure A.82: Laplacian eigenvalues of Q3.
Figures A.83-A.90: Laplacian eigenvectors 1-8 of Q3.

A.6 Grid G3×3

A.6.1 Adjacency Spectrum

Figure A.91: Adjacency eigenvalues of G3×3.
Figures A.92-A.100: Adjacency eigenvectors 1-9 of G3×3.

A.6.2 Laplacian Spectrum

Figure A.101: Laplacian eigenvalues of G3×3.
Figures A.102-A.110: Laplacian eigenvectors 1-9 of G3×3.