Lecture 7: Normalized Adjacency and Laplacian Matrices


ORIE 6334 Spectral Graph Theory — September 13, 2016
Lecture 7
Lecturer: David P. Williamson    Scribe: Sam Gutekunst

In this lecture, we introduce normalized adjacency and Laplacian matrices. We state and begin to prove Cheeger's inequality, which relates the second eigenvalue of the normalized Laplacian matrix to a graph's connectivity. Before stating the inequality, we also define three related measures of the expansion properties of a graph: conductance, (edge) expansion, and sparsity.

1  Normalized Adjacency and Laplacian Matrices

We use notation from Lap Chi Lau. (This lecture is derived from Lau's 2012 notes, Week 2, http://appsrv.cse.cuhk.edu.hk/~chi/csc5160/notes/L02.pdf.)

Definition 1  The normalized adjacency matrix is $\mathcal{A} \equiv D^{-1/2} A D^{-1/2}$, where $A$ is the adjacency matrix of $G$ and $D = \mathrm{diag}(d)$ for $d(i)$ the degree of node $i$.

For a graph $G$ (with no isolated vertices), we can see that
$$D^{-1/2} = \begin{pmatrix} \frac{1}{\sqrt{d(1)}} & 0 & \cdots & 0 \\ 0 & \frac{1}{\sqrt{d(2)}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{\sqrt{d(n)}} \end{pmatrix}.$$

Definition 2  The normalized Laplacian matrix is $\mathcal{L} \equiv I - \mathcal{A}$.

Notice that $\mathcal{L} = I - \mathcal{A} = D^{-1/2}(D - A)D^{-1/2} = D^{-1/2} L_G D^{-1/2}$, for $L_G$ the (unnormalized) Laplacian.

Recall that for the largest eigenvalue $\lambda$ of $A$ and $\Delta$ the maximum degree of a vertex in a graph, $d_{\mathrm{avg}} \leq \lambda \leq \Delta$. "Normalizing" the adjacency matrix makes its largest eigenvalue 1, so the analogous result for normalized matrices is the following:

Claim 1  Let $\alpha_1 \geq \cdots \geq \alpha_n$ be the eigenvalues of $\mathcal{A}$ and let $\lambda_1 \leq \cdots \leq \lambda_n$ be the eigenvalues of $\mathcal{L}$. Then
$$1 = \alpha_1 \geq \cdots \geq \alpha_n \geq -1, \qquad 0 = \lambda_1 \leq \cdots \leq \lambda_n \leq 2.$$

Proof:  First, we show that 0 is an eigenvalue of $\mathcal{L}$ using the vector $x = D^{1/2}e$:
$$\mathcal{L}(D^{1/2}e) = D^{-1/2} L_G D^{-1/2} D^{1/2} e = D^{-1/2} L_G e = 0,$$
since $e$ is an eigenvector of $L_G$ corresponding to eigenvalue 0. This shows that $D^{1/2}e$ is an eigenvector of $\mathcal{L}$ with eigenvalue 0.
To show that it is the smallest eigenvalue, notice that $\mathcal{L}$ is positive semidefinite¹, since for any $x \in \mathbb{R}^n$:
$$x^T \mathcal{L} x = x^T (I - \mathcal{A}) x = \sum_{i \in V} x(i)^2 - \sum_{(i,j) \in E} \frac{2\,x(i)\,x(j)}{\sqrt{d(i)d(j)}} = \sum_{(i,j) \in E} \left( \frac{x(i)}{\sqrt{d(i)}} - \frac{x(j)}{\sqrt{d(j)}} \right)^2 \geq 0.$$
The last equality can be seen "in reverse" by expanding $\left( \frac{x(i)}{\sqrt{d(i)}} - \frac{x(j)}{\sqrt{d(j)}} \right)^2$. We have now shown that $\mathcal{L}$ has nonnegative eigenvalues, so indeed $\lambda_1 = 0$.

To show that $\alpha_1 \leq 1$, we make use of the positive semidefiniteness of $\mathcal{L} = I - \mathcal{A}$. This gives us that, for all $x \in \mathbb{R}^n$:
$$x^T (I - \mathcal{A}) x \geq 0 \implies x^T x - x^T \mathcal{A} x \geq 0 \implies 1 \geq \frac{x^T \mathcal{A} x}{x^T x}. \tag{1}$$
This Rayleigh quotient gives us the upper bound $\alpha_1 \leq 1$. To get equality, consider again $x = D^{1/2}e$. For this $x$, $x^T \mathcal{L} x = 0 \implies x^T (I - \mathcal{A}) x = 0$. The exact same steps as in Equation (1) yield $\frac{x^T \mathcal{A} x}{x^T x} = 1$, as we now have equality.

To get a similar lower bound on $\alpha_n$, we can show that $I + \mathcal{A}$ is positive semidefinite using a similar sum expansion². Then
$$x^T (I + \mathcal{A}) x \geq 0 \implies x^T x + x^T \mathcal{A} x \geq 0 \implies \frac{x^T \mathcal{A} x}{x^T x} \geq -1 \implies \alpha_n \geq -1.$$

Finally, notice that $x^T (I + \mathcal{A}) x \geq 0$ implies the following chain:
$$-x^T \mathcal{A} x \leq x^T x \implies x^T I x - x^T \mathcal{A} x \leq 2\,x^T x \implies \frac{x^T \mathcal{L} x}{x^T x} \leq 2 \implies \lambda_n \leq 2,$$
using the same Rayleigh quotient trick and that $\lambda_n$ is the maximizer of that quotient.  □

¹A slick proof that does not make use of this quadratic form is to use the fact that $L_G$ is positive semidefinite. Thus $L_G = BB^T$ for some $B$, so that $\mathcal{L} = VV^T$ for $V = D^{-1/2}B$.

²This time, use
$$x^T (I + \mathcal{A}) x = \sum_{i \in V} x(i)^2 + \sum_{(i,j) \in E} \frac{2\,x(i)\,x(j)}{\sqrt{d(i)d(j)}} = \sum_{(i,j) \in E} \left( \frac{x(i)}{\sqrt{d(i)}} + \frac{x(j)}{\sqrt{d(j)}} \right)^2 \geq 0.$$

Remark 1  Notice that, given the spectrum of $\mathcal{A}$, we have the following: $-\mathcal{A}$ has as its spectrum the negatives of the spectrum of $\mathcal{A}$, and $I - \mathcal{A}$ adds one to each eigenvalue of $-\mathcal{A}$. Hence, $0 = \lambda_1 \leq \cdots \leq \lambda_n \leq 2$ follows directly from $1 = \alpha_1 \geq \cdots \geq \alpha_n \geq -1$.

2  Connectivity and $\lambda_2(\mathcal{L})$

Recall that $\lambda_2(L_G) = 0$ if and only if $G$ is disconnected. The same is true for $\lambda_2(\mathcal{L})$, and we can say more!
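As a quick numerical sanity check of Claim 1, Remark 1, and the connectivity fact just recalled, the sketch below builds $\mathcal{A}$ and $\mathcal{L}$ for two small graphs of our own choosing (a path and a disconnected pair of edges); the helper name is ours, not the lecture's.

```python
import numpy as np

def normalized_matrices(A):
    """Return (cal_A, cal_L) = (D^{-1/2} A D^{-1/2}, I - cal_A)."""
    d = A.sum(axis=1)
    Dm = np.diag(1.0 / np.sqrt(d))
    cal_A = Dm @ A @ Dm
    return cal_A, np.eye(len(A)) - cal_A

# Path on 4 vertices: connected.
P4 = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
cal_A, cal_L = normalized_matrices(P4)
d = P4.sum(axis=1)

# D^{1/2} e is an eigenvector of cal_L with eigenvalue 0.
assert np.allclose(cal_L @ np.sqrt(d), 0)

alphas = np.sort(np.linalg.eigvalsh(cal_A))[::-1]  # alpha_1 >= ... >= alpha_n
lams = np.sort(np.linalg.eigvalsh(cal_L))          # lambda_1 <= ... <= lambda_n
assert np.isclose(alphas[0], 1) and alphas[-1] >= -1 - 1e-9   # Claim 1
assert np.isclose(lams[0], 0) and lams[-1] <= 2 + 1e-9        # Claim 1
assert np.allclose(lams, 1 - alphas)                          # Remark 1

# Disconnected graph (two disjoint edges): lambda_2(cal_L) = 0.
G2 = np.array([[0, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=float)
_, cal_L2 = normalized_matrices(G2)
assert np.isclose(np.sort(np.linalg.eigvalsh(cal_L2))[1], 0)
```

Note that the path is bipartite, so $\alpha_n = -1$ and $\lambda_n = 2$ are attained exactly, matching the extremes allowed by Claim 1.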
2.1  Flavors of Connectivity

Let $S \subset V$. Recall that $\delta(S)$ denotes the set of edges with exactly one endpoint in $S$, and define $\mathrm{vol}(S) \equiv \sum_{i \in S} d(i)$.

Definition 3  The conductance of $S \subset V$ is
$$\phi(S) \equiv \frac{|\delta(S)|}{\min\{\mathrm{vol}(S), \mathrm{vol}(V - S)\}}.$$
The edge expansion of $S$ is
$$\alpha(S) \equiv \frac{|\delta(S)|}{|S|}, \quad \text{for } |S| \leq \frac{n}{2}.$$
The sparsity of $S$ is
$$\rho(S) \equiv \frac{|\delta(S)|}{|S|\,|V - S|}.$$

These measures are similar if $G$ is $d$-regular (i.e., $d(i) = d$ for all $i \in V$). In this case,
$$\alpha(S) = d\,\phi(S), \qquad \frac{n}{2}\,\rho(S) \leq \alpha(S) \leq n\,\rho(S).$$
To see the first equality, for example, notice that the volume of $S$ is $d|S|$. In general, notice that $0 \leq \phi(S) \leq 1$ for all $S \subset V$. We're usually interested in finding the sets $S$ that minimize these quantities over the entire graph.

Definition 4  We define
$$\phi(G) \equiv \min_{S \subset V} \phi(S), \qquad \alpha(G) \equiv \min_{S \subset V : |S| \leq n/2} \alpha(S), \qquad \rho(G) \equiv \min_{S \subset V} \rho(S).$$

We call a graph $G$ an expander if $\phi(G)$ (or $\alpha(G)$) is "large" (i.e., a constant³). Otherwise, we say that $G$ has a sparse cut. One algorithm for finding a sparse cut that works well in practice, but that lacks strong theoretical guarantees, is called spectral partitioning.

Algorithm 1: Spectral Partitioning
1. Compute $x_2$, the eigenvector of $\mathcal{L}$ corresponding to $\lambda_2(\mathcal{L})$.
2. Sort $V$ so that $x_2(1) \leq \cdots \leq x_2(n)$.
3. Define the sweep cuts $S_i \equiv \{1, \ldots, i\}$ for $i = 1, \ldots, n - 1$.
4. Return $\min_{i \in \{1, \ldots, n-1\}} \phi(S_i)$.

[Figure: the sorted values $x_2(1) \leq x_2(2) \leq \cdots \leq x_2(n)$ drawn as consecutive bars on a line; sweep cuts correspond to cuts between consecutive bars.]

Cheeger's inequality provides some insight into why this algorithm works well.

3  Cheeger's Inequality

We now work towards proving the following:

Theorem 2 (Cheeger's Inequality)  Let $\lambda_2$ be the second smallest eigenvalue of $\mathcal{L}$. Then:
$$\frac{\lambda_2}{2} \leq \phi(G) \leq \sqrt{2\lambda_2}.$$

The theorem proved by Jeff Cheeger actually has to do with manifolds and hypersurfaces; the theorem above is considered to be a discrete analog of Cheeger's original inequality. But the name has stuck.
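Algorithm 1 can be sketched in a few lines. This is a minimal illustration on a graph of our own choosing (two triangles joined by a bridge edge, whose sparsest cut is that bridge); the helper names are ours, and unlike the lecture's statement, which returns the minimum conductance value, this sketch returns the best sweep-cut set itself.

```python
import numpy as np

def conductance(A, S):
    """phi(S) = |delta(S)| / min{vol(S), vol(V - S)}."""
    d = A.sum(axis=1)
    S = set(S)
    cut = sum(A[i, j] for i in S for j in range(len(A)) if j not in S)
    vol_S = d[list(S)].sum()
    return cut / min(vol_S, d.sum() - vol_S)

def spectral_partition(A):
    """Algorithm 1: best sweep cut along the second eigenvector of cal_L."""
    d = A.sum(axis=1)
    Dm = np.diag(1.0 / np.sqrt(d))
    cal_L = np.eye(len(A)) - Dm @ A @ Dm
    _, vecs = np.linalg.eigh(cal_L)       # eigenvalues in ascending order
    order = np.argsort(vecs[:, 1])        # sort V by the entries of x_2
    sweeps = [list(order[:i]) for i in range(1, len(A))]
    return min(sweeps, key=lambda S: conductance(A, S))

# Two triangles joined by one edge: the sparse cut is the bridge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
S = spectral_partition(A)
assert sorted(int(v) for v in S) in ([0, 1, 2], [3, 4, 5])
```

On this example the sweep over the sorted second eigenvector recovers one of the two triangles, whose conductance $1/7$ beats every other sweep cut.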
Typically, people think of the first inequality as "easy" and the second as "hard." We'll prove the first inequality, and start the proof of the second.

³One should then ask "A constant with respect to what?" Usually one defines families of graphs of increasing size as families of expanders, in which case we want the conductance or expansion to be constant with respect to the number of vertices.

Proof:  Recall that
$$\lambda_2 = \min_{x : \langle x, D^{1/2}e \rangle = 0} \frac{x^T \mathcal{L} x}{x^T x} = \min_{x : \langle x, D^{1/2}e \rangle = 0} \frac{x^T D^{-1/2} L_G D^{-1/2} x}{x^T x}.$$
Consider the change of variables obtained by setting $y = D^{-1/2}x$, so that $x = D^{1/2}y$:
$$\lambda_2 = \min_{y : \langle D^{1/2}y, D^{1/2}e \rangle = 0} \frac{y^T L_G y}{(D^{1/2}y)^T (D^{1/2}y)} = \min_{y : \langle D^{1/2}y, D^{1/2}e \rangle = 0} \frac{y^T L_G y}{y^T D y}.$$
The minimum is being taken over all $y$ such that $\langle D^{1/2}y, D^{1/2}e \rangle = 0$; that is, over $y$ such that
$$(D^{1/2}y)^T D^{1/2} e = 0 \iff y^T D e = 0 \iff \sum_{i \in V} d(i)\,y(i) = 0.$$
Hence, we have that
$$\lambda_2 = \min_{y : \sum_{i \in V} d(i) y(i) = 0} \frac{\sum_{(i,j) \in E} (y(i) - y(j))^2}{\sum_{i \in V} d(i)\,y(i)^2}.$$
Now let $S^*$ be such that $\phi(G) = \phi(S^*)$, and try defining
$$\hat{y}(i) = \begin{cases} 1 & i \in S^*, \\ 0 & \text{else.} \end{cases}$$
It would be great if $\lambda_2$ were bounded by $\frac{|\delta(S^*)|}{\sum_{i \in S^*} d(i)} = \frac{|\delta(S^*)|}{\mathrm{vol}(S^*)}$. However, there are two problems. We have $\sum_{i \in V} d(i)\hat{y}(i) \neq 0$; moreover, $\frac{|\delta(S^*)|}{\mathrm{vol}(S^*)}$ might not be $\phi(S^*)$, as we want the denominator to be $\min\{\mathrm{vol}(S^*), \mathrm{vol}(V - S^*)\}$. Hence, we redefine
$$\hat{y}(i) = \begin{cases} \dfrac{1}{\mathrm{vol}(S^*)} & i \in S^*, \\[4pt] -\dfrac{1}{\mathrm{vol}(V - S^*)} & \text{else.} \end{cases}$$
Now we notice that
$$\sum_{i \in V} d(i)\hat{y}(i) = \frac{\sum_{i \in S^*} d(i)}{\mathrm{vol}(S^*)} - \frac{\sum_{i \notin S^*} d(i)}{\mathrm{vol}(V - S^*)} = 1 - 1 = 0.$$
Thus $\hat{y}$ is a feasible solution to the minimization problem defining $\lambda_2$, and the only edges contributing anything nonzero to the numerator are those with exactly one endpoint in $S^*$. Thus:
$$\lambda_2 \leq \frac{|\delta(S^*)| \left( \frac{1}{\mathrm{vol}(S^*)} + \frac{1}{\mathrm{vol}(V - S^*)} \right)^2}{\sum_{i \in S^*} d(i) \left( \frac{1}{\mathrm{vol}(S^*)} \right)^2 + \sum_{i \notin S^*} d(i) \left( \frac{1}{\mathrm{vol}(V - S^*)} \right)^2} = \frac{|\delta(S^*)| \left( \frac{1}{\mathrm{vol}(S^*)} + \frac{1}{\mathrm{vol}(V - S^*)} \right)^2}{\frac{1}{\mathrm{vol}(S^*)} + \frac{1}{\mathrm{vol}(V - S^*)}}$$
$$= |\delta(S^*)| \left( \frac{1}{\mathrm{vol}(S^*)} + \frac{1}{\mathrm{vol}(V - S^*)} \right) \leq 2\,|\delta(S^*)| \max\left\{ \frac{1}{\mathrm{vol}(S^*)}, \frac{1}{\mathrm{vol}(V - S^*)} \right\} = \frac{2\,|\delta(S^*)|}{\min\{\mathrm{vol}(S^*), \mathrm{vol}(V - S^*)\}} = 2\,\phi(G).$$
Dividing by 2 gives $\lambda_2/2 \leq \phi(G)$, which completes the proof of the first inequality.
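Both sides of Cheeger's inequality can be verified by brute force on a small graph. The sketch below uses a 5-cycle (our own example), computes $\phi(G)$ by enumerating every cut, and checks $\lambda_2/2 \leq \phi(G) \leq \sqrt{2\lambda_2}$.

```python
import numpy as np
from itertools import combinations

# Build the 5-cycle and its normalized Laplacian.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
d = A.sum(axis=1)
Dm = np.diag(1.0 / np.sqrt(d))
lam2 = np.sort(np.linalg.eigvalsh(np.eye(n) - Dm @ A @ Dm))[1]

def phi(S):
    """phi(S) = |delta(S)| / min{vol(S), vol(V - S)}."""
    Sc = [v for v in range(n) if v not in S]
    cut = sum(A[i, j] for i in S for j in Sc)
    return cut / min(d[list(S)].sum(), d[Sc].sum())

# phi(G): minimize over all nonempty proper subsets S of V.
phi_G = min(phi(S) for k in range(1, n) for S in combinations(range(n), k))

assert lam2 / 2 <= phi_G + 1e-9           # the "easy" direction proved above
assert phi_G <= np.sqrt(2 * lam2) + 1e-9  # the "hard" direction
```

Here $\phi(G) = 1/2$ (cut off a contiguous arc of two or three vertices) and $\lambda_2 = 1 - \cos(2\pi/5) \approx 0.691$, so both bounds hold with room to spare.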