Chapter 17 Graphs and Graph Laplacians


17.1 Directed Graphs, Undirected Graphs, Incidence Matrices, Adjacency Matrices, Weighted Graphs

Definition 17.1. A directed graph is a pair $G = (V, E)$, where $V = \{v_1, \ldots, v_m\}$ is a set of nodes or vertices, and $E \subseteq V \times V$ is a set of ordered pairs of distinct nodes (that is, pairs $(u, v) \in V \times V$ with $u \neq v$), called edges. Given any edge $e = (u, v)$, we let $s(e) = u$ be the source of $e$ and $t(e) = v$ be the target of $e$.

Remark: Since an edge is a pair $(u, v)$ with $u \neq v$, self-loops are not allowed. Also, there is at most one edge from a node $u$ to a node $v$. Such graphs are sometimes called simple graphs.

Figure 17.1: Graph $G_1$, with vertices $v_1, \ldots, v_5$ and edges $e_1, \ldots, e_7$.

For every node $v \in V$, the degree $d(v)$ of $v$ is the number of edges leaving or entering $v$:
$$d(v) = |\{u \in V \mid (v, u) \in E \text{ or } (u, v) \in E\}|.$$
We abbreviate $d(v_i)$ as $d_i$. The degree matrix $D(G)$ is the diagonal matrix
$$D(G) = \mathrm{diag}(d_1, \ldots, d_m).$$

For example, for graph $G_1$, we have
$$D(G_1) = \begin{pmatrix} 2 & 0 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix}.$$
Unless confusion arises, we write $D$ instead of $D(G)$.

Definition 17.2. Given a directed graph $G = (V, E)$, for any two nodes $u, v \in V$, a path from $u$ to $v$ is a sequence of nodes $(v_0, v_1, \ldots, v_k)$ such that $v_0 = u$, $v_k = v$, and $(v_i, v_{i+1})$ is an edge in $E$ for all $i$ with $0 \leq i \leq k - 1$. The integer $k$ is the length of the path. A path is closed if $u = v$. The graph $G$ is strongly connected if for any two distinct nodes $u, v \in V$, there is a path from $u$ to $v$ and there is a path from $v$ to $u$.

Remark: The terminology walk is often used instead of path, the word path being reserved for the case where the nodes $v_i$ are all distinct, except that $v_0 = v_k$ when the path is closed.

The binary relation on $V \times V$ defined so that $u$ and $v$ are related iff there is a path from $u$ to $v$ and there is a path from $v$ to $u$ is an equivalence relation whose equivalence classes are called the strongly connected components of $G$.

Definition 17.3. Given a directed graph $G = (V, E)$, with $V = \{v_1, \ldots, v_m\}$, if $E = \{e_1, \ldots, e_n\}$, then the incidence matrix $B(G)$ of $G$ is the $m \times n$ matrix whose entries $b_{ij}$ are given by
$$b_{ij} = \begin{cases} +1 & \text{if } s(e_j) = v_i \\ -1 & \text{if } t(e_j) = v_i \\ 0 & \text{otherwise.} \end{cases}$$

Here is the incidence matrix of the graph $G_1$:
$$B = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & -1 & -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & -1 & -1 \\ 0 & 0 & 0 & 0 & -1 & 1 & 0 \end{pmatrix}.$$
Again, unless confusion arises, we write $B$ instead of $B(G)$.

Remark: Some authors adopt the opposite sign convention in defining the incidence matrix, which means that their incidence matrix is $-B$.

Figure 17.2: The undirected graph $G_2$, with vertices $v_1, \ldots, v_5$ and edges $a, b, c, d, e, f, g$.

Undirected graphs are obtained from directed graphs by forgetting the orientation of the edges.

Definition 17.4. A graph (or undirected graph) is a pair $G = (V, E)$, where $V = \{v_1, \ldots, v_m\}$ is a set of nodes or vertices, and $E$ is a set of two-element subsets of $V$ (that is, subsets $\{u, v\}$, with $u, v \in V$ and $u \neq v$), called edges.

Remark: Since an edge is a set $\{u, v\}$, we have $u \neq v$, so self-loops are not allowed. Also, for every set of nodes $\{u, v\}$, there is at most one edge between $u$ and $v$. As in the case of directed graphs, such graphs are sometimes called simple graphs.

For every node $v \in V$, the degree $d(v)$ of $v$ is the number of edges incident to $v$:
$$d(v) = |\{u \in V \mid \{u, v\} \in E\}|.$$
The degree matrix $D$ is defined as before.
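As a quick sanity check (not part of the original notes), the following NumPy sketch rebuilds the incidence matrix $B$ and degree matrix $D(G_1)$ shown above and verifies two facts implicit in Definitions 17.1 and 17.3: every column of $B$ sums to zero (each edge contributes exactly one $+1$ and one $-1$), and the number of nonzero entries in row $i$ of $B$ is the degree $d_i$.

```python
import numpy as np

# Incidence matrix B of the directed graph G1 (Definition 17.3),
# copied from the text: rows = vertices v1..v5, columns = edges e1..e7.
B = np.array([
    [ 1,  1,  0,  0,  0,  0,  0],
    [-1,  0, -1, -1,  1,  0,  0],
    [ 0, -1,  1,  0,  0,  0,  1],
    [ 0,  0,  0,  1,  0, -1, -1],
    [ 0,  0,  0,  0, -1,  1,  0],
])

# Each edge has exactly one source (+1) and one target (-1),
# so every column of B sums to zero.
assert (B.sum(axis=0) == 0).all()

# The degree d(v_i) counts edges leaving or entering v_i,
# i.e. the number of nonzero entries in row i of B.
degrees = np.count_nonzero(B, axis=1)
D = np.diag(degrees)
print(degrees)   # [2 4 3 3 2], matching D(G1) = diag(2, 4, 3, 3, 2)
```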
Definition 17.5. Given an (undirected) graph $G = (V, E)$, for any two nodes $u, v \in V$, a path from $u$ to $v$ is a sequence of nodes $(v_0, v_1, \ldots, v_k)$ such that $v_0 = u$, $v_k = v$, and $\{v_i, v_{i+1}\}$ is an edge in $E$ for all $i$ with $0 \leq i \leq k - 1$. The integer $k$ is the length of the path. A path is closed if $u = v$. The graph $G$ is connected if for any two distinct nodes $u, v \in V$, there is a path from $u$ to $v$.

Remark: The terminology walk or chain is often used instead of path, the word path being reserved for the case where the nodes $v_i$ are all distinct, except that $v_0 = v_k$ when the path is closed.

The binary relation on $V \times V$ defined so that $u$ and $v$ are related iff there is a path from $u$ to $v$ is an equivalence relation whose equivalence classes are called the connected components of $G$.

The notion of incidence matrix for an undirected graph is not as useful as in the case of directed graphs.

Definition 17.6. Given a graph $G = (V, E)$, with $V = \{v_1, \ldots, v_m\}$, if $E = \{e_1, \ldots, e_n\}$, then the incidence matrix $B(G)$ of $G$ is the $m \times n$ matrix whose entries $b_{ij}$ are given by
$$b_{ij} = \begin{cases} +1 & \text{if } e_j = \{v_i, v_k\} \text{ for some } k \\ 0 & \text{otherwise.} \end{cases}$$

Unlike the case of directed graphs, the entries in the incidence matrix of an undirected graph are nonnegative. We usually write $B$ instead of $B(G)$.

The notion of adjacency matrix is basically the same for directed or undirected graphs.

Definition 17.7. Given a directed or undirected graph $G = (V, E)$, with $V = \{v_1, \ldots, v_m\}$, the adjacency matrix $A(G)$ of $G$ is the symmetric $m \times m$ matrix $(a_{ij})$ such that
(1) if $G$ is directed, then
$$a_{ij} = \begin{cases} 1 & \text{if there is some edge } (v_i, v_j) \in E \text{ or some edge } (v_j, v_i) \in E \\ 0 & \text{otherwise;} \end{cases}$$
(2) else if $G$ is undirected, then
$$a_{ij} = \begin{cases} 1 & \text{if there is some edge } \{v_i, v_j\} \in E \\ 0 & \text{otherwise.} \end{cases}$$

As usual, unless confusion arises, we write $A$ instead of $A(G)$. Here is the adjacency matrix of both graphs $G_1$ and $G_2$:
$$A = \begin{pmatrix} 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 \end{pmatrix}.$$

If $G = (V, E)$ is a directed or an undirected graph, given a node $u \in V$, any node $v \in V$ such that there is an edge $(u, v)$ in the directed case or $\{u, v\}$ in the undirected case is called adjacent to $u$, and we often use the notation $u \sim v$.

Observe that the binary relation $\sim$ is symmetric when $G$ is an undirected graph, but in general it is not symmetric when $G$ is a directed graph.

If $G = (V, E)$ is an undirected graph, the adjacency matrix $A$ of $G$ can be viewed as a linear map from $\mathbb{R}^V$ to $\mathbb{R}^V$, such that for all $x \in \mathbb{R}^m$, we have
$$(Ax)_i = \sum_{j \sim i} x_j;$$
that is, the value of $Ax$ at $v_i$ is the sum of the values of $x$ at the nodes $v_j$ adjacent to $v_i$.

The adjacency matrix can be viewed as a diffusion operator. This observation yields a geometric interpretation of what it means for a vector $x \in \mathbb{R}^m$ to be an eigenvector of $A$ associated with some eigenvalue $\lambda$; we must have
$$\lambda x_i = \sum_{j \sim i} x_j, \quad i = 1, \ldots, m,$$
which means that the sum of the values of $x$ assigned to the nodes $v_j$ adjacent to $v_i$ is equal to $\lambda$ times the value of $x$ at $v_i$.
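To make the diffusion-operator viewpoint concrete, here is a small NumPy sketch (not from the original text) using the adjacency matrix $A$ of $G_1$ and $G_2$ given above. It checks that $(Ax)_i$ is the sum of the values of $x$ at the neighbors of $v_i$, and that an eigenvector of $A$ satisfies $\lambda x_i = \sum_{j \sim i} x_j$; the test vector $x$ is an arbitrary choice for illustration.

```python
import numpy as np

# Adjacency matrix of G1 and G2 (Definition 17.7), copied from the text.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 1],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 1, 0, 1, 0],
])

x = np.array([3.0, -1.0, 4.0, 1.0, 5.0])   # any vector in R^5

# (Ax)_i is the sum of the values of x at the nodes adjacent to v_i.
Ax = A @ x
neighbor_sums = np.array([x[A[i] == 1].sum() for i in range(5)])
assert np.allclose(Ax, neighbor_sums)

# For an eigenvector v with eigenvalue lam, the neighbor sums are lam * v.
eigvals, eigvecs = np.linalg.eigh(A)       # A is symmetric
lam, v = eigvals[-1], eigvecs[:, -1]
assert np.allclose(A @ v, lam * v)
```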
Definition 17.8. Given any undirected graph $G = (V, E)$, an orientation of $G$ is a function $\sigma \colon E \to V \times V$ assigning a source and a target to every edge in $E$, which means that for every edge $\{u, v\} \in E$, either $\sigma(\{u, v\}) = (u, v)$ or $\sigma(\{u, v\}) = (v, u)$. The oriented graph $G^\sigma$ obtained from $G$ by applying the orientation $\sigma$ is the directed graph $G^\sigma = (V, E^\sigma)$, with $E^\sigma = \sigma(E)$.

Proposition 17.1. Let $G = (V, E)$ be any undirected graph with $m$ vertices, $n$ edges, and $c$ connected components. For any orientation $\sigma$ of $G$, if $B$ is the incidence matrix of the oriented graph $G^\sigma$, then $c = \dim(\mathrm{Ker}(B^\top))$, and $B$ has rank $m - c$. Furthermore, the nullspace of $B^\top$ has a basis consisting of indicator vectors of the connected components of $G$; that is, vectors $(z_1, \ldots, z_m)$ such that $z_j = 1$ iff $v_j$ is in the $i$th component $K_i$ of $G$, and $z_j = 0$ otherwise.

Following common practice, we denote by $\mathbf{1}$ the (column) vector whose components are all equal to 1. Observe that $B^\top \mathbf{1} = 0$.

According to Proposition 17.1, the graph $G$ is connected iff $B$ has rank $m - 1$ iff the nullspace of $B^\top$ is the one-dimensional space spanned by $\mathbf{1}$.

In many applications, the notion of graph needs to be generalized to capture the intuitive idea that two nodes $u$ and $v$ are linked with a degree of certainty (or strength).

Thus, we assign a nonnegative weight $w_{ij}$ to an edge $\{v_i, v_j\}$; the smaller $w_{ij}$ is, the weaker is the link (or similarity) between $v_i$ and $v_j$, and the greater $w_{ij}$ is, the stronger is the link (or similarity) between $v_i$ and $v_j$.

Definition 17.9. A weighted graph is a pair $G = (V, W)$, where $V = \{v_1, \ldots, v_m\}$ is a set of nodes or vertices, and $W$ is a symmetric matrix called the weight matrix, such that $w_{ij} \geq 0$ for all $i, j \in \{1, \ldots, m\}$, and $w_{ii} = 0$ for $i = 1, \ldots, m$. We say that a set $\{v_i, v_j\}$ is an edge iff $w_{ij} > 0$.
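The following NumPy sketch (again not part of the original notes) checks Proposition 17.1 on graph $G_1$, which is an orientation of the connected graph $G_2$ (so $m = 5$, $c = 1$), and then builds one possible weight matrix $W$ for $G_2$ in the sense of Definition 17.9; the particular weight values are arbitrary choices for illustration, constrained only to be positive exactly on the edges of $G_2$.

```python
import numpy as np

# Incidence matrix of G1, an orientation of the connected graph G2.
B = np.array([
    [ 1,  1,  0,  0,  0,  0,  0],
    [-1,  0, -1, -1,  1,  0,  0],
    [ 0, -1,  1,  0,  0,  0,  1],
    [ 0,  0,  0,  1,  0, -1, -1],
    [ 0,  0,  0,  0, -1,  1,  0],
])
m, c = 5, 1

# Proposition 17.1: rank(B) = m - c and dim Ker(B^T) = c.
assert np.linalg.matrix_rank(B) == m - c

# G2 is connected, so the all-ones vector 1 is an indicator vector of the
# single connected component; it spans Ker(B^T) and satisfies B^T 1 = 0.
ones = np.ones(m)
assert np.allclose(B.T @ ones, 0)

# A weight matrix for G2 (Definition 17.9): symmetric, zero diagonal,
# w_ij > 0 exactly on the edges of G2 (weights chosen arbitrarily here).
W = np.array([
    [0.0, 0.5, 0.2, 0.0, 0.0],
    [0.5, 0.0, 0.9, 0.4, 0.7],
    [0.2, 0.9, 0.0, 0.3, 0.0],
    [0.0, 0.4, 0.3, 0.0, 0.6],
    [0.0, 0.7, 0.0, 0.6, 0.0],
])
assert np.allclose(W, W.T) and np.allclose(np.diag(W), 0) and (W >= 0).all()
```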