Non-Negative Matrices and Graphs MAA704


Non-negative matrices and graphs MAA704 (slide 1/50)
Christopher Engström, lecture 2, November 20, 2018

Lecture outline: connectivity and the adjacency matrix; the distance matrix; shortest paths and Dijkstra's algorithm; the degree matrix of a graph; Markov chains.

MAA704 Applied Matrix Analysis (slide 2/50)

Today's lecture:
- Non-negative matrices and an introduction to graph theory.
- Connectivity and irreducibility.
- The Laplacian matrix of a graph.

Non-negative matrices (slide 3/50)

- A non-negative matrix is a matrix A in which all elements satisfy a_ij >= 0.
- A positive matrix is a matrix A in which all elements satisfy a_ij > 0.
- Remark: be careful to distinguish between positive/non-negative matrices and positive definite/semi-definite matrices (recall x^T A x).

Non-negative matrices in graph theory (slide 4/50)

A graph is a collection of vertices (nodes) and edges (links), such as the one below.

[Figure: an undirected graph on the four vertices A, B, C, D]

Non-negative matrices in graph theory (slide 5/50)

There are many types of graphs; those we will work with are:
- Simple graphs, where we only allow a single edge from one vertex to another.
- Undirected graphs, where edges do not have a direction, so we are only interested in whether there is an edge between two vertices.
- Directed graphs, where edges do have a direction; an edge A -> B does not necessarily mean there is an edge B -> A.
- Weighted graphs, where we assign (positive) scalar weights to every edge.

Non-negative matrices in graph theory (slide 6/50)

How can we represent a (simple) graph using a matrix?
- There are multiple ways, such as the adjacency matrix, degree matrix, distance matrix, incidence matrix, Laplacian matrix, etc.
- Which graph type and which matrix representation you use depends on your application.

Application, road network and connectivity (slide 7/50)

We consider a road network between different cities, where we want to know if we can drive from one city to another or if we need to go by, for example, air or boat.
- We represent the road network as an undirected graph by letting the vertices represent the cities; an edge between two cities means there is a road between them.
- If there is an edge between, say, two cities A and B, we know there is a road and we can take the car. However, if there isn't, we might still be able to take the car, for example by passing through a city C that is connected to both A and B.
- To solve this problem we define connectivity for graphs, and irreducibility for matrices.
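The road-network question above, "can we drive from one city to another?", is exactly a reachability query in an undirected graph. A minimal sketch of answering it by breadth-first search (the city names and road list here are made up for illustration, not from the slides):

```python
from collections import deque

def reachable(roads, start, goal):
    """Return True if goal can be reached from start by traversing roads.

    roads: list of undirected edges, each a pair of city names."""
    # Build an adjacency list from the undirected edge list.
    neighbours = {}
    for a, b in roads:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    # Standard breadth-first search from the start city.
    seen = {start}
    queue = deque([start])
    while queue:
        city = queue.popleft()
        if city == goal:
            return True
        for nxt in neighbours.get(city, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical network: A and B are not directly linked, but both touch C.
roads = [("A", "C"), ("C", "B")]
```

Here `reachable(roads, "A", "B")` is True even though there is no direct road between A and B, which is precisely the slide's "pass through city C" scenario.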
Connectivity (slide 8/50)

- Two vertices are said to be connected if, by traversing the edges, there exists a path from each of them to the other.
- An undirected graph is said to be connected if every pair of its nodes is connected.
- A directed graph is said to be strongly connected if every pair of its nodes is connected.

Connectivity: connected components (slide 9/50)

A connected component in an undirected graph is a maximal part of the graph in which all nodes are connected with each other.

[Figure: an undirected graph with 3 connected components]

Connectedness: strongly connected components (slide 10/50)

A strongly connected component is a part of the graph which is strongly connected.

[Figure: a directed graph with 3 strongly connected components]

Application, road network and adjacency matrix (slide 11/50)

So if our two cities belong to the same connected component, there is a path between them and we can take the car.
- But how do we find the connected components? Especially for a large graph it might even be hard to visualize the graph.
- To solve the problem we use a matrix representation called the adjacency matrix.
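The slides go on to find components through the adjacency matrix and its powers; as a complementary sketch, the components can also be labelled by a direct traversal of that matrix (the 3-vertex example matrix below is hypothetical):

```python
def connected_components(adj):
    """Label the connected components of an undirected graph.

    adj: symmetric 0/1 adjacency matrix as a list of lists.
    Returns a list of components, each a sorted list of vertex indices."""
    n = len(adj)
    seen = [False] * n
    components = []
    for s in range(n):
        if seen[s]:
            continue
        # Grow one component by depth-first search from vertex s.
        seen[s] = True
        stack = [s]
        comp = [s]
        while stack:
            i = stack.pop()
            for j in range(n):
                if adj[i][j] and not seen[j]:
                    seen[j] = True
                    comp.append(j)
                    stack.append(j)
        components.append(sorted(comp))
    return components

# Vertices 0 and 1 are joined by an edge; vertex 2 is isolated.
adj = [[0, 1, 0],
       [1, 0, 0],
       [0, 0, 0]]
```

With this matrix, `connected_components(adj)` returns two components, `[0, 1]` and `[2]`; two cities can be reached by car exactly when their indices land in the same component.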
Application, road network and adjacency matrix (slide 12/50)

Definition. The adjacency matrix A of a graph G with n vertices is a square n x n matrix with elements a_ij such that:

    a_ij = 1 if there is a link from vertex i to vertex j,
    a_ij = 0 otherwise.

If the graph is undirected, we consider every edge between two vertices as linking in both directions.

Non-negative matrices in graph theory (slide 13/50)

Example of an undirected graph and its corresponding adjacency matrix (vertices ordered A, B, C, D):

[Figure: an undirected graph on the vertices A, B, C, D]

    0 1 1 1
    1 1 1 0
    1 1 0 0
    1 0 0 0

Application, road network and adjacency matrix (slide 14/50)

We note that the adjacency matrix is not unique in itself; we need to choose in what order we put our vertices in the matrix!
- Sometimes changing the order of the vertices (essentially re-labeling the graph) can make certain structures more obvious.
- For example, we could group vertices in the same connected component together.

Adjacency matrix, other applications (slide 15/50)

An undirected or directed graph and its adjacency matrix can also be used to represent other things, such as:
- Links between homepages, and PageRank.
- Electrical or water flow networks.
- Linguistic relations between words or phrases.
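The slide-13 matrix can be reproduced mechanically from an edge list. A small sketch (the helper name is ours, not from the slides; the diagonal 1 at B is read as a self-loop, which is what the matrix on the slide shows):

```python
def adjacency_matrix(vertices, edges):
    """Build the adjacency matrix of an undirected graph.

    Per slide 12, every edge is entered in both directions;
    a self-loop (v, v) simply sets the diagonal entry to 1."""
    index = {v: k for k, v in enumerate(vertices)}
    n = len(vertices)
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[index[u]][index[v]] = 1
        A[index[v]][index[u]] = 1
    return A

# Edge list read off the slide-13 example (B carries a self-loop).
A = adjacency_matrix("ABCD", [("A", "B"), ("A", "C"), ("A", "D"),
                              ("B", "B"), ("B", "C")])
# A == [[0, 1, 1, 1], [1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0]]
```

Listing the vertices in a different order, say "DCBA", permutes the rows and columns of A without changing the graph, which is exactly the non-uniqueness point made on slide 14.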
Path, cycle and length (slide 16/50)

Definition. Given a weighted or unweighted graph:
- A path in a graph is a sequence of vertices v_1, v_2, ..., v_n such that for every vertex in the sequence there is an edge to the next vertex in the sequence.
- If the first vertex in a path is the same as the last vertex, we call it a cycle.
- The length of a path is the number of edges in the path (counting multiple uses of the same edge).

Application, road network and adjacency matrix (slide 17/50)

Returning to our road network example: while a 1 at element a_ij means there is a road between those two cities, we still don't know how to find the connected components using the adjacency matrix.
- We take a look at the powers A^k of the adjacency matrix.
- A non-zero element of A^1 means there is a path of "length" 1 between the two vertices (length as in number of edges).

Application, road network and adjacency matrix (slide 18/50)

We look at A^2 and ask when a single element (A^2)_ij > 0.
- For (A^2)_ij > 0 we need that at least one of the products a_ik * a_kj > 0, for k = 1, 2, ..., n.
- Here a_ik = 1 if there is an edge between i and k, and a_kj = 1 if there is an edge between k and j.
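The argument about (A^2)_ij can be checked numerically: (A^k)_ij counts the walks of length k from i to j, so two distinct vertices lie in the same component exactly when some entry of A + A^2 + ... + A^(n-1) is positive. A plain-Python sketch (the helper names are ours), applied to the slide-13 matrix:

```python
def mat_mul(X, Y):
    """Multiply two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def walk_counts(A, k):
    """Return A^k; its (i, j) entry counts walks of length k from i to j."""
    P = A
    for _ in range(k - 1):
        P = mat_mul(P, A)
    return P

def same_component(A, i, j):
    """True if i and j are connected, via the sum A + A^2 + ... + A^(n-1)."""
    n = len(A)
    total = [row[:] for row in A]
    P = A
    for _ in range(n - 2):
        P = mat_mul(P, A)
        total = [[total[r][c] + P[r][c] for c in range(n)] for r in range(n)]
    return i == j or total[i][j] > 0

A = [[0, 1, 1, 1],
     [1, 1, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]

A2 = walk_counts(A, 2)
# A2[2][3] > 0 although A[2][3] == 0: C and D have no direct edge,
# but the length-2 walk C-A-D makes (A^2)_CD positive.
```

With this matrix `same_component(A, 2, 3)` is True, matching the road-network reasoning: no direct road between the two cities, yet a route exists through an intermediate city.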