Spectral Graph Theory


CME 305: Discrete Mathematics and Algorithms
Instructor: Professor Aaron Sidford ([email protected])
February 8, 2018

Lecture 10 - Spectral Graph Theory¹

¹ These lecture notes are a work in progress and there may be typos, awkward language, omitted proof details, etc. Moreover, content from lectures might be missing. These notes are intended to converge to a polished superset of the material covered in class, so if you would like anything clarified, please do not hesitate to post to Piazza and ask.

In the last lecture we began our study of spectral graph theory, the study of connections between combinatorial properties of graphs and linear algebraic properties of the fundamental matrices associated with them. In this lecture we continue this study, making connections between the eigenvalues and eigenvectors of the Laplacian matrix associated with a simple, undirected, positively weighted graph and combinatorial properties of that graph.

1 Recap

In the remainder of this lecture our focus will be on a simple, undirected, positively weighted graph $G = (V, E, w)$ with $w \in \mathbb{R}^{E}_{>0}$. As we noted, spectral graph theory for broader classes of matrices is possible, but this setting will provide a good taste of the field. Also note that while we might define things with $G$ explicit in the notation, e.g. $L(G)$ or $\deg_G$, we may drop the $G$ when it is clear from context.

Last class we introduced several fundamental matrices associated with a graph: the (weighted) adjacency matrix $A(G) \in \mathbb{R}^{V \times V}$, the (weighted) degree matrix $D(G) \in \mathbb{R}^{V \times V}$, and the Laplacian matrix $L(G) \in \mathbb{R}^{V \times V}$, each defined for all $i, j \in V$ by
$$
A(G)_{ij} \overset{\mathrm{def}}{=} \begin{cases} w_{\{i,j\}} & \{i,j\} \in E \\ 0 & \text{otherwise} \end{cases},
\qquad
D(G)_{ij} \overset{\mathrm{def}}{=} \begin{cases} \deg(i) & i = j \\ 0 & \text{otherwise} \end{cases},
\qquad
L(G)_{ij} \overset{\mathrm{def}}{=} \begin{cases} \deg(i) & i = j \\ -w_{\{i,j\}} & \{i,j\} \in E \\ 0 & \text{otherwise.} \end{cases}
$$
Also, as we pointed out, a matrix $M$ is the adjacency matrix of some $G$ if and only if $M_{ij} = M_{ji} \geq 0$ for all $i, j \in V$ and $M_{ii} = 0$ for all $i \in V$, and a matrix $L$ is the Laplacian matrix of some $G$ if and only if $L = L^\top$, $L_{ij} \leq 0$ for all $i \neq j$, and $L\vec{1} = \vec{0}$. Furthermore, each characterization gives a bijection between these matrices and simple, undirected graphs with positive edge weights.

2 Laplacian Quadratic Form

Now why do we focus on Laplacian matrices? As we will show, the quadratic form of the Laplacian has a number of nice properties.

2.1 Laplacians are PSD

One nice property of Laplacians is that they are positive semidefinite (PSD).

Definition 1 (Positive Semidefinite (PSD)). A symmetric matrix $M = M^\top$ is positive semidefinite (PSD) if and only if for all $x$ we have $x^\top M x \geq 0$.

We refer to $x^\top M x$ as the quadratic form of $M$, and this definition says that a symmetric matrix is PSD if and only if its quadratic form is non-negative. As we will see, this is equivalent to saying that all eigenvalues of $M$ are non-negative.
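As a concrete sanity check on these definitions, here is a minimal numerical sketch. It is not part of the original notes, and the 4-vertex graph and its edge weights are invented purely for illustration; it builds $A$, $D$, and $L$ for a small weighted graph and verifies the Laplacian properties stated above.

```python
import numpy as np

# A small weighted graph on 4 vertices (example weights chosen arbitrarily).
n = 4
edges = {(0, 1): 2.0, (1, 2): 1.0, (2, 3): 3.0, (0, 2): 0.5}

A = np.zeros((n, n))
for (i, j), w in edges.items():
    A[i, j] = A[j, i] = w        # adjacency: symmetric, zero diagonal

D = np.diag(A.sum(axis=1))       # degree matrix: weighted degrees on the diagonal
L = D - A                        # Laplacian

assert np.allclose(L @ np.ones(n), 0)   # rows sum to zero: L*1 = 0

# PSD check: all eigenvalues of the symmetric matrix L are non-negative.
eigvals = np.linalg.eigvalsh(L)
print(eigvals)                   # the smallest eigenvalue is 0, up to round-off
assert eigvals.min() > -1e-10
```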
Here we will prove that $L(G)$ is PSD, establishing several interesting properties of Laplacians and their quadratic form along the way. We begin by defining the sum of two graphs as follows:

Definition 2 (Graph Sums). For simple, undirected, weighted graphs $G_1 = (V, E_1, w_1)$ and $G_2 = (V, E_2, w_2)$ we define the graph sum $G_1 + G_2 \overset{\mathrm{def}}{=} (V, E_+, w_+)$ by $E_+ \overset{\mathrm{def}}{=} E_1 \cup E_2$ and, for $\{i,j\} \in E_+$,
$$
w_+(i,j) = \begin{cases}
w_1(i,j) + w_2(i,j) & \{i,j\} \in E_1 \cap E_2 \\
w_1(i,j) & \{i,j\} \in E_1 \setminus E_2 \\
w_2(i,j) & \{i,j\} \in E_2 \setminus E_1.
\end{cases}
$$

In other words, the sum of two graphs on the same vertex set is the graph on that vertex set with the union of their edges, where the weight of each edge is the sum of its weights in the two graphs (an edge absent from one graph is treated as having weight 0 there). One nice property of this definition is that it directly corresponds to summing graph Laplacians; it can easily be checked that
$$ L(G_1 + G_2) = L(G_1) + L(G_2). $$
Consequently, if for $e = \{i,j\} \in E$ we define $L(e)$ to be the Laplacian of the graph with vertex set $V$ and a single edge of weight 1 between $i$ and $j$, then we see that
$$ L(G) = \sum_{e \in E} w_e L(e) $$
and therefore
$$ x^\top L x = \sum_{e \in E} w_e \cdot x^\top L(e) x. $$
Thus, to reason about the quadratic form of $L$ it suffices to reason about the quadratic form of a single edge. Now $L(e)$ for $e = \{i,j\}$ simply has a $1$ in coordinates $ii$ and $jj$ and a $-1$ in coordinates $ij$ and $ji$; all other coordinates are $0$. Consequently, if we let $\vec{1}_i$ be the vector that is $0$ at all coordinates except for $i$, where it is $1$, i.e.
$$ \vec{1}_i(j) = \begin{cases} 1 & i = j \\ 0 & \text{otherwise,} \end{cases} $$
then we see that
$$ L(e) = (\vec{1}_i - \vec{1}_j)(\vec{1}_i - \vec{1}_j)^\top = \vec{\delta}_{ij}\vec{\delta}_{ij}^\top, $$
where we let $\vec{\delta}_{ij} \overset{\mathrm{def}}{=} \vec{1}_i - \vec{1}_j$ for shorthand. Consequently, we have that
$$ x^\top L(e) x = (x_i - x_j)^2 \geq 0 $$
and
$$ x^\top L(G) x = \sum_{\{i,j\} \in E} w_{\{i,j\}}(x_i - x_j)^2 \geq 0, $$
and therefore $L(G)$ is PSD as claimed.

2.2 Laplacian Quadratic Form

Note that the quadratic form of the Laplacian has a nice physical interpretation. Imagine each edge $\{i,j\}$ is a spring with stiffness $w_{\{i,j\}}$ and that the vertices designate which springs are linked together. Given a vector $x \in \mathbb{R}^V$, imagine putting one end of spring $\{i,j\}$ at position $x_i$ on a line and the other end at position $x_j$, for each edge. If we do this, then $x^\top L(G) x = \sum_{\{i,j\} \in E} w_{\{i,j\}}(x_i - x_j)^2$ is the total potential energy of the springs. In other words, $x^\top L(G) x$ corresponds to the energy needed to pin each connection point $i \in V$ at position $x_i$ on the line, i.e. the energy to lay the graph out along a line.

There are several nice implications of this analysis. First, suppose that for some $S \subseteq V$ we have $x = \vec{1}_S$, the indicator vector of the set $S$, where for all $i \in V$
$$ \vec{1}_S(i) = \begin{cases} 1 & i \in S \\ 0 & i \notin S. \end{cases} $$
What is $x^\top L x$ in this case? We have
$$ x^\top L x = \vec{1}_S^\top L \vec{1}_S = \sum_{\{i,j\} \in E} w_{\{i,j\}}(x_i - x_j)^2 = \sum_{\{i,j\} \in \partial(S)} w_{\{i,j\}} = w(\partial(S)), $$
in other words, the total weight of the edges cut by $S$. Consequently, the quadratic form of $L$ specialized to boolean vectors, i.e. $x \in \{0,1\}^V$, gives the size of every cut in the graph.
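Both the edge decomposition of the quadratic form and the cut interpretation are easy to verify numerically. The sketch below, reusing the invented example graph from the previous block, checks that $x^\top L x$ equals the weighted sum of squared differences across edges, and that plugging in an indicator vector $\vec{1}_S$ returns the cut weight $w(\partial(S))$.

```python
import numpy as np

# Same illustrative graph as above (weights are invented, not from the notes).
n = 4
edges = {(0, 1): 2.0, (1, 2): 1.0, (2, 3): 3.0, (0, 2): 0.5}

L = np.zeros((n, n))
for (i, j), w in edges.items():
    L[i, i] += w; L[j, j] += w   # degrees accumulate on the diagonal
    L[i, j] -= w; L[j, i] -= w   # minus the edge weight off the diagonal

# Edge decomposition: x^T L x = sum over edges of w_e * (x_i - x_j)^2.
x = np.array([0.3, -1.0, 2.0, 0.0])
quad = sum(w * (x[i] - x[j]) ** 2 for (i, j), w in edges.items())
assert np.isclose(x @ L @ x, quad)

# Indicator vector of S: the quadratic form is the weight of the cut (S, V \ S).
S = {0, 3}
ind = np.array([1.0 if v in S else 0.0 for v in range(n)])
cut = sum(w for (i, j), w in edges.items() if (i in S) != (j in S))
assert np.isclose(ind @ L @ ind, cut)
print(cut)   # 5.5 here: edges {0,1}, {2,3}, and {0,2} cross the cut
```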
2.3 Incidence Matrix Decomposition

Another nice implication of our analysis in deriving the quadratic form of a Laplacian is that it gives us another way of decomposing Laplacian matrices. Let $e_1, \ldots, e_m$ denote the edges of the graph and suppose we pick an arbitrary orientation of the edges so that $e_i = (u_i, v_i)$ for all $i \in [m]$. Now, we define the incidence matrix of $G$ to be $B(G) \in \mathbb{R}^{E \times V}$, where
$$
B(G) = \begin{pmatrix} \vec{\delta}_{e_1}^\top \\ \vec{\delta}_{e_2}^\top \\ \vdots \\ \vec{\delta}_{e_m}^\top \end{pmatrix}
= \begin{pmatrix} (\vec{1}_{u_1} - \vec{1}_{v_1})^\top \\ (\vec{1}_{u_2} - \vec{1}_{v_2})^\top \\ \vdots \\ (\vec{1}_{u_m} - \vec{1}_{v_m})^\top \end{pmatrix},
$$
and we define the weight matrix of $G$ to be $W(G) \in \mathbb{R}^{E \times E}$, the diagonal matrix associated with $w$, i.e.
$$ W(G)_{e_1 e_2} = \begin{cases} w_{e_1} & e_1 = e_2 \\ 0 & \text{otherwise.} \end{cases} $$
Now, with these definitions established, we have
$$ B^\top W B = \sum_{e \in E} w_e \vec{\delta}_e \vec{\delta}_e^\top = \sum_{e \in E} w_e L(e) = L(G). $$
Note that if we define $W^{1/2}$ to be $W$ with each diagonal entry replaced by its square root, then we have $L = (W^{1/2}B)^\top (W^{1/2}B)$, and therefore $L$ is clearly PSD, as $x^\top L x = \|W^{1/2}Bx\|_2^2$. Consequently, we can prove that a matrix $L$ is PSD by providing an explicit factorization $L = M^\top M$ for some $M$. It can be shown that all PSD matrices admit such a factorization; that for Laplacians it can be computed so easily is a particularly nice property.

3 Linear Algebra Review

Now that we have a basic understanding of the Laplacian matrix and its quadratic form, our next step is to take a closer look at the eigenvectors and eigenvalues of this matrix. Before we do so, we review some linear algebra regarding eigenvalues and eigenvectors of symmetric matrices. First, we recall the definition of an eigenvector and an eigenvalue.

Definition 3 (Eigenvalues and Eigenvectors). A vector $v$ is an eigenvector of a matrix $M$ with eigenvalue $\lambda$ if and only if $Mv = \lambda v$.

Next, we recall some basic properties of eigenvectors of symmetric matrices.

Lemma 4. Let $M \in \mathbb{R}^{n \times n}$ be a symmetric matrix and let $v_1, v_2 \in \mathbb{R}^n$ be eigenvectors of $M$ with eigenvalues $\lambda_1$ and $\lambda_2$ respectively. If $\lambda_1 \neq \lambda_2$ then $v_1 \perp v_2$, and if $\lambda_1 = \lambda_2$ then for all $\alpha, \beta \in \mathbb{R}$ we have that $\alpha v_1 + \beta v_2$ is an eigenvector of $M$ with eigenvalue $\lambda_1$.

Proof. Note that $v_1^\top M v_2 = \lambda_2 v_1^\top v_2$ by assumption. Furthermore, since $M$ is symmetric we have $M^\top = M$ and thus $v_1^\top M v_2 = (Mv_1)^\top v_2 = \lambda_1 v_1^\top v_2$. Combining the two yields $(\lambda_1 - \lambda_2)\, v_1^\top v_2 = 0$, so if $\lambda_1 \neq \lambda_2$ then $v_1^\top v_2 = 0$, i.e. $v_1 \perp v_2$. For the second claim, by linearity $M(\alpha v_1 + \beta v_2) = \alpha\lambda_1 v_1 + \beta\lambda_1 v_2 = \lambda_1(\alpha v_1 + \beta v_2)$.
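To make the incidence factorization concrete, the following sketch (same illustrative graph as before, with an arbitrary orientation chosen for each edge) forms $B$ and $W$, verifies $B^\top W B = L$ and $x^\top L x = \|W^{1/2}Bx\|_2^2$, and uses numpy's `eigh`, which returns the eigenvalues and an orthonormal set of eigenvectors of a symmetric matrix, to confirm the spectral facts above.

```python
import numpy as np

# Same illustrative graph; each edge gets an arbitrary orientation (u, v).
n = 4
edges = [((0, 1), 2.0), ((1, 2), 1.0), ((2, 3), 3.0), ((0, 2), 0.5)]
m = len(edges)

B = np.zeros((m, n))             # incidence matrix: row k is delta_{e_k}^T
W = np.zeros((m, m))             # diagonal weight matrix
for k, ((u, v), w) in enumerate(edges):
    B[k, u], B[k, v] = 1.0, -1.0
    W[k, k] = w

L = B.T @ W @ B                  # the factorization B^T W B = L(G)

# L = (W^{1/2} B)^T (W^{1/2} B), so the quadratic form is a squared norm.
x = np.random.randn(n)
assert np.isclose(x @ L @ x, np.linalg.norm(np.sqrt(W) @ B @ x) ** 2)

# eigh: non-negative eigenvalues (PSD) and an orthonormal eigenbasis,
# consistent with Lemma 4.
lam, U = np.linalg.eigh(L)
assert lam.min() > -1e-10
assert np.allclose(U.T @ U, np.eye(n))
```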