LAB 8: Spectral Clustering
MATH 150 Linear Algebra, Spring 2019

Graphs

A graph is a system of vertices and edges, with an edge between two vertices indicating some kind of relationship between them. In general, edges may carry arrows to indicate direction, or may loop from a vertex back to itself. There is an entire field of study devoted to graphs, with many problems that are easy to understand but hard to solve. In this lab, we'll focus on a particular type of graph, with the following properties:

• Simple: there are no multiple edges between any two vertices, and no loops from a vertex to itself.
• Labeled: each vertex is distinguished in some way, usually by numbering. The vertices are meant to represent particular objects, people, places, or other things that cannot be considered equivalent.
• Undirected: no edge has an arrow indicating a "one-way" relationship between its vertices.

An example of just such a graph is below:

[Figure: a graph on the vertices 1-6 with edges {1,5}, {2,3}, {2,4}, {2,5}, {3,4}, {3,6}, and {4,6}]

Note that the placement of the vertices means nothing, nor does the length of an edge indicate anything about the relationship between the vertices. In fact, the graph below is completely equivalent to the one above:

[Figure: the same graph redrawn with its vertices in different positions]

All of the vertices have the same relationships (as defined by the edges) that they did before, so the meaning of the graph is not changed.

Adjacency Matrix

The adjacency matrix of a graph has an entry of 1 in the ith row and jth column if there is an edge between vertices i and j, and a 0 otherwise. The adjacency matrix for the graph above is

    A = \begin{bmatrix}
        0 & 0 & 0 & 0 & 1 & 0 \\
        0 & 0 & 1 & 1 & 1 & 0 \\
        0 & 1 & 0 & 1 & 0 & 1 \\
        0 & 1 & 1 & 0 & 0 & 1 \\
        1 & 1 & 0 & 0 & 0 & 0 \\
        0 & 0 & 1 & 1 & 0 & 0
        \end{bmatrix}

Degree Matrix

The degree matrix of a graph is a diagonal matrix with an entry of k in row i and column i if vertex i has k edges attached to it. For our graph, the degree matrix is

    D = \begin{bmatrix}
        1 & 0 & 0 & 0 & 0 & 0 \\
        0 & 3 & 0 & 0 & 0 & 0 \\
        0 & 0 & 3 & 0 & 0 & 0 \\
        0 & 0 & 0 & 3 & 0 & 0 \\
        0 & 0 & 0 & 0 & 2 & 0 \\
        0 & 0 & 0 & 0 & 0 & 2
        \end{bmatrix}

We can also read the degree matrix off of the adjacency matrix: the diagonal entry in row i of D is the sum of the entries in row i of A.

The Laplacian

The Laplacian matrix of a (simple) graph is the difference between its degree matrix and its adjacency matrix, L = D - A. In our example,

    L = D - A = \begin{bmatrix}
        1 & 0 & 0 & 0 & -1 & 0 \\
        0 & 3 & -1 & -1 & -1 & 0 \\
        0 & -1 & 3 & -1 & 0 & -1 \\
        0 & -1 & -1 & 3 & 0 & -1 \\
        -1 & -1 & 0 & 0 & 2 & 0 \\
        0 & 0 & -1 & -1 & 0 & 2
        \end{bmatrix}

The Fiedler vector

The Fiedler vector of a graph is an eigenvector of the Laplacian matrix of that graph. Specifically, it's the eigenvector associated with the second smallest eigenvalue, called the "spectral gap," of the Laplacian. Even in this small example, finding the eigenvectors of a 6 x 6 matrix by hand is labor intensive, so we'll have Scilab compute everything for us. First define the matrix and check that it's entered correctly:

--> L = [1 0 0 0 -1 0; 0 3 -1 -1 -1 0; 0 -1 3 -1 0 -1; 0 -1 -1 3 0 -1; -1 -1 0 0 2 0; 0 0 -1 -1 0 2]
 L  =
   1.   0.   0.   0.  -1.   0.
   0.   3.  -1.  -1.  -1.   0.
   0.  -1.   3.  -1.   0.  -1.
   0.  -1.  -1.   3.   0.  -1.
  -1.  -1.   0.   0.   2.   0.
   0.   0.  -1.  -1.   0.   2.
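Rather than typing L entry by entry, we could also let Scilab build it from the adjacency matrix. Here is a minimal sketch (not part of the lab's required steps; the variable names A, D, and L are just suggestions), using the fact that the degrees are the row sums of A:

// Build the Laplacian of our example graph from its adjacency matrix.
A = [0 0 0 0 1 0; 0 0 1 1 1 0; 0 1 0 1 0 1; 0 1 1 0 0 1; 1 1 0 0 0 0; 0 0 1 1 0 0];
D = diag(sum(A, "c"));   // sum(A, "c") returns the row sums, i.e. the degrees
L = D - A                // display L to check it against the hand computation

Either way, the resulting matrix should match the Laplacian computed by hand above.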
Next, use the spec function to retrieve the eigenvalues and eigenvectors of the matrix. The function actually returns two matrices: one with diagonal entries corresponding to the eigenvalues of L, and another whose columns are the corresponding eigenvectors, in the same order. Only the eigenvalues are returned when we type spec(L):

--> spec(L)
 ans  =
   8.327D-16
   0.4384472
   2.
   3.
   4.
   4.5615528

But we can see everything we need if we give names to the two matrices, like evals and evecs:

--> [evecs, evals] = spec(L)
 evals  =
   7.078D-16   0.          0.   0.   0.   0.
   0.          0.4384472   0.   0.   0.   0.
   0.          0.          2.   0.   0.   0.
   0.          0.          0.   3.   0.   0.
   0.          0.          0.   0.   4.   0.
   0.          0.          0.   0.   0.   4.5615528
 evecs  =
   0.4082483  -0.7018088  -0.5         0.2886751   2.490D-16  -0.0863966
   0.4082483   0.0863966   0.5         0.2886751  -8.097D-16  -0.7018088
   0.4082483   0.3077061   0.          0.2886751  -0.7071068   0.3941027
   0.4082483   0.3077061   1.138D-16   0.2886751   0.7071068   0.3941027
   0.4082483  -0.3941027   0.5        -0.5773503   0.          0.3077061
   0.4082483   0.3941027  -0.5        -0.5773503  -8.882D-16  -0.3077061

Note: The smallest eigenvalue of a Laplacian is always exactly 0. However, Scilab computes the eigenvalues numerically and is subject to rounding error, which is why it reports a tiny value like 8.327D-16 instead. This is a good example of why numerical approximations can be misleading!

The second smallest eigenvalue is λ ≈ 0.4384472. There may be a bit of rounding error here too, but the exact value does not matter. The corresponding eigenvector, the second column of evecs, is the Fiedler vector we want:

    \vec{f} = \begin{bmatrix} -0.7018088 \\ 0.0863966 \\ 0.3077061 \\ 0.3077061 \\ -0.3941027 \\ 0.3941027 \end{bmatrix}

Clustering Data

The Fiedler vector we just found encodes a method for finding the minimum "graph cut" needed to split the graph into two connected groups. The minimum cut is the one that requires breaking the fewest edges, so it effectively splits our vertices into two strongly related groups. Looking at the graph, you may be able to predict how this will turn out. The vertices corresponding to rows of the Fiedler vector with positive entries form one cluster, and those corresponding to rows with negative entries form the second cluster:

    \vec{f} = \begin{bmatrix} -0.7018088 \\ 0.0863966 \\ 0.3077061 \\ 0.3077061 \\ -0.3941027 \\ 0.3941027 \end{bmatrix} \Rightarrow \begin{bmatrix} - \\ + \\ + \\ + \\ - \\ + \end{bmatrix}

So vertices 1 and 5 form one cluster, with vertices 2, 3, 4, and 6 forming the other. The minimal cut passes through only 1 edge, the edge between vertices 2 and 5, to form these two connected components:

[Figure: the graph with the single cut edge separating {1, 5} from {2, 3, 4, 6}]

4-Way Clustering

The eigenvectors of the Laplacian can also be used to find the minimal way to lay out the graph on a Cartesian grid so that related vertices appear connected in the same quadrants. We then "cut" the edges lying across each axis to form four connected clusters. To do so, we'll use the second and third eigenvectors: the Fiedler vector and the one next to it, corresponding to the next eigenvalue up, λ = 2. In this case, the eigenvectors we're interested in are

    \vec{x} = \begin{bmatrix} -0.7018088 \\ 0.0863966 \\ 0.3077061 \\ 0.3077061 \\ -0.3941027 \\ 0.3941027 \end{bmatrix} \quad \text{and} \quad \vec{y} = \begin{bmatrix} -0.5 \\ 0.5 \\ 0 \\ 1.138 \times 10^{-16} \\ 0.5 \\ -0.5 \end{bmatrix}

Now for each vertex i, plot a point at (\vec{x}[i], \vec{y}[i]). For instance, vertex 1 would be plotted at (-0.7018088, -0.5). Doing so for each vertex yields:

[Figure: the six vertices plotted in the plane: vertex 1 in quadrant III, vertex 5 in quadrant II, vertices 2, 3, and 4 in quadrant I, and vertex 6 in quadrant IV]

All of the original edges are there too, just "linearized" to lie along a single path. Next, form clusters by which quadrant each vertex falls in. It's hard to see, but vertices 3 and 4 plot at essentially the same position in quadrant I. So the clusters would be

    {2, 3, 4}   {5}   {1}   {6}

Returning to the graph to highlight the new clusters:

[Figure: the graph redrawn with the four clusters {2, 3, 4}, {5}, {1}, and {6} highlighted]
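One way to reproduce this plot and the quadrant test in Scilab, reusing the evecs matrix from the session above, is sketched below; the plotting and printing calls are just one option, not the only one:

// Plot each vertex at (x(i), y(i)) and report its coordinates.
x = evecs(:, 2);    // Fiedler vector (second smallest eigenvalue)
y = evecs(:, 3);    // eigenvector for the next eigenvalue, λ = 2
plot(x, y, "o");    // scatter plot of the six vertices
for i = 1:6
    mprintf("vertex %d: (%f, %f)\n", i, x(i), y(i));
end

The quadrant of vertex i, and hence its cluster, is then read off from the signs of x(i) and y(i).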
Collaboration Matrices

The matrix below represents the collaboration between faculty, and can also be considered an adjacency matrix. Each row and column represents a faculty member, labeled 1 through 15. A 1 in the matrix indicates that the faculty in that row and column have collaborated on a research project. A 0 indicates that the faculty have never worked together.

    A = \begin{bmatrix}
        0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\
        1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\
        0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
        1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\
        1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\
        0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 \\
        0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
        1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\
        0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 \\
        0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\
        1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 \\
        0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 \\
        0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 \\
        1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
        0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0
        \end{bmatrix}

The department is considering rearranging its offices so that faculty who work together are in the same area of the building. A look at the graph suggests this will be difficult, as there is no obvious "split" in faculty collaboration.
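As a starting point for analyzing this graph, the pipeline from earlier applies unchanged. The sketch below assumes the 15 x 15 matrix has been entered into Scilab as A, and leaves the actual eigenvalues and clusters for you to compute:

// Spectral 2-way clustering of the faculty collaboration graph.
D = diag(sum(A, "c"));     // degree matrix from the row sums of A
L = D - A;                 // Laplacian
[evecs, evals] = spec(L);
f = evecs(:, 2);           // Fiedler vector
cluster1 = find(f < 0)     // faculty with negative Fiedler entries
cluster2 = find(f >= 0)    // faculty with nonnegative entries

Repeating the quadrant trick with the third eigenvector then refines the split into four office areas.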