Eigenvector Centrality and Uniform Dominant Eigenvalue of Graph Components
Collins Anguzu (a,1), Christopher Engström (b,2), John Mango Magero (a,3,∗), Henry Kasumba (a,4), Sergei Silvestrov (b,5), Benard Abola (c,6)

(a) Department of Mathematics, School of Physical Sciences, Makerere University, Box 7062, Kampala, Uganda
(b) Division of Applied Mathematics, The School of Education, Culture and Communication (UKK), Mälardalen University, Box 883, 721 23 Västerås, Sweden
(c) Department of Mathematics, Gulu University, Box 166, Gulu, Uganda

Abstract

Eigenvector centrality is one of the outstanding measures of central tendency in graph theory. In this paper we consider the problem of calculating the eigenvector centrality of a graph partitioned into components and how this partitioning can be used. Two cases are considered: first, where a single component in the graph has the dominant eigenvalue; second, where there are at least two components that share the dominant eigenvalue of the graph. In the first case we implement the method and compare it with the usual approach (the power method) for calculating eigenvector centrality, while in the second case, with shared dominant eigenvalues, we present some theoretical and numerical results.

Keywords: eigenvector centrality, power iteration, graph, strongly connected component

This document is a result of the PhD program funded by the Makerere-Sida Bilateral Program under Project 316, "Capacity Building in Mathematics and Its Applications".

arXiv:2107.09137v1 [math.NA] 19 Jul 2021

∗ Corresponding author: John Mango Magero
Email addresses: [email protected] (Collins Anguzu), [email protected] (Christopher Engström), [email protected] (John Mango Magero), [email protected] (Henry Kasumba), [email protected] (Sergei Silvestrov), [email protected] (Benard Abola)

1 PhD student in Mathematics, Makerere University.
2 Senior Lecturer, Mälardalen University.
3 Professor of Mathematics, Makerere University (corresponding author).
4 Senior Lecturer, Makerere University.
5 Professor of Mathematics and Applied Mathematics, Mälardalen University.
6 Lecturer, Gulu University.

Preprint submitted to Journal of Applied Numerical Mathematics, July 21, 2021

1. Introduction

1.1. Ranking vertices in a network

In the contemporary world, it is everyone's desire to be popular, to have one's products popular, or to be associated with popular figures. The notion of popularity comes along with the idea of hierarchically assembling or ordering items of certain peculiar characteristics. In other words, items of a particular category are ranked in order of importance.

One of the first ranking algorithms is the PageRank algorithm. The PageRank algorithm [1, 2] was developed by Sergey Brin and Larry Page in 1998 to rank web pages. Since then, the web search engine (now the famous Google) has made its rounds in the world of data science.

Alongside PageRank, other measures of central tendency have emerged over the years. In this paper we look at another important centrality measure, namely eigenvector centrality. It has several applications in social networks, engineering and telecommunications. Various algorithms are used to calculate the eigenvector centrality of a graph, with the most efficient being the power method.

In this paper we derive a method, blended with the usual power iteration, for component-wise computation of eigenvector centrality. We then compare this method with the power method applied to the whole graph, in terms of the number of iterations and the computational time.
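For illustration, the following is a minimal sketch of this iteration in Python, assuming a dense NumPy adjacency matrix; the tolerance, iteration cap and the small example graph are arbitrary choices of this sketch and not the settings used in the experiments reported later.

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    """Approximate the dominant eigenvector of A^T by the iteration in relation (1)."""
    n = A.shape[0]
    x = np.ones(n) / n                       # starting vector with a nonzero component along the dominant eigenvector
    At = A.T
    for _ in range(max_iter):
        y = At @ x                           # apply A^T
        x_new = y / np.linalg.norm(y)        # normalise, as in relation (1)
        if np.linalg.norm(x_new - x) < tol:  # stop once successive iterates agree
            x = x_new
            break
        x = x_new
    lam = x @ (At @ x) / (x @ x)             # Rayleigh quotient estimate of the dominant eigenvalue
    return x, lam

# Illustrative example: adjacency matrix of a small undirected triangle graph.
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
centrality, lam = power_method(A)
print(centrality)   # eigenvector centrality scores (uniform for this symmetric example)
print(lam)          # approximately 2, the dominant eigenvalue
```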
We begin with the case where the strongly connected components have varying dominant eigenvalues, so that there exists a component with the overall dominant eigenvalue. Furthermore, we turn our attention to components that have the same dominant eigenvalue in a sub-graph that forms a single block of strongly connected components. Also of keen interest is the case where the graph has several independent blocks of strongly connected components, with all components having the same dominant eigenvalue.

It is well known that eigenvector centrality measures the level of importance of a vertex within a graph. This is achieved by awarding a score to every vertex in the graph, where the score depends on the number and strength of connections to that vertex. A vertex joined by links from popular vertices scores highly and is hence popular itself [3, 4]. This description qualifies PageRank as a variant of eigenvector centrality on directed graphs, since it uses back links.

1.2. Power Method

The power method is an algorithm that defines a sequence of vectors obtained recursively by the relation
$$x_{k+1} = \frac{A^{\top} x_k}{\lVert A^{\top} x_k \rVert}. \tag{1}$$
The matrix $A$ in relation (1) is a diagonalisable matrix with dominant eigenvalue $\lambda$, and $x$ is a nonzero vector (called the dominant eigenvector corresponding to $\lambda$) such that
$$A^{\top} x = \lambda x. \tag{2}$$
The algorithm in relation (1) starts with an approximation or a random vector $x_0$. For the sequence $x_k$ to converge, two assumptions must be made:
(i) the matrix $A$ must have a dominant eigenvalue;
(ii) the starting vector $x_0$ must have a nonzero component in the direction of the dominant eigenvector.

To this end, we incline our work to graphs that can be partitioned into components. Details of graph partitioning can, however, be obtained from [5], [6], [7], [8], [9] and [10]. Here we give only a brief summary of the partitioning of graphs into diagonal blocks.

1.3. Graph Partitioning

In ranking problems, partitioning network structures into strongly connected components plays a crucial role in mitigating computational drawbacks [11]. In such a case, the corresponding components yield an adjacency matrix $A$ with a block triangular form, that is,
$$A = \begin{pmatrix} W_{1,1} & W_{1,2} & \cdots & W_{1,r} \\ 0_{2,1} & W_{2,2} & \cdots & W_{2,r} \\ \vdots & \vdots & \ddots & \vdots \\ 0_{r,1} & 0_{r,2} & \cdots & W_{r,r} \end{pmatrix}, \tag{3}$$
where each diagonal block $W_{i,i}$ is of size $n_i \times n_i$, $i = 1, 2, \ldots, r$, and $n_1 + n_2 + \cdots + n_r = n$. Since $W_{i,j} = 0$ for $i > j$, the matrix $A$ is block triangular. Such a block-triangularisation can be obtained by finding the strongly connected components, as described later in Section 3.
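As a rough illustration of how such an ordering can be obtained in practice, the sketch below condenses a small directed graph into its strongly connected components with networkx and reorders the vertices by a topological order of the condensation; the library, the helper name block_triangular_order and the example graph are assumptions of this sketch, not the implementation described in Section 3.

```python
import networkx as nx

def block_triangular_order(G):
    """Vertex ordering that puts the adjacency matrix of the directed graph G
    into block triangular form, one diagonal block per strongly connected component."""
    C = nx.condensation(G)               # component DAG; C.nodes[c]["members"] holds the original vertices
    order = []
    for c in nx.topological_sort(C):     # topological order of the components
        order.extend(sorted(C.nodes[c]["members"]))
    return order

# Illustrative example: two strongly connected components {0, 1, 2} and {3, 4},
# with a single link from the first component into the second.
G = nx.DiGraph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 3)])
order = block_triangular_order(G)
A = nx.to_numpy_array(G, nodelist=order)  # adjacency matrix in the permuted order, block triangular as in (3)
print(order)
print(A)
```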
In this section we have highlighted ranking vertices in a network, the power method and graph partitioning. The rest of the article is organized as follows. In Section 2 we give a full description of the method used in this paper, citing the lemmas and theorems involved; we also mention the circumstances under which the method works or fails. Section 3 presents the implementation of the method on large data. In Section 4 we give the results of the implementation, and Section 5 presents the conclusions of the study.

2. Description of the method

2.1. An overview of the method

In the next section we show the systematic steps in deriving the method to efficiently calculate the eigenvector centrality of a graph by following the chain of components. In the same section we give some numerical examples to illustrate the method. We thus begin with a few notations that are used in this paper. The adjacency matrix $A$ shall have its transpose denoted by $A^{\top}$. The symbol $\lambda_{\max}$ shall denote the maximum of the dominant eigenvalues of the matrix $A$.

Consider a block triangular matrix $A$ whose transpose is
$$A^{\top} = \begin{pmatrix} W_{1,1} & 0 \\ W_{2,1} & W_{2,2} \end{pmatrix},$$
where $W_{1,1}$ and $W_{2,2}$ represent the two block partitions (components) of the network. When this matrix is raised to integer powers $m \ (\geq 2)$, we get
$$(A^{\top})^{2} = \begin{pmatrix} W_{1,1}^{2} & 0 \\ W_{2,1}W_{1,1} + W_{2,2}W_{2,1} & W_{2,2}^{2} \end{pmatrix},$$
$$(A^{\top})^{3} = \begin{pmatrix} W_{1,1}^{3} & 0 \\ W_{2,1}W_{1,1}^{2} + W_{2,2}W_{2,1}W_{1,1} + W_{2,2}^{2}W_{2,1} & W_{2,2}^{3} \end{pmatrix},$$
$$(A^{\top})^{4} = \begin{pmatrix} W_{1,1}^{4} & 0 \\ W_{2,1}W_{1,1}^{3} + W_{2,2}W_{2,1}W_{1,1}^{2} + W_{2,2}^{2}W_{2,1}W_{1,1} + W_{2,2}^{3}W_{2,1} & W_{2,2}^{4} \end{pmatrix},$$
and so on. For a $3 \times 3$ block matrix
$$A^{\top} = \begin{pmatrix} W_{1,1} & 0 & 0 \\ W_{2,1} & W_{2,2} & 0 \\ W_{3,1} & W_{3,2} & W_{3,3} \end{pmatrix},$$
we have
$$(A^{\top})^{2} = \begin{pmatrix} W_{1,1}^{2} & 0 & 0 \\ W_{2,1}W_{1,1} + W_{2,2}W_{2,1} & W_{2,2}^{2} & 0 \\ W_{3,1}W_{1,1} + W_{3,2}W_{2,1} + W_{3,3}W_{3,1} & W_{3,2}W_{2,2} + W_{3,3}W_{3,2} & W_{3,3}^{2} \end{pmatrix},$$
and so on.

We now consider a large network that generates $r$ block components, having links within themselves and also between the components, except for the lowest component $W_{r,r}$, which we assume has no out-links, that is, it is a dangling component. The transpose of the corresponding matrix is
$$A^{\top} = \begin{pmatrix} W_{1,1} & 0 & \cdots & 0 \\ W_{2,1} & W_{2,2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ W_{r,1} & W_{r,2} & \cdots & W_{r,r} \end{pmatrix}, \tag{4}$$
where $W_{i,i}$ for $i = 1, \ldots, r$ are the transitions within the components, $W_{i,j}$ are the transitions into the respective diagonal states in the lower levels, and $W_{i,j} = 0$ for $j > i$. Raising this matrix to an arbitrary integer power $m$ gives
$$(A^{\top})^{m} = \begin{pmatrix} W_{1,1}^{m} & 0 & \cdots & 0 \\ W_{2,1}^{(m)} & W_{2,2}^{m} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ W_{r,1}^{(m)} & W_{r,2}^{(m)} & \cdots & W_{r,r}^{m} \end{pmatrix}. \tag{5}$$
Considering the case $m = 2$, the new block $W_{2,1}^{(2)}$ has the simplified form
$$W_{2,1}^{(2)} = W_{2,1}W_{1,1} + W_{2,2}W_{2,1} = W_{2,2}^{0}W_{2,1}W_{1,1}^{1} + W_{2,2}^{1}W_{2,1}W_{1,1}^{0}.$$
Similarly, for $m = 3$ we have
$$W_{2,1}^{(3)} = W_{2,1}W_{1,1}^{2} + W_{2,2}W_{2,1}W_{1,1} + W_{2,2}^{2}W_{2,1} = W_{2,2}^{0}W_{2,1}W_{1,1}^{2} + W_{2,2}^{1}W_{2,1}W_{1,1}^{1} + W_{2,2}^{2}W_{2,1}W_{1,1}^{0}.$$
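These expansions suggest the general pattern $W_{2,1}^{(m)} = \sum_{j=0}^{m-1} W_{2,2}^{\,j}\,W_{2,1}\,W_{1,1}^{\,m-1-j}$. As a quick numerical sanity check of the block structure in (5), the sketch below compares a directly computed power $(A^{\top})^{m}$ with this block expression for a randomly generated two-component matrix; the block sizes, the random entries and the chosen power are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, m = 3, 2, 4                        # illustrative block sizes and power

# Random blocks of a two-component matrix A^T, as in the 2 x 2 block example above.
W11 = rng.random((n1, n1))
W21 = rng.random((n2, n1))
W22 = rng.random((n2, n2))

At = np.block([[W11, np.zeros((n1, n2))],
               [W21, W22]])

At_m = np.linalg.matrix_power(At, m)       # direct m-th power of A^T

# Off-diagonal block predicted by the expansions: W21^(m) = sum_j W22^j W21 W11^(m-1-j).
W21_m = sum(np.linalg.matrix_power(W22, j) @ W21 @ np.linalg.matrix_power(W11, m - 1 - j)
            for j in range(m))

print(np.allclose(At_m[:n1, :n1], np.linalg.matrix_power(W11, m)))   # True: top-left block is W11^m
print(np.allclose(At_m[n1:, n1:], np.linalg.matrix_power(W22, m)))   # True: bottom-right block is W22^m
print(np.allclose(At_m[n1:, :n1], W21_m))                            # True: lower-left block matches the expansion
print(np.allclose(At_m[:n1, n1:], np.zeros((n1, n2))))               # True: upper-right block remains zero
```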