Clustering Scientific Collaboration Networks


Fabricio Murai, Haibin Huang, Jie Bing

December 15, 2011

Abstract

In this work, we evaluate the performance of different clustering methods applied to scientific collaboration networks. In particular, we study two subsets of the coauthorship network obtained from the DBLP dataset: the network composed of the faculty of the CS department at UMass, and the largest connected component of the coauthorship network (containing about 848,000 nodes). We apply the following methods to both weighted and unweighted versions of those networks: Spectral and Hierarchical Clustering, the Kernighan-Lin algorithm, Spectral Partitioning, Edge betweenness and Leading eigenvector community detection. Our results show that methods that are not spectral-based perform better in general, but are clearly not scalable. On the other hand, we show empirically that Spectral Clustering can perform almost as well as the non-spectral-based methods while remaining scalable, as long as we use an approximate (but accurate) spectral decomposition. Finally, we include a discussion of how to handle large sparse matrices.

1 Introduction

Most real networks exhibit non-trivial characteristics such as long-tail degree distributions, small distances and high clusterization. In particular, consider scientific collaboration networks, where nodes represent authors and an edge between two nodes indicates that they have published at least one paper together. It is well known that these networks are highly clusterized, meaning that there are groups of scientists that are much more likely to be connected to each other than if they were randomly selected.

Although graph visualization techniques can be used to find clusters in very small networks, this becomes impracticable as graphs grow larger. Fortunately, many methods for graph clustering have been proposed in the literature that can deal with thousands of nodes, including the Kernighan-Lin algorithm, hierarchical clustering and edge-betweenness community detection. For even larger graphs (e.g., hundreds of thousands of nodes), some clustering methods cannot be directly applied, and some cannot be used at all. For instance, spectral-based methods include a spectral decomposition step, and hence can only be used if we replace this step by an approximation that can be computed efficiently. However, we will see that some approximations can lead to poor clusterings in comparison to the exact decomposition.

It is worth noting that, unlike regression or classification problems, a clustering problem has no single right answer. Nevertheless, in the context of graphs one can define the quality of a clustering with respect to the connectivity (i.e., the relative number of edges) inside clusters (e.g., modularity) or across clusters (e.g., cut set size).

In this work, we compare different clustering methods applied to scientific collaboration networks in Computer Science extracted from the DBLP dataset. This dataset comprises more than 1 million authors that together published more than 2.8 million papers as of the date of this work. As for the methods, we compare Spectral and Hierarchical Clustering, the Kernighan-Lin algorithm, Spectral Partitioning, Edge betweenness and Leading eigenvector community detection. The clustering is then assessed using the modularity metric.

variable                 description
A                        adjacency matrix of G
n                        number of vertices (nodes) in G
m                        number of edges in G
k_i = \sum_j A_{ij}      degree of vertex i (in the unweighted graph)
K                        vector such that K_i = k_i
D                        diagonal matrix such that D_{ii} = k_i
s_i                      cluster to which vertex i belongs
S = (s_1, ..., s_n)      partitioning of G
\theta(S_a, S_b)         cut set size of the bisection (S_a, S_b) of G
\delta(\cdot)            Kronecker delta

Table 1: Notation description.

2 Problem statement

Consider a set of publications and the corresponding authors extracted from the DBLP dataset. We build an undirected graph G where nodes are authors, with an edge between each pair of authors that have published at least one paper together. We work with both weighted and unweighted versions of this graph. In the weighted version, the weight A_{ij} of the edge between nodes i and j is computed as follows. Let P be the set of papers that i and j published together, and let n_p be the number of authors of paper p. We set

    A_{ij} = \sum_{p \in P} 1/n_p.

In the unweighted version, A is simply the adjacency matrix of G.

Therefore, given the (un)weighted graph G, we want to find a high-quality clustering. To do this, we apply different clustering methods and evaluate the results with respect to the modularity metric, which is explained in detail in Section 3.1. The notation we use throughout this document is summarized in Table 1.
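To make this construction concrete, the following is a minimal sketch (ours, not the authors' code) that builds the weighted or unweighted coauthorship matrix from a list of papers, each represented as a list of author names. It assumes the DBLP records have already been parsed into that form (the parsing itself is omitted), and the function name coauthorship_matrix is hypothetical.

```python
from collections import defaultdict
from itertools import combinations

import scipy.sparse as sp


def coauthorship_matrix(papers, weighted=True):
    """Build the coauthorship adjacency matrix A from a list of papers.

    `papers` is a list of author-name lists, one per paper (assumed to be
    parsed from the DBLP dump beforehand). Returns a sparse symmetric A
    and the author list indexing its rows/columns.
    """
    authors = sorted({a for p in papers for a in set(p)})
    index = {a: i for i, a in enumerate(authors)}
    weight = defaultdict(float)
    for p in papers:
        coauthors = sorted(set(p))
        n_p = len(coauthors)                 # number of authors of paper p
        for a, b in combinations(coauthors, 2):
            # each joint paper contributes 1/n_p to the edge weight
            weight[index[a], index[b]] += 1.0 / n_p
    rows, cols, vals = [], [], []
    for (i, j), w in weight.items():
        v = w if weighted else 1.0           # unweighted: A_ij = 1
        rows += [i, j]; cols += [j, i]; vals += [v, v]
    n = len(authors)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n)), authors
```

For example, coauthorship_matrix([["Alice", "Bob", "Carol"], ["Alice", "Bob"]]) gives the Alice-Bob edge weight 1/3 + 1/2, since the first paper has three authors and the second has two.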
3 Background

3.1 Modularity

Modularity is a metric proposed by Newman [1] to evaluate clustering quality. It measures how many more edges there are inside clusters than expected under a null model. In particular, the null model Newman uses is the configuration model, in which the probability of i and j being connected is given by k_i k_j / (2m). The modularity for unweighted graphs is given by

    Q(S) = \frac{1}{2m} \sum_{i,j \in V} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(s_i, s_j)    (1)

This metric is easily extended to weighted graphs by taking A to be the weighted adjacency matrix.

4 Methods

4.1 Spectral-based methods

The k-means algorithm uses Euclidean distance as a similarity measure between observations. However, we cannot directly apply k-means to the adjacency matrix A because nodes are not embedded in a Euclidean space. In fact, if we treat the rows of A as coordinates in a Euclidean space, the distances between pairs of nodes will all be very similar and will not correspond to distances in the graph. Since the k-means algorithm depends mainly on the distance measure, it will not work well with the Euclidean distance in the original data space. Hence we use spectral-based methods, which translate our data into a vector space.

4.1.1 Spectral clustering

The basic idea of spectral clustering is to map the original data into a vector space spanned by a few eigenvectors and apply the k-means algorithm in that space. The assumption is that although the data samples are high dimensional, they lie in a low-dimensional subspace of the original space. In the literature, there are several versions of spectral clustering based on different definitions of the graph Laplacian operator. Here we use the spectral clustering proposed by Shi and Malik [2], based on the normalized graph Laplacian. The algorithm is as follows:

Step 1: Compute the normalized graph Laplacian M = D^{-1/2} A D^{-1/2}
Step 2: Compute the top k eigenvectors of M
Step 3: Arrange the eigenvectors as the columns of a matrix Y
Step 4: Run k-means on the new embedding matrix Y
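Below is a minimal sketch of these four steps, together with the modularity metric of Equation (1) used to score the result. This is our illustration rather than the authors' implementation: it assumes SciPy and scikit-learn are available, and it uses scipy.sparse.linalg.eigsh as the eigensolver. Because eigsh computes only the k extreme eigenpairs via sparse matrix-vector products, it is one way to keep the decomposition step tractable on large sparse matrices, in the spirit of the scalability discussion above.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans


def spectral_clustering(A, k, seed=0):
    """Shi-Malik spectral clustering (Steps 1-4) on a sparse adjacency matrix A."""
    degrees = np.asarray(A.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(degrees))   # assumes no isolated nodes
    M = D_inv_sqrt @ A @ D_inv_sqrt                 # Step 1: M = D^{-1/2} A D^{-1/2}
    _, Y = eigsh(M, k=k, which="LA")                # Steps 2-3: top-k eigenvectors as columns
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(Y)  # Step 4


def modularity(A, labels):
    """Equation (1), accumulated cluster by cluster."""
    A = sp.csr_matrix(A)
    k_deg = np.asarray(A.sum(axis=1)).ravel()
    two_m = k_deg.sum()
    Q = 0.0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # intra-cluster weight minus the configuration-model expectation
        Q += A[idx][:, idx].sum() / two_m - (k_deg[idx].sum() / two_m) ** 2
    return Q
```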
4.1.2 Spectral partitioning

This method attempts to minimize the cut set size of a bisection of the graph. The optimization problem is defined as a function of the cut set size between the two groups:

    arg min_S \frac{1}{2} \sum_{i,j} A_{ij} \mathbf{1}(s_i \neq s_j)

It is shown in [3] that if we allow s_i to assume any value in [-1, 1], then the solution to this minimization problem is the eigenvector of the graph Laplacian corresponding to its second smallest eigenvalue, also called the Fiedler vector.

Step 1: Compute the graph Laplacian L = D - A
Step 2: Find the Fiedler vector v of L
Step 3: Run k-means on the elements of the eigenvector v

4.1.3 Leading eigenvector

Newman proposes an algorithm to maximize the modularity directly [4]. It is based on the relaxation of the following maximization problem:

    arg max_s Q = \frac{1}{4m} s^T B s    subject to    \sum_i s_i = 0,

where s_i is either -1 or 1, and B = A - \frac{K K'}{2m} is called the modularity matrix. In the relaxed version, s_i is allowed to assume any value in [-1, 1]. The solution s to the relaxed optimization problem is the eigenvector corresponding to the largest eigenvalue of the matrix B. Hence we have the Leading eigenvector algorithm:

Step 1: Compute B = A - \frac{K K'}{2m}
Step 2: Find the eigenvector v corresponding to the largest eigenvalue of B
Step 3: Run k-means on the elements of the eigenvector v

4.2 Non-spectral-based methods

4.2.1 Hierarchical Clustering

Given a similarity measure between pairs of nodes, and between groups of nodes, we can perform hierarchical clustering as follows [5]:

Step 1: Evaluate the similarity between all pairs of nodes
Step 2: Assign each node to a group of its own
Step 3: Find the pair of groups with the highest similarity and join them
Step 4: Repeat Step 3 until we have k groups

4.2.2 Kernighan-Lin algorithm

This method [6] bisects the graph repeatedly until we obtain k groups. Each bisection starts by randomly assigning nodes to one of two clusters and then swaps pairs of nodes in order to reduce the cut size \theta(S_1, S_2); a sketch of a single bisection appears after the steps below.

Step 1: Randomly divide the network into two groups, S_1 and S_2, with n_1 and n_2 nodes, respectively, marking all nodes as untouched
Step 2: For each pair of untouched nodes (i, j), i \in S_1, j \in S_2, calculate how much the cut size would change if we swapped i and j
Step 3: Swap the pair (i, j) that leads to the smallest cut size and mark both nodes as touched
Step 4: Among all the states (S_1, S_2) that the network passed through during the swapping procedure, let (S_1', S_2') be the one with the smallest cut size
Step 5: Go to Step 2 with S_1 = S_1', S_2 = S_2'
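The following is a minimal sketch of one Kernighan-Lin bisection (Steps 1-5) on a dense adjacency matrix, under our own naming conventions. It is meant to illustrate the swap-and-revert logic, not to reproduce the authors' implementation; a practical version would update the swap gains incrementally instead of recomputing them from scratch at every swap.

```python
import numpy as np


def kl_bisection(A, max_passes=10, seed=0):
    """One Kernighan-Lin bisection of a graph with (weighted) adjacency matrix A.

    Returns a boolean membership vector: True marks the nodes in S1.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    side = np.zeros(n, dtype=bool)
    side[rng.permutation(n)[: n // 2]] = True        # Step 1: random bisection

    def cut_size(s):
        return A[np.ix_(s, ~s)].sum()

    for _ in range(max_passes):
        trial = side.copy()
        untouched = np.ones(n, dtype=bool)
        best_side, best_cut = side.copy(), cut_size(side)
        while untouched[trial].any() and untouched[~trial].any():
            # Step 2: per-node gain = (weight to the other side) - (weight to own side)
            same = trial[:, None] == trial[None, :]
            g = (A * ~same).sum(axis=1) - (A * same).sum(axis=1)
            S1 = np.flatnonzero(trial & untouched)
            S2 = np.flatnonzero(~trial & untouched)
            # cut-size reduction of swapping i in S1 with j in S2: g_i + g_j - 2*A_ij
            gains = g[S1][:, None] + g[S2][None, :] - 2 * A[np.ix_(S1, S2)]
            a, b = np.unravel_index(np.argmax(gains), gains.shape)
            i, j = S1[a], S2[b]
            trial[i], trial[j] = trial[j], trial[i]  # Step 3: apply the best swap
            untouched[i] = untouched[j] = False
            if cut_size(trial) < best_cut:           # Step 4: remember the best state seen
                best_cut, best_side = cut_size(trial), trial.copy()
        if best_cut >= cut_size(side):               # no improvement: stop
            break
        side = best_side                             # Step 5: restart from the best state
    return side
```

Applying kl_bisection recursively to each half until k groups remain yields the full method described above.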