Efficient Sparse Artificial Neural Networks


Seyed Majid Naji*, Azra Abtahi*, and Farokh Marvasti*
*Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran.
Corresponding author: Azra Abtahi.
arXiv:2103.07674v1 [cs.LG] 13 Mar 2021

Abstract—The brain, as the source of inspiration for Artificial Neural Networks (ANNs), is based on a sparse structure. This sparse structure helps the brain to consume less energy, learn more easily, and generalize patterns better than any other ANN. In this paper, two evolutionary methods for introducing sparsity into ANNs are proposed. In the proposed methods, the sparse structure of a network as well as the values of its parameters are trained and updated during the learning process. The simulation results show that these two methods have better accuracy and faster convergence while they need fewer training samples compared to their sparse and non-sparse counterparts. Furthermore, the proposed methods significantly improve the generalization power and reduce the number of parameters. For example, sparsifying the ResNet47 network with the proposed methods for image classification on the ImageNet dataset uses 40% fewer parameters, while the top-1 accuracy of the model improves by 12% and 5% compared to the dense network and its sparse counterpart, respectively. As another example, the proposed methods for the CIFAR10 dataset converge to their final structure 7 times faster than their sparse counterpart, while the final accuracy increases by 6%.

Index Terms—Artificial Neural Network, Natural Neural Network, Sparse Structure, Sparsifying, Scalable Training.

I. INTRODUCTION

Artificial Neural Networks (ANNs) are inspired by the Natural Neural Networks (NNNs) which constitute the brain. Hence, scientists are trying to bring natural network functionalities to ANNs. Observations show that NNNs have sparse properties. First, neurons in NNNs fire sparsely in both the time and spatial domains [1]–[6]. Furthermore, NNNs profit from a scale-free [7], small-world [8], and sparse structure, which means each neuron connects to a very limited set of other neurons. In other words, the connections in a natural network are a very small portion of the possible connections [9]–[11]. This property results in very important features of NNNs such as low complexity, low power consumption, and strong generalization power [12], [13]. On the other hand, conventional ANNs do not have sparse structures, which leads to important differences between them and their natural counterparts. They have an extraordinarily large number of parameters and need extraordinarily large storage space, many training samples, and powerful underlying hardware [14]. Although deep learning has gained enormous success in a wide variety of domains such as image and video processing [15]–[17], biomedical processing [15], [18]–[22], speech recognition [23], physics [24], [25], and geophysics [26], the mentioned barriers make it impractical to exploit deep networks in portable and cheap applications. Sparsifying a neural network can significantly reduce the network parameters and the learning time. However, we need to find a suitable sparse structure which also leads to an acceptable accuracy.

There are three main approaches to sparsifying a neural network:

1) Training a conventional network completely and, at the end, making it sparse with pruning techniques [27]–[29].
2) Modifying the loss function to enforce sparsity [30], [31].
3) Updating a sparse structure during the learning process (evolutionary methods) [32]–[36].

The methods using the first approach train a conventional ANN. Then, they remove the least important connections, whose elimination does not significantly decrease the accuracy of the network. Hence, finding a good measure to identify such connections is the most important part of this approach. Pruning in this way guarantees good performance of the network model [28]. However, the learning time does not decrease in this approach, as a conventional non-sparse network must be trained completely. There are also methods which fine-tune the resultant pruned model to compensate for even a small drop of accuracy in the pruning procedure [37]–[39]. In [40], it is shown that a randomly initialized dense neural network contains a sub-network that can match the test accuracy of the dense network while needing the same number of training iterations (the so-called winning lottery ticket). However, finding this sub-network is computationally expensive as it needs multiple rounds of pruning and re-training starting from a dense network. Thus, several references have concentrated on finding this configuration faster [41]–[43] and with less memory [44].
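As an illustration of the first approach, the following minimal sketch prunes a trained weight matrix by zeroing its smallest-magnitude entries. Weight magnitude is used here only as an example importance measure, and the function and variable names are illustrative rather than taken from the cited methods.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of entries with the smallest magnitude.

    This is a post-training pruning step (approach 1): the dense network is
    trained first, then its least important connections are removed.
    Entries whose magnitude is at or below the cut-off are pruned.
    """
    assert 0.0 <= sparsity < 1.0
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 90% of a (hypothetical) trained 256x128 layer.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))
W_pruned = magnitude_prune(W, sparsity=0.9)
print("remaining connections:", np.count_nonzero(W_pruned), "of", W.size)
```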
In the second approach, a sparsifying term is added to the loss function in order to shape the sparse structure of the network. If the loss function is chosen properly, the network tends towards a sparse structure with good performance properties. This can be achieved by adding a sparse regularization term such as the $\ell_1$ norm. In MorphNet [45], such a term is added to the loss function. MorphNet is a method which iteratively shrinks and expands a network to find a suitable configuration; in other words, it needs multiple rounds of re-training to achieve its final configuration.

In the third approach, both the network structure and the values of the connections are trained and updated during the learning process. First, an initial sparse graph model is generated for the network. Then, at each epoch of the training phase, both the weights of the connections and the sparse structure are updated. NeuroEvolution of Augmenting Topologies (NEAT) [46] is an example of such methods, which seeks to optimize both the weights and the topology of an ANN for a given task. Although impressive results have been reported for it [47], [48], NEAT is not scalable to large networks, as it has to search a large space. References [38], [39], [49] also propose unscalable evolutionary methods which are not practical for large networks.

In [32], a scalable evolutionary method called SET is proposed. In this method, the weights are updated by the back-propagation algorithm [50], and candidate connections are chosen to be replaced by new ones in each training epoch. The candidate connections are chosen from among the weakest ones; thus, replacing them improves the structure of the model. This method profits from reduced learning time and acceptable performance.
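A minimal sketch of one rewiring epoch in the spirit of SET [32] is given below: the smallest-magnitude active connections are removed, and the same number of new connections is added at random inactive positions. The rewiring fraction `zeta`, the small-magnitude re-initialization, and all function names are illustrative assumptions, not the exact procedure or notation of [32].

```python
import numpy as np

def set_rewire_epoch(weights: np.ndarray, mask: np.ndarray,
                     zeta: float, rng: np.random.Generator):
    """One structure-update step in the spirit of SET [32] (sketch).

    weights : array holding the values of the active connections
    mask    : boolean array, True where a connection currently exists
    zeta    : fraction of active connections to rewire (assumed name)
    """
    active = np.flatnonzero(mask)
    n_rewire = int(zeta * active.size)
    if n_rewire == 0:
        return weights, mask

    # 1) Remove the weakest active connections (smallest magnitude).
    magnitudes = np.abs(weights.ravel()[active])
    weakest = active[np.argsort(magnitudes)[:n_rewire]]
    mask.ravel()[weakest] = False
    weights.ravel()[weakest] = 0.0

    # 2) Grow the same number of new connections at random empty positions.
    empty = np.flatnonzero(~mask)
    new = rng.choice(empty, size=n_rewire, replace=False)
    mask.ravel()[new] = True
    weights.ravel()[new] = rng.normal(scale=0.01, size=n_rewire)  # small init
    return weights, mask

# Example: rewire 30% of the connections of a ~10%-dense 64x32 layer.
rng = np.random.default_rng(1)
mask = rng.random((64, 32)) < 0.1
weights = rng.normal(size=(64, 32)) * mask
weights, mask = set_rewire_epoch(weights, mask, zeta=0.3, rng=rng)
print("active connections:", mask.sum())
```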
However, there are some drawbacks to this evolutionary method. Its novelty is the process through which the structure is updated: identifying weak connections and replacing them with new ones. Hence, there are two questions to be answered: 1) what is a "weak" connection, and 2) what makes a connection suitable for replacing an eliminated one? In [32], the magnitude of a connection is considered as its strength (importance), which means connections with smaller magnitudes are weaker. Furthermore, new connections are chosen randomly. It seems that these two choices can be made more efficiently by utilizing methods of evolution in natural networks.

In this paper, we propose two evolutionary sparse methods: 1) the "Path-Weight" method and 2) the "Sensitivity" method, both of which update the structure in a more principled way. The proposed methods show significant improvement in updating the structure compared to other existing methods. They show more generalization power and achieve higher accuracy on challenging datasets such as ImageNet [51]. Furthermore, they need fewer training samples and training epochs, which makes them suitable for cases with a low number of training samples.

This paper is organized as follows. In Sections II and III, we propose the "Path-Weight" and the "Sensitivity" methods, respectively. The simulation results are discussed in Section IV, and finally, we conclude the paper in Section V.

II. PATH-WEIGHT METHOD

As discussed earlier, an important part of the evolutionary methods for finding sparse structures is to find the weak connections at each training epoch; this requires a measure of connection importance. A connection with low weight magnitude in a strong path should not be eliminated in the learning procedure: by eliminating this connection, its underlying path is also eliminated, which is not desirable.

Considering the aforementioned facts, an evolutionary method leading to a sparse structure, called the Path-Weight method, is proposed. In this method, we tend to choose the weakest connections in the weakest paths and replace them with the best candidates, as opposed to the random replacements described in [32].

In the proposed method, the initial structure is based on an Erdős–Rényi random graph [52], which leads to a binomial distribution for the degree of the hidden neurons [53]. Let us denote the connection between the $i$th neuron in the $(k-1)$th layer and the $j$th neuron in the $k$th layer by $W_{i,j}^{(k)}$. In this initial distribution, the existence probability of connection $W_{i,j}^{(k)}$ is

$$P\big(W_{i,j}^{(k)}\big) = \frac{\varepsilon \big(n^{(k)} + n^{(k-1)}\big)}{n^{(k)}\, n^{(k-1)}}, \qquad (1)$$

where $n^{(k)}$ is the number of neurons in the $k$th layer and $\varepsilon$ is a parameter which controls the sparsity of the model; the higher the $\varepsilon$, the less sparse the model becomes.

In the rest of this section, the proposed method is described in three parts: identifying weak connections, adding new connections to the network in substitution of the eliminated ones, and the time-varying version of the method.

A. The Identification of Weak Connections

In each training epoch, the structure must be updated as well as the weights. Updating the structure needs a measure to identify the importance or the effectiveness of paths. Let us denote the $m$th path of the network by $p_m$ and the importance measure of $p_m$ by $I_{p_m}$. This importance measure is defined as follows:

$$I_{p_m} = \prod_{l=1}^{L} I\big(W_l^{(p_m)}\big), \qquad (2)$$

where $L$ is the number of network layers, and $W_l^{(p_m)}$ is the constituent connection of $p_m$ in the $l$th layer.
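A small sketch of these two pieces is given below: building an initial Erdős–Rényi mask for one layer from Eq. (1), and evaluating the importance of a single path with Eq. (2). Taking the per-connection importance $I(\cdot)$ to be the weight magnitude is only an assumption for illustration (the paper defines its own connection measure later in this section), and all names (`erdos_renyi_mask`, `path_importance`, `eps`) and the chosen `eps` value are illustrative.

```python
import numpy as np

def erdos_renyi_mask(n_prev: int, n_curr: int, eps: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Initial sparse connectivity of one layer following Eq. (1).

    Each connection W^{(k)}_{i,j} exists independently with probability
    eps * (n_curr + n_prev) / (n_curr * n_prev).
    """
    p = eps * (n_curr + n_prev) / (n_curr * n_prev)
    return rng.random((n_prev, n_curr)) < min(p, 1.0)

def path_importance(weights_per_layer, path):
    """Importance of one path p_m following Eq. (2).

    weights_per_layer : list of L weight matrices; matrix l maps layer l to l+1
    path              : neuron indices (i_0, i_1, ..., i_L), one per layer
    The per-connection importance I(.) is taken to be the weight magnitude,
    which is an illustrative assumption, not the paper's measure.
    """
    importance = 1.0
    for l, W in enumerate(weights_per_layer):
        importance *= abs(W[path[l], path[l + 1]])
    return importance

# Example: a tiny network with layer sizes 4 -> 8 -> 8 -> 2.
rng = np.random.default_rng(2)
sizes = [4, 8, 8, 2]
masks = [erdos_renyi_mask(sizes[l], sizes[l + 1], eps=2.0, rng=rng)
         for l in range(3)]
weights = [rng.normal(size=m.shape) * m for m in masks]
print("importance of path 0 -> 3 -> 5 -> 1:",
      path_importance(weights, (0, 3, 5, 1)))
```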