NEST Conference 2019
A Forum for Users & Developers

Conference Program
Norwegian University of Life Sciences, Ås, Norway
24–25 June 2019

Monday, 24th June

11:00 Registration and lunch
12:30 Opening (Dean Anne Cathrine Gjærde, Hans Ekkehard Plesser)
12:50 GeNN: GPU-enhanced neural networks (James Knight)
13:25 Communication sparsity in distributed Spiking Neural Network Simulations to improve scalability (Carlos Fernandez-Musoles)
14:00 Coffee & Posters
15:00 Sleep-like slow oscillations induce hierarchical memory association and synaptic homeostasis in thalamo-cortical simulations (Pier Stanislao Paolucci)
15:20 Implementation of a Frequency-Based Hebbian STDP in NEST (Alberto Antonietti)
15:40 Spike Timing Model of Visual Motion Perception and Decision Making with Reinforcement Learning in NEST (Petia Koprinkova-Hristova)
16:00 What's new in the NEST user-level documentation (Jessica Mitchell)
16:20 ICEI/Fenix: HPC and Cloud infrastructure for computational neuroscientists (Jochen Eppler)
17:00 NEST Initiative Annual Meeting (members only)
19:00 Conference Dinner

Tuesday, 25th June

09:00 Construction and characterization of a detailed model of mouse primary visual cortex (Stefan Mihalas)
09:45 Large-scale simulation of a spiking neural network model consisting of cortex, thalamus, cerebellum and basal ganglia on the K computer (Jun Igarashi)
10:10 Simulations of a multiscale olivocerebellar spiking neural network in NEST: a case study (Alice Geminiani)
10:30 NEST3 Quick Preview (Stine Vennemo & Håkon Mørk)
10:35 Coffee
10:50 NEST3 Hands-on Session (Håkon Mørk & Stine Vennemo)
11:40 A NEST CPG for pythonic motor control: Servomander (Harry Howard)
12:05 NEST desktop: A web-based GUI for NEST simulator (Sebastian Spreizer)
12:30 Lunch & Posters
13:30 Tools for the Visual Analysis of Simulation Datasets (Óscar David Robles)
13:50 Neural Network Simulation Code for Extreme Scale Hybrid Systems (Kristina Kapanova)
14:10 Accelerating spiking network simulations with NEST (Susanne Kunkel)
14:30 NESTML Tutorial (Charl Linssen)
15:30 NEST Desktop Tutorial (Sebastian Spreizer)
16:30 Closing (Markus Diesmann)

List of poster presentations

Understanding the NEST community (Steffen Graber)
Existence and Detectability of High Frequency Oscillations in Spiking Network Models (Runar Helin)
Large scale modeling of the mouse brain dynamics (Lionel Kusch)
Reconstruction and simulation of the cerebellar microcircuit: a scaffold strategy to embed different levels of neuronal details (Elisa Marenzi)
Computing the Electroencephalogram (EEG) from Point-Neuron Networks (Torbjørn V. Ness)
Improving NEST network construction performance with BlockVector and Spreadsort (Håkon Mørk)
Multi-area spiking network models of macaque and human cortices (Jari Pronold)
Connectivity Concepts for Neuronal Networks (Johanna Senk)
Modeling cortical and thalamocortical mechanisms involved in alpha generation (Renan O Shinomura)
Clopath and Urbanczik-Senn plasticity: New learning rules in NEST (Jonas Stapmanns, Jan Hahne)
Presenting new NEST benchmarks (Stine B. Vennemo)
Abstracts

Oral presentations

GeNN: GPU-enhanced neural networks

James Knight1, Thomas Nowotny1
1 Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
Email: [email protected]

Spiking neural network models tend to be developed and simulated on computers or clusters of computers with standard CPU architectures. However, over the last decade GPU accelerators have not only become a common fixture in many workstations, but NVIDIA GPUs are now used in half of the top 10 supercomputers worldwide.

GeNN [1] is an open-source, cross-platform library written in C++ for generating optimized CUDA code from high-level descriptions of spiking neuron networks to run on NVIDIA GPUs. GeNN allows user-defined neuron, synapse and connectivity models to be specified as C-like code strings, making it highly extensible. Additionally, the user is free to provide their own C++ simulation loop around the simulation kernels GeNN provides. While this allows great flexibility and tight integration with visualization tools and closed-loop robotics, combined with the use of C++ for model definition it can make GeNN a somewhat daunting prospect for users accustomed to higher-level simulators. To address this, GeNN can also be used as a simulation backend for PyNN (via PyNN GeNN [2]), Brian (via Brian2GeNN [3]) and SpineML [4].

In our recent paper [5] we demonstrated how, using GeNN, a single compute node equipped with a high-end GPU could simulate a highly connected model of a cortical column [6] – consisting of 80×10³ spiking neurons and 0.3×10⁹ synapses – faster and at a lower energy cost than was possible using a CPU-based supercomputer system. Combined with the comparatively low cost and wide availability of NVIDIA GPU accelerators, this makes GeNN an ideal tool for accelerating computational neuroscience simulations as well as spiking neural network research in machine learning.

References
1. Yavuz E, et al. GeNN: a code generation framework for accelerated brain simulations. Scientific Reports, 6(November 2015), 18854. doi: 10.1038/srep18854
2. PyNN GeNN [https://github.com/genn-team/pynn_genn]
3. Stimberg M, et al. Brian2GeNN: a system for accelerating a large variety of spiking neural networks with graphics hardware. doi: 10.1101/448050
4. Richmond P, et al. From model specification to simulation of biologically constrained networks of spiking neurons. Neuroinformatics, 12(2), 307–323. doi: 10.1007/s12021-013-9208-z
5. Knight J, et al. GPUs Outperform Current HPC and Neuromorphic Solutions in Terms of Speed and Energy When Simulating a Highly-Connected Cortical Model. Frontiers in Neuroscience, 12(December), 1–19. doi: 10.3389/fnins.2018.00941
6. Potjans TC, et al. The Cell-Type Specific Cortical Microcircuit: Relating Structure and Activity in a Full-Scale Spiking Network Model. Cerebral Cortex, 24(3), 785–806. doi: 10.1093/cercor/bhs358

Copyright 2019 James Knight and Thomas Nowotny under Creative Commons Attribution License (CC BY 4.0).
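The PyNN route mentioned in the abstract is the gentlest entry point to GeNN. The following is a minimal, illustrative sketch of a PyNN script running on the GeNN backend; it uses only standard PyNN calls, and the population sizes, parameters and labels are arbitrary placeholders, not code from the authors.

```python
# Minimal sketch: driving GeNN through its PyNN interface (pynn_genn).
# Illustrative only; sizes and parameters are arbitrary placeholders.
import pynn_genn as sim

sim.setup(timestep=0.1)  # simulation resolution in ms

# Two populations of leaky integrate-and-fire neurons
exc = sim.Population(80, sim.IF_curr_exp(tau_m=20.0), label="exc")
inh = sim.Population(20, sim.IF_curr_exp(tau_m=10.0), label="inh")

# Sparse random connectivity; GeNN generates and compiles CUDA kernels
# from this high-level description before the simulation starts.
sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.5, delay=1.0))

exc.record("spikes")
sim.run(1000.0)  # simulate one biological second
sim.end()
```

Because the script is plain PyNN, the same model description can be pointed at NEST or another supported backend simply by changing the import, which makes cross-simulator comparisons straightforward.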
Communication sparsity in distributed Spiking Neural Network Simulations to improve scalability

Carlos Fernandez-Musoles1, Daniel Coca1, Paul Richmond2
1 Automatic Control and Systems Engineering, The University of Sheffield, Sheffield, UK
2 Computer Science, The University of Sheffield, Sheffield, UK
Email: [email protected]

In the last decade there has been a surge in the number of big science projects seeking a comprehensive understanding of the functions of the brain, using Spiking Neural Network (SNN) simulations to aid discovery and experimentation. Such an approach increases the computational demands on SNN simulators: if natural-scale, brain-size simulations are to be realised, it is necessary to use parallel and distributed models of computing. Communication is recognised as the dominant part of distributed SNN simulations [1,2]. As the number of compute nodes increases, the proportion of time the simulation spends in useful computing (the computational efficiency) is reduced, which limits scalability.

This work targets the three phases of communication in distributed simulations – implicit synchronisation, process handshake and data exchange – to improve overall computational efficiency. We introduce a connectivity-aware allocation of neurons to compute nodes by modelling the SNN as a hypergraph. Partitioning the hypergraph to reduce interprocess communication increases the sparsity of the communication graph. We further propose dynamic sparse exchange [3] as an improvement over simple point-to-point exchange for sparse communications. Results show a combined gain when hypergraph-based allocation and dynamic sparse communication are used together, increasing computational efficiency by up to 40.8 percentage points and reducing simulation time by up to 73%. The findings are applicable to other distributed complex-system simulations in which communication can be modelled as a hypergraph network.

Acknowledgements
The authors would like to thank Dr. Andrew Turner from EPCC for his support in supplying extra computational resources on ARCHER, and Dr. Hans Ekkehard Plesser and Dr. Ivan Raikov for review comments which greatly improved the quality of our paper.

References
1. Brette R, Goodman D. Simulating spiking neural networks on GPU. Network: Computation in Neural Systems, 23(4), 167–182 (2012). doi: 10.3109/0954898X.2012.730170
2. Zenke F, Gerstner W. Limits to high-speed simulations of spiking neural networks using general-purpose computers. Frontiers in Neuroinformatics, 8(September), 76 (2014). doi: 10.3389/fninf.2014.00076
3. Hoefler T, Siebert C, Lumsdaine A. Scalable communication protocols for dynamic sparse data exchange. In Proceedings of the 15th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Bangalore, India, p. 159 (2010). doi: 10.1145/1837853.1693476

Copyright 2019 Fernandez-Musoles, Coca, Richmond under Creative Commons Attribution License (CC BY 4.0).
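To make the allocation idea concrete: the network is viewed as a hypergraph in which each presynaptic neuron together with its postsynaptic targets forms one hyperedge, and neurons are assigned to processes so that as few hyperedges as possible span process boundaries. The toy Python sketch below conveys this with a deliberately naive greedy heuristic; it is not the authors' implementation, which relies on dedicated hypergraph partitioning tooling, and all names in it are invented for illustration.

```python
# Toy illustration of connectivity-aware neuron-to-node allocation.
# Each presynaptic neuron and its postsynaptic targets form a hyperedge.
# A real implementation would use a dedicated hypergraph partitioner;
# this greedy sketch only conveys the idea of keeping synaptically
# coupled neurons on the same process. Assumes every neuron appears
# as a key of `targets`.
from collections import defaultdict

def greedy_partition(targets, n_parts, max_load):
    """targets: dict neuron -> list of postsynaptic neurons."""
    part = {}                      # neuron -> assigned process
    load = [0] * n_parts
    neighbours = defaultdict(set)  # symmetric view of the hyperedges
    for pre, posts in targets.items():
        for post in posts:
            neighbours[pre].add(post)
            neighbours[post].add(pre)
    for n in targets:
        # Prefer the process that already holds most synaptic partners,
        # breaking ties by load and skipping processes that are full.
        scores = [0] * n_parts
        for m in neighbours[n]:
            if m in part:
                scores[part[m]] += 1
        ranked = sorted(range(n_parts), key=lambda p: (-scores[p], load[p]))
        chosen = next(p for p in ranked if load[p] < max_load)
        part[n] = chosen
        load[chosen] += 1
    return part

# Example: 6 neurons, 2 processes; tightly coupled groups end up together.
conn = {0: [1, 2], 1: [0, 2], 2: [0], 3: [4, 5], 4: [3], 5: [3, 4]}
print(greedy_partition(conn, n_parts=2, max_load=3))
# -> {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
```

Once allocation has made the communication graph sparse, dynamic sparse exchange [3] lets each process discover at runtime which peers it actually needs to exchange spikes with, instead of handshaking with every process at every communication step.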
Sleep-like slow oscillations induce hierarchical memory association and synaptic homeostasis in thalamo-cortical simulations

Capone Cristiano1, Elena Pastorelli1,2, Bruno Golosio3,4, Pier Stanislao Paolucci1
1 INFN Sezione di Roma
2 PhD