Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
ORIGINAL RESEARCH
published: 07 April 2016
doi: 10.3389/fnana.2016.00037

James C. Knight 1*, Philip J. Tully 2,3,4, Bernhard A. Kaplan 5, Anders Lansner 2,3,6 and Steve B. Furber 1

1 Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
2 Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden
3 Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
4 Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK
5 Department of Visualization and Data Analysis, Zuse Institute Berlin, Berlin, Germany
6 Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden

Edited by: Wolfram Schenck, University of Applied Sciences Bielefeld, Germany
Reviewed by: Guy Elston, Centre for Cognitive Neuroscience, Australia; Thomas Pfeil, Ruprecht-Karls-University Heidelberg, Germany
*Correspondence: James C. Knight, [email protected]

Received: 30 November 2015 | Accepted: 18 March 2016 | Published: 07 April 2016
Citation: Knight JC, Tully PJ, Kaplan BA, Lansner A and Furber SB (2016) Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware. Front. Neuroanat. 10:37. doi: 10.3389/fnana.2016.00037

ABSTRACT

SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since the rule requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step.
We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.

Keywords: SpiNNaker, learning, plasticity, digital neuromorphic hardware, Bayesian confidence propagation neural network (BCPNN), event-driven simulation, fixed-point accuracy

1. INTRODUCTION

Motor, sensory and memory tasks are composed of sequential elements and are therefore thought to rely upon the generation of temporal sequences of neural activity (Abeles et al., 1995; Seidemann et al., 1996; Jones et al., 2007). However, it remains a major challenge to learn such functionally meaningful dynamics within large-scale models using biologically plausible synaptic and neural plasticity mechanisms. Using SpiNNaker, a neuromorphic hardware platform for simulating large-scale spiking neural networks, and BCPNN, a plasticity model based on Bayesian inference, we demonstrate how temporal sequence learning could be achieved through modification of recurrent cortical connectivity and intrinsic excitability in an attractor memory network.

Spike-Timing-Dependent Plasticity (STDP; Bi and Poo, 1998) inherently reinforces temporal causality (formalized by the pairwise window sketched below), which has made it a popular choice for modeling temporal sequence learning (Dan and Poo, 2004; Caporale and Dan, 2008; Markram et al., 2011). However, to date, all large-scale neural simulations using STDP (Morrison et al., 2007; Kunkel et al., 2011) have been run on large cluster machines or supercomputers, both of which consume many orders of magnitude more power than the few watts required by the human brain. Mead (1990) suggested that the solution to this huge gap in power efficiency was to develop an entirely new breed of "neuromorphic" computer architectures inspired by the brain. Over the following years, a number of these neuromorphic architectures have been built with the aim of reducing the power consumption and execution time of large neural simulations.
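STDP itself is not restated in this article, but for orientation the standard pair-based window (a textbook form, not reproduced from this paper) makes the causality argument concrete. With \(\Delta t = t_{\text{post}} - t_{\text{pre}}\), the weight change is

\[
\Delta w(\Delta t) =
\begin{cases}
A_{+}\, e^{-\Delta t / \tau_{+}}, & \Delta t > 0 \quad \text{(pre before post: potentiation)}\\
-A_{-}\, e^{\Delta t / \tau_{-}}, & \Delta t < 0 \quad \text{(post before pre: depression)}
\end{cases}
\]

so a synapse is strengthened only when presynaptic spikes tend to precede, and thus could have caused, postsynaptic spikes.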
Large-scale neuromorphic systems have been constructed using a number of approaches: NeuroGrid (Benjamin et al., 2014) and BrainScaleS (Schemmel et al., 2010) are built using custom analog hardware; TrueNorth (Merolla et al., 2014) is built using custom digital hardware; and SpiNNaker (Furber et al., 2014) is built from software-programmable ARM processors.

Neuromorphic architectures based around custom hardware, especially the type of sub-threshold analog systems which Mead […] in advancing the simulation can be handled within a single simulation time step.

This flexibility has allowed the SpiNNaker system to be used for the simulation of large-scale cortical models with up to 5.0 × 10⁴ neurons and 5.0 × 10⁷ synapses (Sharp et al., 2012, 2014), and various forms of synaptic plasticity (Jin et al., 2010; Diehl and Cook, 2014; Galluppi et al., 2015; Lagorce et al., 2015). In the most recent of these papers, Galluppi et al. (2015) and Lagorce et al. (2015) demonstrated that Sheik et al.'s (2012) model of the learning of temporal sequences from audio data can be implemented on SpiNNaker using a voltage-gated STDP rule. However, this model only uses a small number of neurons, and Kunkel et al.'s (2011) analysis suggests that STDP alone cannot maintain the multiple, interconnected stable attractors that would allow spatio-temporal sequences to be learnt within more realistic, larger networks. This conclusion adds to growing criticism of simple STDP rules regarding their failure to generalize over experimental observations (see, e.g., Lisman and Spruston, 2005, 2010; Feldman, 2012 for reviews).

We address some of these issues by implementing spike-based BCPNN (Tully et al., 2014), an alternative to phenomenological plasticity rules that exhibits a diverse range of mechanisms, including Hebbian, neuromodulated, and intrinsic plasticity, all of which emerge from a network-level model of probabilistic inference (Lansner and Ekeberg, 1989; Lansner and Holst, 1996). BCPNN can translate correlations at different timescales into connectivity patterns through the use of locally stored synaptic traces, enabling a functionally powerful framework to study the relationship between structure and function within cortical circuits. In Sections 2.1–2.3, we describe how this learning rule can be combined with a simple point neuron model as the basis of a simplified version of Lundqvist et al.'s (2006) cortical attractor memory model. In Sections 2.4 and 2.5, we then describe how this model can be simulated efficiently on SpiNNaker using an approach based on a recently proposed event-driven implementation of BCPNN (Vogginger et al., 2015). We then compare the accuracy of our new BCPNN implementation with previous non-spiking implementations (Sandberg et al., 2002) and demonstrate how the attractor memory network can
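For context on the locally stored synaptic traces referred to above, and on the multiple per-synapse state variables mentioned in the abstract, the spike-based BCPNN rule of Tully et al. (2014) can be summarized as a cascade of exponentially filtered traces; the form below is simplified from the cited work and omits its regularization terms. Pre- and postsynaptic spike trains S_i and S_j drive Z traces, which drive E (eligibility) traces, which drive P (probability) traces, with successively longer time constants:

\[
\tau_{z}\frac{dZ_{i}}{dt} = S_{i} - Z_{i}, \qquad
\tau_{e}\frac{dE_{i}}{dt} = Z_{i} - E_{i}, \qquad
\tau_{p}\frac{dP_{i}}{dt} = \kappa\,(E_{i} - P_{i}),
\]

with analogous traces on the postsynaptic side and co-activation traces \(E_{ij}\) and \(P_{ij}\) driven by the product \(Z_{i} Z_{j}\); the parameter \(\kappa\) gates learning and can carry a neuromodulatory signal. The synaptic weight and intrinsic bias are then read out as log-probability estimates:

\[
w_{ij} = \log\frac{P_{ij}}{P_{i}\,P_{j}}, \qquad \beta_{j} = \log P_{j}.
\]

It is these traces, held for every synapse and conventionally updated on every simulation time-step, that make time-driven BCPNN simulation expensive.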
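The event-driven implementation referenced above (Vogginger et al., 2015) exploits the fact that every trace in the cascade decays deterministically between spikes, so its state at the next relevant event can be obtained in closed form rather than by per-time-step integration. The following is a minimal sketch of this deferred-update idea for a single exponential trace; the class and parameter names are illustrative only, and the actual SpiNNaker implementation additionally works in fixed-point arithmetic and handles the full Z/E/P cascade.

    import math

    class ExpTrace:
        """Exponentially decaying trace updated only at spike events.

        Between events the trace obeys tau * dx/dt = -x, whose solution
        x(t) = x(t0) * exp(-(t - t0) / tau) lets us fast-forward the state
        from the time of the last event instead of integrating every
        simulation time-step.
        """

        def __init__(self, tau):
            self.tau = tau      # decay time constant (ms)
            self.value = 0.0    # trace value at time self.last_t
            self.last_t = 0.0   # time of the last update (ms)

        def read(self, t):
            """Return the trace value at time t without changing state."""
            return self.value * math.exp(-(t - self.last_t) / self.tau)

        def spike(self, t, increment=1.0):
            """Fast-forward the trace to time t, then add a spike's contribution."""
            self.value = self.read(t) + increment
            self.last_t = t

    # Usage: a presynaptic Z trace touched only when spikes occur.
    z_pre = ExpTrace(tau=50.0)       # illustrative time constant
    for t in (10.0, 12.0, 80.0):     # presynaptic spike times (ms)
        z_pre.spike(t)
    print(z_pre.read(100.0))         # trace value 20 ms after the last spike

    # Because the Z -> E -> P cascade is linear, the E and P traces also
    # have closed-form solutions (sums of exponentials) that can be
    # evaluated in the same event-driven fashion.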