The Brian Simulator


FOCUSED REVIEW, published: 15 September 2009, doi: 10.3389/neuro.01.026.2009

The Brian simulator

Dan F. M. Goodman 1,2 * and Romain Brette 1,2 *

1 Laboratoire Psychologie de la Perception, CNRS and Université Paris Descartes, Paris, France
2 Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France
* Correspondence: [email protected] (D. F. M. Goodman), [email protected] (R. Brette)

"Brian" is a simulator for spiking neural networks (http://www.briansimulator.org). The focus is on making the writing of simulation code as quick and easy as possible for the user, and on flexibility: new and non-standard models are no more difficult to define than standard ones. This allows scientists to spend more time on the details of their models, and less on their implementation. Neuron models are defined by writing differential equations in standard mathematical notation, facilitating scientific communication. Brian is written in the Python programming language, and uses vector-based computation to allow for efficient simulations. It is particularly useful for neuroscientific modelling at the systems level, and for teaching computational neuroscience.

Keywords: Python, spiking neural networks, simulation, teaching, systems neuroscience

Edited by: Jan G. Bjaalie, International Neuroinformatics Coordination Facility, Sweden; University of Oslo, Norway
Reviewed by: Eilif Muller, Ecole Polytechnique Fédérale de Lausanne, Switzerland; Nicholas T. Carnevale, Yale University School of Medicine, USA; Örjan Ekeberg, Royal Institute of Technology, Sweden

Frontiers in Neuroscience | www.frontiersin.org | September 2009 | Volume 3 | Issue 2

About the authors: Dan F. M. Goodman obtained a degree in Mathematics from the University of Cambridge and a Ph.D. in Mathematics from the University of Warwick. He is currently doing post-doctoral research in Theoretical and Computational Neuroscience at the Ecole Normale Supérieure in Paris. His research interests are in the role of spike-timing based coding and computation, and neural simulation technologies. At the moment, he is working on developing spiking neural network models of sound localisation and GPU-based parallel processing algorithms for Brian.

Romain Brette obtained a Ph.D. in Computational Neuroscience from the Paris VI University in France. He did post-doctoral studies in Alain Destexhe's group in Gif-sur-Yvette (France) and Wulfram Gerstner's group in Lausanne (Switzerland). He is now an Assistant Professor of Computational Neuroscience at Ecole Normale Supérieure, Paris. His group investigates spike-based neural computation, theory and simulation of spiking neuron models, with a special focus on the auditory system.

INTRODUCTION

Computational modelling of neural networks plays an increasing role in neuroscience research. Writing simulation code from scratch (typically in Matlab or C) can be very time-consuming and in all but the simplest cases requires expertise in programming and knowledge of neural simulation algorithms. This can discourage researchers from investigating non-standard models or more complex network structures.

Several successful neural simulators already exist (Brette et al., 2007), such as Neuron (Carnevale and Hines, 2006) and Genesis (Bower and Beeman, 1998) for compartmental modelling, and NEST (Gewaltig and Diesmann, 2007) for large-scale network modelling. These simulators implement computationally efficient algorithms and are widely used for large-scale modelling and complex biophysical models. However, computational efficiency is not always the main limiting factor when simulating neural models. Efforts have been made in several simulators to make them as easy to use as possible, such as Neuron's NMODL language (Hines and Carnevale, 2000) and graphical user interface. In many practical cases, however, it still takes considerably more time to write the code than to run the simulations. Minimising learning and development time rather than simulation time implies different design choices, putting more emphasis on flexibility, readability, and simplicity of the syntax.

There are various projects underway to address these issues. All of the major simulation packages now include Python interfaces (Eppler et al., 2008; Hines et al., 2009) and the PyNN project (Davison et al., 2008) is working towards providing a unified interface to them. Because it is both easy and powerful, Python is rapidly becoming the standard high-level language for the field of computational neuroscience, and for scientific computing more generally (Bassi, 2007).

We took those approaches one step further by developing a new neural simulator, Brian, which is an extension package for the Python programming language (Goodman and Brette, 2008). A simulation using Brian is a Python program, either executed from a script or interactively from a Python shell. The primary focus is on making the development of a model by the user as rapid and flexible as possible. For that purpose, the models are defined directly in the main script in their mathematical form (differential equations and discrete events). This design choice addresses the three issues mentioned earlier: flexibility, as the user can change the model by changing the equations; readability, as equations are unambiguous and do not require any specific knowledge of the Brian simulator to understand them; and simplicity of the syntax, as models are expressed in their original mathematical form, with little syntax to learn that is specific to Brian. Computational efficiency is achieved using vector-based computation techniques.

We expect that the availability of this new tool will have the strongest impact in two areas: firstly in systems neuroscience, by making it easy to explore spiking models with custom architectures and properties, and secondly in teaching computational neuroscience, by making it possible to write the code for a functional neural model within a single tutorial, and without strong programming skills required. In this review, we start by describing the core principles of Brian, then we discuss a few concrete examples for which Brian is particularly useful. Finally, we outline directions for future research. Brian can be downloaded from http://briansimulator.org. It is open source and freely available under the CeCILL license (compatible with the GPL license).

THE SPIRIT OF BRIAN

The main goal of the Brian project is to minimise the development time for a neural model, and in particular the time spent writing code, so that scientists can spend their time on the details of their model rather than the details of its implementation. The development time also includes the time spent learning how to use the simulator. Ease of learning is important because it opens the possibility for people who would not previously have considered computational modelling to try it out. It is also helpful for teaching (see Teaching). More generally, it is desirable to minimise the time it takes to read and understand someone else's code. It helps peers to verify and evaluate research based on computational modelling, and it makes it easier for researchers to share their models. Finally, in order to minimise the time spent on writing code, the translation of the model definition into code should be as direct as possible. Execution speed and memory usage of the code are also an issue, but only inasmuch as they put a constraint on what can be done with it; they are therefore most important in the case of large neural networks.

These goals led us to make the following two choices: firstly, to use a standard programming language that is both intuitive and well established, and secondly, to let users define models in a form that is as close as possible to their mathematical definitions. As in other simulation projects, we identified Python as an ideal choice for the programming language. Python is a well established language that is intuitive, easy to learn and benefits from a large user community and many extension packages (in particular for scientific computing and visualisation). While other simulators use Python as an interface to lower-level simulation code, the Brian simulator itself is written in Python.

The most original aspect of Brian is the emphasis on defining models as directly as possible by providing their mathematical definition, consisting of differential equations and discrete events (the effect of spikes) written in standard mathematical notation. One of the simplest examples is a leaky integrate-and-fire neuron, which has a single variable V which decays to a resting potential V0 over time according to the equation tau dV/dt = -(V - V0). If this variable reaches a threshold, V > VT, the neuron fires a spike and then resets to a value VR. If neuron i is connected to neuron j with a synaptic weight Wij, then neuron i firing causes Vj to jump to Vj + Wij. This is represented in Brian with the following code for a group of N neurons with all-to-all connectivity:

    equations = '''
    dV/dt = -(V-V0)/tau : volt
    '''
    G = NeuronGroup(N, equations,
                    threshold='V>VT',
                    reset='V=VR')
    C = Connection(G, G, 'V')

As can be seen in this code, the thresholding condition and the reset operation are also given in standard mathematical form. In fact it is possible to give arbitrary Python expressions for these, including calling user-defined functions. This gives greater generality, but using only mathematical expressions improves clarity. In addition to the mathematical definitions, the physical units are specified explicitly for all variables, enforcing dimensional consistency (here variable V has units of volts). In order to avoid ambiguities and make the code more readable, there is no standard scale for physical quantities (e.g. mV for voltages) and the user provides the units explicitly. For example, the potential of the neurons is set to -70 mV by writing G.V=-70*mV or equivalently G.V=-.07*volt.

Making differential equations the basis of models in Brian may seem like a fundamental restriction.
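The vector-based computation emphasised above can be illustrated with a small NumPy sketch (a hypothetical illustration, not Brian's internal implementation; all names and parameter values are made up): all N membrane potentials of the leaky integrate-and-fire model are advanced in a single Euler step, threshold crossings are detected with a boolean mask, and spikes are propagated by summing the corresponding rows of the weight matrix.

```python
import numpy as np

def simulate_lif(N=100, n_steps=500, dt=0.1, tau=10.0, V0=0.0, VT=1.0,
                 VR=0.0, I=1.1, seed=0):
    """Vectorised Euler simulation of N leaky integrate-and-fire neurons
    with all-to-all connectivity: tau*dV/dt = -(V - V0) + I; a spike in
    neuron i adds W[i, j] to V[j]. (Toy sketch; constants are illustrative.)
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.0, 0.05, size=(N, N))   # synaptic weights Wij
    V = rng.uniform(V0, VT, size=N)           # random initial potentials
    spikes = []
    for step in range(n_steps):
        V += dt / tau * (-(V - V0) + I)       # one Euler step, all neurons at once
        fired = V > VT                        # boolean mask of spiking neurons
        if fired.any():
            V[fired] = VR                     # reset to VR
            V += W[fired].sum(axis=0)         # Vj -> Vj + sum over spiking i of Wij
            spikes.append((step, np.flatnonzero(fired)))
    return V, spikes

V, spikes = simulate_lif()
```

The whole network state is updated with a handful of array operations per timestep, which is the reason vectorisation makes a pure-Python simulator viable.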
Recommended publications
  • Modeling Neuron–Glia Interactions with the Brian 2 Simulator
    bioRxiv preprint, doi: https://doi.org/10.1101/198366; this version posted October 5, 2017. Made available under a CC-BY 4.0 International license.
    Modeling neuron–glia interactions with the Brian 2 simulator. Marcel Stimberg (Sorbonne Universités, UPMC Univ Paris 06, INSERM, CNRS, Institut de la Vision, Paris, France), Dan F. M. Goodman (Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom), Romain Brette (Sorbonne Universités, UPMC Univ Paris 06, INSERM, CNRS, Institut de la Vision, Paris, France), Maurizio De Pittà (Depts. of Statistics and Neurobiology, The University of Chicago, Chicago, USA; Team BEAGLE, INRIA Rhône-Alpes, Villeurbanne, France). October 5, 2017.
    Abstract: Despite compelling evidence that glial cells could crucially regulate neural network activity, the vast majority of available neural simulators ignores the possible contribution of glia to neuronal physiology. Here, we show how to model glial physiology and neuron–glia interactions in the Brian 2 simulator. Brian 2 offers facilities to explicitly describe any model in mathematical terms with limited and simple simulator-specific syntax, automatically generating high-performance code from the user-provided descriptions. The flexibility of this approach allows us to model not only networks of neurons, but also individual glial cells, electrical coupling of glial cells, and the interaction between glial cells and synapses. We therefore conclude that Brian 2 provides an ideal platform to efficiently simulate glial physiology, and specifically, the influence of astrocytes on neural activity.
  • Brian 2: an Intuitive and Efficient Neural Simulator
    bioRxiv preprint, doi: https://doi.org/10.1101/595710; this version posted April 1, 2019. Made available under a CC-BY 4.0 International license.
    Brian 2: an intuitive and efficient neural simulator. Marcel Stimberg 1 *, Romain Brette 1, and Dan F. M. Goodman 2. 1 Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France; 2 Department of Electrical and Electronic Engineering, Imperial College London, UK.
    Abstract: To be maximally useful for neuroscience research, neural simulators must make it possible to define original models. This is especially important because a computational experiment might not only need descriptions of neurons and synapses, but also models of interactions with the environment (e.g. muscles), or the environment itself. To preserve high performance when defining new models, current simulators offer two options: low-level programming, or mark-up languages (and other domain-specific languages). The first option requires time and expertise, is prone to errors, and contributes to problems with reproducibility and replicability. The second option has limited scope, since it can only describe the range of neural models covered by the ontology. Other aspects of a computational experiment, such as the stimulation protocol, cannot be expressed within this framework. "Brian" 2 is a complete rewrite of Brian that addresses this issue by using runtime code generation with a procedural equation-oriented approach. Brian 2 enables scientists to write code that is particularly simple and concise, closely matching the way they conceptualise their models, while the technique of runtime code generation automatically transforms high-level descriptions of models into efficient low-level code tailored to different hardware (e.g.
  • Brian 2, an Intuitive and Efficient Neural Simulator Marcel Stimberg1*, Romain Brette1†, Dan FM Goodman2†
    TOOLS AND RESOURCES. Brian 2, an intuitive and efficient neural simulator. Marcel Stimberg 1 *, Romain Brette 1 †, Dan F. M. Goodman 2 †. 1 Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France; 2 Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom. * For correspondence: [email protected]. † These authors contributed equally to this work.
    Abstract: Brian 2 allows scientists to simply and efficiently simulate spiking neural network models. These models can feature novel dynamical equations, their interactions with the environment, and experimental protocols. To preserve high performance when defining new models, most simulators offer two options: low-level programming or description languages. The first option requires expertise, is prone to errors, and is problematic for reproducibility. The second option cannot describe all aspects of a computational experiment, such as the potentially complex logic of a stimulation protocol. Brian addresses these issues using runtime code generation. Scientists write code with simple and concise high-level descriptions, and Brian transforms them into efficient low-level code that can run interleaved with their code. We illustrate this with several challenging examples: a plastic model of the pyloric network, a closed-loop sensorimotor model, a programmatic exploration of a neuron model, and an auditory model with real-time input. DOI: https://doi.org/10.7554/eLife.47314.001
    Introduction: Neural simulators are increasingly used to develop models of the nervous system, at different scales and in a variety of contexts (Brette et al., 2007). These simulators generally have to find a trade-off between performance and the flexibility to easily define new models and computational experiments.
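The runtime code generation described in this abstract can be sketched in a few lines of Python (a deliberately simplified toy, not Brian 2's actual code-generation pipeline; the function name and signature are invented for illustration): a high-level equation string is turned into Python source for an Euler update function, compiled once, and then reused at every timestep.

```python
import numpy as np

def make_euler_step(diff_expr, var="V", params=(), dt=0.1):
    """Generate a vectorised Euler update function from the right-hand
    side of d{var}/dt, e.g. "-(V - V0)/tau".  The generated source is
    compiled once; the resulting function is called every timestep.
    (Toy sketch of runtime code generation, not Brian 2's implementation.)
    """
    args = ", ".join((var,) + tuple(params))
    src = (
        f"def step({args}):\n"
        f"    return {var} + {dt} * ({diff_expr})\n"
    )
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["step"]

step = make_euler_step("-(V - V0)/tau", params=("V0", "tau"))
V = np.zeros(5)
V = step(V, 1.0, 1.0)   # one Euler step towards V0 = 1: every entry becomes 0.1
```

Brian 2's real pipeline goes much further, emitting low-level code tailored to the target hardware, but the principle is the same: the user's equations, not hand-written inner loops, drive the generated simulation code.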
  • Spike-Based Models of Neural Computation
    Spike-based models of neural computation. Romain Brette, December 2009. Mémoire d'Habilitation à Diriger des Recherches. Jury: Nicolas Brunel (Université Paris V), Vincent Hakim (Ecole Normale Supérieure, Paris), Claude Meunier (Université Paris V), Walter Senn (University of Bern, Switzerland), Simon Thorpe (Université Paul Sabatier, Toulouse).
    Abstract: Neurons compute mainly with action potentials or "spikes", which are stereotypical electrical impulses. Over the last century, the operating function of neurons has been mainly described in terms of firing rates, with the timing of spikes bearing little information. More recently, experimental evidence and theoretical studies have shown that the relative spike timing of inputs has an important effect both on computation and learning in neurons. This evidence has triggered considerable interest in spiking neuron models in computational neuroscience, but the theory of computation in those models is sparse. Spiking neuron models are hybrid dynamical systems, combining differential equations and discrete events. I have developed specific theoretical approaches to study this particular type of model. In particular, two specific properties seem to be relevant for computation: spiking models can encode time-varying inputs into trains of precisely timed spikes, and they are more likely to fire when input spike trains are tightly correlated. To simulate spiking models efficiently, we have developed specific techniques, which can now be used in an open source simulator (Brian). These theoretical and methodological investigations now allow us to address spike-based modeling at a more global and functional level. Since the mechanisms of synaptic plasticity tend to favor synchronous inputs, I propose to investigate computational mechanisms based on neural synchrony in sensory modalities.
  • Code Generation in Computational Neuroscience: a Review of Tools and Techniques
    Blundell, Inga; Brette, Romain; Cleland, Thomas A; Close, Thomas G; Coca, Daniel; Davison, Andrew P; Diaz-Pier, Sandra; Fernandez Musoles, Carlos; Gleeson, Padraig; Goodman, Dan F M; Hines, Michael; Hopkins, Michael W; Kumbhar, Pramod; Lester, David R; Marin, Boris; et al. (2018) Code generation in computational neuroscience: a review of tools and techniques. Frontiers in Neuroinformatics, 12 (68). pp. 1-35. ISSN 1662-5196. Published version, available from Sussex Research Online: http://sro.sussex.ac.uk/id/eprint/79306/. This document is made available in accordance with publisher policies and may differ from the version of record; if you wish to cite this item you are advised to consult the publisher's version.
  • NEURON + Threads Simulations on Multicore Desktops
    The NEURON Simulation Environment, Didactic Presentations: NEURON + Threads, simulations on multicore desktops. Copyright © 1998-2016 N.T. Carnevale and M.L. Hines, all rights reserved.
    Two thread styles are presented. In the "join" style (which NEURON does not use), the main loop repeatedly launches a step function on all threads via multithread_job(step) and plots between steps. In the "condition wait" style, reminiscent of MPI, multithread_job(run) starts every thread on the whole time loop: each thread calls step(nt) for its own NrnThread, synchronises at a barrier, thread 0 alone does the plotting, and a second barrier separates plotting from the next step. A fixed time step advances t to t + dt through setup, triangularisation/reduction, back-substitution, update, and channel-gate stages, with vector operations parallelised across threads. Ideal cache efficiency keeps each CPU's data in its own cache lines (8 or 16 doubles).
    Benchmark, runtime in seconds for 10000 passive compartments on a 4-core 3 GHz x86_64, with cache efficiency off and on:

        Threads   Off    On
        1         4.92   0.45
        2         1.14   0.23
        4         0.29   0.12
        8         0.23   0.09
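The "condition wait" thread style from these slides can be mimicked with Python's threading.Barrier (a control-flow sketch only: NEURON's implementation is in C, the names here are invented, and Python's GIL prevents any real speed-up): every thread runs the whole time loop over its own chunk of compartments, a first barrier ensures all chunks are integrated before thread 0 records, and a second barrier keeps recording ahead of the next step.

```python
import threading
import numpy as np

def run_threaded(n_threads=4, n_steps=10, n_comp=100):
    """Barrier-style thread loop: each thread owns a disjoint chunk of the
    state vector, integrates it each step, and thread 0 alone records
    between the two barriers (the slides' "plot()" on nt->id == 0)."""
    V = np.zeros(n_comp)                         # shared state, split by thread
    chunks = np.array_split(np.arange(n_comp), n_threads)
    barrier = threading.Barrier(n_threads)
    recorded = []

    def run(tid):
        for step in range(n_steps):
            V[chunks[tid]] += 1.0                # each thread "integrates" its chunk
            barrier.wait()                       # all chunks done before recording
            if tid == 0:
                recorded.append(V.mean())        # record on thread 0 only
            barrier.wait()                       # recording done before next step

    threads = [threading.Thread(target=run, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return recorded

means = run_threaded()
```

The two barriers per step are what make this style race-free without a central dispatcher, at the cost of keeping every thread alive for the whole run.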