
FOCUSED REVIEW
published: 15 September 2009
doi: 10.3389/neuro.01.026.2009

The Brian simulator

Dan F. M. Goodman1,2* and Romain Brette1,2*

1 Laboratoire Psychologie de la Perception, CNRS and Université Paris Descartes, Paris, France
2 Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France

“Brian” is a simulator for spiking neural networks (http://www.briansimulator.org). The focus is on making the writing of simulation code as quick and easy as possible for the user, and on flexibility: new and non-standard models are no more difficult to define than standard ones. This allows scientists to spend more time on the details of their models, and less on their implementation. Models are defined by writing differential equations in standard mathematical notation, facilitating scientific communication. Brian is written in the Python programming language, and uses vector-based computation to allow for efficient simulations. It is particularly useful for neuroscientific modelling at the systems level, and for teaching computational neuroscience.

Keywords: Python, spiking neural networks, simulation, teaching, systems neuroscience

Edited by: Jan G. Bjaalie, International Neuroinformatics Coordination Facility, Sweden; University of Oslo, Norway
Reviewed by: Eilif Muller, Ecole Polytechnique Fédérale de Lausanne, Switzerland; Nicholas T. Carnevale, Yale University School of Medicine, USA; Örjan Ekeberg, Royal Institute of Technology, Sweden

Dan F. M. Goodman obtained a degree in Mathematics from the University of Cambridge and a Ph.D. in Mathematics from the University of Warwick. He is currently doing post-doctoral research in Theoretical and Computational Neuroscience at the Ecole Normale Supérieure in Paris. His research interests are in the role of spike-timing based coding and computation, and neural simulation technologies. At the moment, he is working on developing spiking neural network models of sound localisation and GPU-based parallel processing algorithms for Brian. *Correspondence: [email protected]

INTRODUCTION
Computational modelling of neural networks plays an increasing role in neuroscience research. Writing simulation code from scratch (typically in Matlab or C) can be very time-consuming, and in all but the simplest cases it requires expertise in programming and knowledge of neural simulation algorithms. This can discourage researchers from investigating non-standard models or more complex network structures.

Several successful neural simulators already exist (Brette et al., 2007), such as Neuron (Carnevale and Hines, 2006) and Genesis (Bower and Beeman, 1998) for compartmental modelling, and NEST (Gewaltig and Diesmann, 2007) for large-scale network modelling. These simulators implement computationally efficient algorithms and are widely used for large-scale modelling and complex biophysical models. However, computational efficiency is not always the main limiting factor when simulating neural models. Efforts have been made in several simulators to make them as easy to use as possible; examples include Neuron's NMODL language, described in Hines and Carnevale (2000), and its graphical user interface. In many practical cases, however, it still takes considerably more time to write the code than to run the simulations. Minimising learning and development time rather than simulation time implies different design choices, putting more emphasis on flexibility, readability, and simplicity of the syntax.

There are various projects underway to address these issues. All of the major simulation packages now include Python interfaces (Eppler et al., 2008; Hines et al., 2009), and the PyNN project (Davison et al., 2008) is working towards providing a unified interface to them. Because it is both easy and powerful, Python is rapidly becoming the standard high-level language for the field of computational neuroscience, and for scientific computing more generally (Bassi, 2007).

We took those approaches one step further by developing a new neural simulator, Brian, which is an extension package for the Python programming language (Goodman and Brette, 2008). A simulation using Brian is a Python program, either executed from a script or interactively from a Python shell. The primary focus is on making the development of a model by the user as rapid and flexible as possible. For that purpose, the

Frontiers in Neuroscience | www.frontiersin.org | September 2009 | Volume 3 | Issue 2 | 192

models are defined directly in the main script in their mathematical form (differential equations and discrete events). This design choice addresses the three issues mentioned earlier: flexibility, as the user can change the model by changing the equations; readability, as equations are unambiguous and do not require any specific knowledge of the Brian simulator to understand them; and simplicity of the syntax, as models are expressed in their original mathematical form, with little syntax to learn that is specific to Brian. Computational efficiency is achieved using vector-based computation techniques.

We expect that the availability of this new tool will have the strongest impact in two areas: firstly in systems neuroscience, by making it easy to explore spiking models with custom architectures and properties, and secondly in teaching computational neuroscience, by making it possible to write the code for a functional neural model within a single tutorial, without strong programming skills being required. In this review, we start by describing the core principles of Brian, then we discuss a few concrete examples for which Brian is particularly useful. Finally, we outline directions for future research. Brian can be downloaded from http://briansimulator.org. It is open source and freely available under the CeCILL license (compatible with the GPL license).

THE SPIRIT OF BRIAN
The main goal of the Brian project is to minimise the development time for a neural model, and in particular the time spent writing code, so that scientists can spend their time on the details of their model rather than the details of its implementation. The development time also includes the time spent learning how to use the simulator. Ease of learning is important because it opens the possibility for people who would not previously have considered computational modelling to try it out. It is also helpful for teaching (see Teaching). More generally, it is desirable to minimise the time it takes to read and understand someone else's code. This helps peers to verify and evaluate research based on computational modelling, and it makes it easier for researchers to share their models. Finally, in order to minimise the time spent on writing code, the translation of the model definition into code should be as direct as possible. Execution speed and memory usage of the code are also an issue, but only inasmuch as they put a constraint on what can be done with it; they are therefore most important in the case of large neural networks.

These goals led us to make the following two choices: firstly, to use a standard programming language that is both intuitive and well established, and secondly, to let users define models in a form that is as close as possible to their mathematical definitions. As in other simulation projects, we identified Python as an ideal choice for the programming language. Python is a well-established language that is intuitive, easy to learn, and benefits from a large user community and many extension packages (in particular for scientific computing and visualisation). While other simulators use Python as an interface to lower-level simulation code, the Brian simulator itself is written in Python.

The most original aspect of Brian is the emphasis on defining models as directly as possible by providing their mathematical definition, consisting of differential equations and discrete events (the effect of spikes) written in standard mathematical notation. One of the simplest examples is a leaky integrate-and-fire neuron, which has a single variable V which decays to a resting potential V0 over time according to the equation τ dV/dt = −(V − V0). If this variable reaches a threshold (V > VT), the neuron fires a spike and then resets to a value VR. If neuron i is connected to neuron j with a synaptic weight Wij, then neuron i firing causes Vj to jump to Vj + Wij. This is represented in Brian with the following code for a group of N neurons with all-to-all connectivity:

    equations = '''
    dV/dt = -(V-V0)/tau : volt
    '''
    G = NeuronGroup(N, equations,
                    threshold='V>VT',
                    reset='V=VR')
    C = Connection(G, G, 'V')

As can be seen in this code, the thresholding condition and the reset operation are also given in standard mathematical form. In fact, it is possible to give arbitrary Python expressions for these, including calling user-defined functions. This gives greater generality, but using only mathematical expressions improves clarity. In addition to the mathematical definitions, the physical units are specified explicitly for all variables, enforcing dimensional consistency (here variable V has units of volts). In order to avoid ambiguities and make the code more readable, there is no standard scale for physical quantities (e.g. mV for voltages); instead, the user provides the units explicitly. For example, the potential of the neurons is set to −70 mV by writing G.V=-70*mV or equivalently G.V=-.07*volt.

Romain Brette obtained a Ph.D. in Computational Neuroscience from the Paris VI University in France. He did post-doctoral studies in Alain Destexhe's group in Gif-sur-Yvette (France) and Wulfram Gerstner's group in Lausanne (Switzerland). He is now an Assistant Professor of Computational Neuroscience at Ecole Normale Supérieure, Paris. His group investigates spike-based neural computation, theory and simulation of spiking neuron models, with a special focus on the auditory system. *Correspondence: [email protected]
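The leaky integrate-and-fire example above can also be written down directly in plain NumPy, without Brian. The following sketch is our own illustration, not code from the paper: all parameter values, and the constant drive I added so that the neurons actually fire, are arbitrary choices. It integrates τ dV/dt = −(V − V0) with Euler steps, applies the threshold and reset rules, and propagates spikes through a weight matrix W.

```python
import numpy as np

# Illustrative parameters only -- these values are our own choices,
# not taken from the paper.
N = 5                                # number of neurons
dt, tau = 0.1e-3, 10e-3              # time step, membrane time constant (s)
V0, VT, VR = -70e-3, -50e-3, -60e-3  # rest, threshold, reset (volts)
I = 25e-3                            # constant drive (volts), our addition
                                     # so that the neurons fire at all

rng = np.random.default_rng(0)
W = rng.uniform(0.0, 2e-3, (N, N))   # all-to-all synaptic weights (volts)
V = np.full(N, V0)                   # start all neurons at rest
spike_count = 0

for _ in range(int(0.1 / dt)):       # 100 ms of simulated time
    V += dt * (-(V - V0) + I) / tau  # Euler step of tau dV/dt = -(V-V0)+I
    spikes = V > VT                  # threshold condition V > VT
    V[spikes] = VR                   # reset fired neurons to VR
    V += W[spikes].sum(axis=0)       # a spike from i makes V_j jump by W_ij
    spike_count += int(spikes.sum())
```

Brian performs essentially this update loop internally, but generates it from the equations string, handles units, and vectorises over much larger groups.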


Making differential equations the basis of models in Brian may seem like a fundamental restriction. For example, we may want to define an α post-synaptic current by g(t) = (t/τ)e^(1−t/τ) rather than by a differential equation. However, this is just the integrated formulation of the associated kinetic model, τ dg/dt = y − g, τ dy/dt = −y, where presynaptic spikes act on y (that is, cause an instantaneous increase in y). More generally, biophysical neural models can almost always be expressed as differential equations, whose solutions are the post-synaptic potentials or currents. In the same way, short-term plasticity (Tsodyks and Markram, 1997) and spike-timing-dependent plasticity (STDP; Song et al., 2000) can also be defined by differential equations with discrete events. In fact, although STDP is often described by the modification of the synaptic weight for a given pair of presynaptic and postsynaptic spikes, this phenomenological description is ambiguous because it does not specify how pairs interact (e.g. all-to-all or nearest neighbours). These considerations motivated our choice to define all models unambiguously with differential equations and discrete events.

Brian emphasises this approach more strongly than any other simulator package, and it has some crucial benefits. Foremost, it separates the definition of the model from its implementation, which makes the code highly reproducible and readable. The equations define the model unambiguously, independently of the simulator itself, which makes it easy to share the code with other researchers, even though they might use a different simulator or no simulator at all. Second, it minimises the amount of simulator-specific syntax that needs to be learnt. For example, users do not need to look up the names of the model parameters, since they define them themselves. Third, simulating custom models is as simple as simulating standard models. In the following, we illustrate these features in a few concrete cases.

KEY CONCEPTS
Python: A high-level programming language. Programs can be run from a script or interactively from a shell (as in Matlab). It is often used for providing a high-level interface to low-level code. The Python community has developed a large number of third-party packages, including NumPy and SciPy, which are commonly used for efficient numerical and scientific computation.
High-level language: A programming language providing data abstraction and hiding low-level implementation details specific to the CPU, memory management, and other technical details.
Interface: A mechanism for interacting with a program, often simplified and using abstractions, such as a graphical user interface (GUI) for interacting directly with the user, or an application programming interface (API) for interacting via code, possibly written in another language.
Package: A self-contained collection of objects and functions, often written by a third party to provide easily usable implementations of various algorithms.
Script: A file containing a sequence of statements in a high-level language which are run as a whole rather than interactively.
Shell: A computer environment or program in which single statements can be entered interactively, where the results are evaluated immediately and are then available for use in subsequently entered statements.
Vector-based computation: A technique for achieving computational efficiency in high-level languages. It consists of replacing repeated operations by single operations on vectors (e.g. arithmetic operations) that are implemented in a dedicated efficient package (e.g. NumPy for Python).

CASE STUDIES
In this section, we discuss several examples that illustrate the key features of Brian.

INVESTIGATING A NEW NEURON MODEL
Several recent experimental (Badel et al., 2008) and theoretical (Deneve, 2008) studies have suggested that the threshold for spike initiation should increase after each spike and decay to a stationary value. A simple model is given by a differential equation for the threshold VT, τt dVT/dt = VT0 − VT, and an instantaneous increase at spike time, VT ← VT + δ. This model can be simulated directly with Brian, even though it is not a predefined model:

    equations = '''
    dV/dt = (V0-V)/tau : volt
    dVT/dt = (VT0-VT)/tau_t : volt
    '''
    G = NeuronGroup(N, equations,
                    threshold='V>VT',
                    reset='''V = V0
                             VT += delta''')

Two things can be noted in this code: the threshold condition can be any boolean condition on the variables of the model (not only a fixed threshold), and the reset can be any sequence of Python instructions.

INVESTIGATING PLASTICITY MECHANISMS
The mechanisms of spike-timing-dependent plasticity are an active line of research, so that many different models coexist and we might expect new models to emerge in the near future. Figure 1 shows a Brian script replicating some of the results of Song et al. (2000), where a neuron receives random inputs through plastic synapses. The STDP rule is exponential with all-to-all interactions between spike pairs, and results in a bimodal stationary distribution of synaptic weights. Although the exponential STDP rule can be used most simply in Brian with the ExponentialSTDP object, the most general way to define such a mechanism in Brian is to give its equations. As is shown in Figure 1, a synaptic plasticity model is defined by three components: differential equations for synaptic variables (here Apre and Apost); events at presynaptic spike times; and events at postsynaptic spike times. This formalism encompasses a very broad class of plasticity rules. For example, triplet-based STDP rules (Pfister and Gerstner, 2006) can be implemented by inserting a third differential equation; and nearest-neighbour rules can be obtained by replacing the presynaptic code A_pre+=dA_pre by A_pre=dA_pre and the postsynaptic code A_post+=dA_post by A_post=dA_post.

MODELLING A FUNCTIONAL NEURAL SYSTEM
Figure 2 shows a Brian script adapted from Stürzl et al. (2000), which models the neural mechanisms of prey localisation by the sand scorpion. This model is fairly complex and includes in particular noise and delays, which would make equivalent code in Matlab or C very long, whereas the full script takes only about 20 lines with Brian (plus the definition of parameter values). The script illustrates the fact that the code is close to the mathematical definition of the model, which makes it relatively easy to understand. Many (if not most) neurocomputational studies at the systems level are not published along with the code, which might be because


Figure 1 | Implementing spike-timing-dependent plasticity with Brian. The source code in panel (C) shows the complete implementation of the network shown in panel (D), adapted from Song et al. (2000). A set of N = 1000 neurons (the object named input in the code) firing spikes at random times with Poisson statistics are connected via excitatory synapses (the object synapses) to a leaky integrate-and-fire neuron (the object neuron, evolving according to the differential equations in the string eqs_neuron). The initially random synaptic weights are modified according to the STDP rule (the object stdp) in panel (E): for all pairs of pre- and post-synaptic firing times tpre, tpost, the weight of the synapse is modified by an amount Δw depending on the difference in spike times Δt = tpost − tpre. The equation for Δw is Δw = Δpre·exp(−Δt/τpre) when Δt > 0, and Δw = Δpost·exp(Δt/τpost) when Δt < 0. This is equivalent to the following system of equations. Each synapse has two variables Apre, Apost evolving according to the equations τpre dApre/dt = −Apre and τpost dApost/dt = −Apost (the eqs string in the STDP declaration). A presynaptic spike causes Apre to increase by Δpre and the weight w to jump by Apost (the pre string), and a postsynaptic spike causes Apost to increase by Δpost and w by Apre (the post string). The STDP rule causes the population firing rate shown in panel (A) to adapt, and the final distribution of synaptic weights shown in panel (B) to tend towards a bimodal distribution.
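The equivalence stated in the caption can be checked in a few lines of plain Python (not Brian code; the time constants and jump sizes below are arbitrary illustrative values, not those of the paper). The sketch maintains the traces Apre and Apost with exact exponential decay between events, applies the pre/post jumps described above, and then verifies that an isolated spike pair yields Δw = Δpre·exp(−Δt/τpre).

```python
import math

# Arbitrary illustrative constants (not the values used in the paper)
tau_pre, tau_post = 20e-3, 20e-3   # trace time constants (s)
d_pre, d_post = 0.01, -0.0105      # jumps Delta_pre (LTP) and Delta_post (LTD)

def stdp_weight_change(pre_times, post_times):
    """Total weight change of one synapse under the trace-based rule,
    processing spikes in time order with exact exponential decay."""
    events = sorted([(t, 'pre') for t in pre_times] +
                    [(t, 'post') for t in post_times])
    A_pre = A_post = w = 0.0
    t_last = 0.0
    for t, kind in events:
        # decay both traces from the previous event to the current time
        A_pre *= math.exp(-(t - t_last) / tau_pre)
        A_post *= math.exp(-(t - t_last) / tau_post)
        t_last = t
        if kind == 'pre':      # presynaptic spike:
            A_pre += d_pre     #   bump the presynaptic trace,
            w += A_post        #   weight jumps by A_post (depression)
        else:                  # postsynaptic spike:
            A_post += d_post   #   bump the postsynaptic trace,
            w += A_pre         #   weight jumps by A_pre (potentiation)
    return w

# One isolated pair, post 10 ms after pre:
dw = stdp_weight_change([0.0], [10e-3])
expected = d_pre * math.exp(-10e-3 / tau_pre)
```

Here `dw` equals `expected` to machine precision, and reversing the spike order gives a negative (depressing) change, as the pair rule requires.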

the authors find their code less informative and usable for readers than the article describing it. Code readability has several benefits: firstly, the code can be provided to the reviewers, who can run it or at least understand precisely how the authors produced the figures; secondly, the code can be published along with the article and provide information to readers even though they might not be familiar with the simulator; and thirdly, readers can work from the authors' code without rewriting everything from scratch.

TEACHING
We end this section with the experience of one of the authors (Romain Brette) in teaching a computational neuroscience course which included a weekly two-hour tutorial on computers (http://www.di.ens.fr/∼brette/coursneurones/coursneurones.html). A single tutorial was dedicated to learning Python and Brian, while every subsequent one illustrated the lecture which preceded it. For example, in one tutorial the students implemented a spiking version of the sound source localisation model of Jeffress (1948) and another sound localisation model that was recently proposed for small mammals (McAlpine et al., 2001). In another tutorial, the students implemented the synfire chain model described in Diesmann et al. (1999). The year before, when students used Scilab for simulations


Figure 2 | Brian implementation of a model of prey localisation in the sand scorpion, adapted from Stürzl et al. (2000). The movement of the prey causes a surface wave [function wave in the code in panel (C)] which is detected by mechanoreceptors [the red points in panel (A)] at the ends of each of the scorpion's legs. The mechanoreceptors are modelled by noisy, leaky integrate-and-fire neurons with an input current defined by the surface wave displacement at the ends of the legs (object legs in the code, defined by the equations eqs_legs). These neurons send an excitatory signal (the object synapses_ex) to corresponding command neurons [the blue points in panel (A)] modelled by leaky integrate-and-fire neurons (object neurons with equations eqs_neuron), which also receive delayed inhibitory signals (the object synapses_inh) from the three legs on the other side (the for loop). A wave arriving first at one side of the scorpion will cause an excitatory signal to be sent to the command neurons on that side, causing them to fire, and an inhibitory signal to the command neurons on the other side, stopping them from firing when the wave reaches the other side. The result is that command neurons are associated with the directions of the corresponding legs, firing at a high rate if the prey is in that direction. Panel (B) shows the firing rates of the eight command neurons in a polar plot, for a prey at an angle of 60 degrees relative to the scorpion.
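The timing argument in the caption can be made concrete with a little geometry. The sketch below is our own illustration, not the paper's code: the leg layout, wave speed and prey distance are arbitrary stand-in values. It computes when a surface wave from a prey at 60° reaches each of eight evenly spaced legs; the legs on the prey's side are reached first, so their excitation precedes the delayed contralateral inhibition.

```python
import math

# Arbitrary stand-in geometry, not taken from the paper
R = 0.025                       # leg tips on a circle of radius 2.5 cm
v = 50.0                        # assumed surface wave speed (m/s)
prey_angle = math.radians(60)   # prey direction, as in panel (B)
prey_dist = 0.2                 # prey 20 cm from the body centre

prey = (prey_dist * math.cos(prey_angle), prey_dist * math.sin(prey_angle))
legs = [(R * math.cos(i * math.pi / 4), R * math.sin(i * math.pi / 4))
        for i in range(8)]      # eight legs, evenly spaced at 45 degrees

# Arrival time of the wave at each leg = distance from the prey / speed
arrival = [math.hypot(lx - prey[0], ly - prey[1]) / v for lx, ly in legs]

first = min(range(8), key=arrival.__getitem__)   # leg reached first
last = max(range(8), key=arrival.__getitem__)    # leg reached last
```

With these numbers, the leg at 45° (nearest the 60° prey) is reached first and the leg at 225° (on the far side) last, matching the excitation/inhibition pattern the caption describes.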

(a free equivalent of Matlab), the synfire chain model was a 2-month student project. This experience illustrates the fact that choosing to minimise the time spent on learning and writing code has qualitative implications: what was before a full project can now be done by students in a single tutorial, allowing for a much richer course content.

DISCUSSION
With the Brian project, we chose a different emphasis from previous simulation projects: flexibility, readability, and simplicity of the syntax. Two choices followed from those goals: firstly, Brian is an extension package of Python, a simple and popular programming language; and secondly, Brian is equation-oriented, that is, models are written in a form that is as close as possible to their mathematical expression, with differential equations and discrete events. This makes Brian a convenient simulation tool for exploring new spiking neural models, modelling complex neural models at the systems level, and teaching computational neuroscience.
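As noted above, Brian achieves efficiency through vector-based computation. The contrast can be illustrated outside Brian with plain NumPy (parameter values are arbitrary): both loops below perform the same Euler update of N membrane potentials, but the vectorised version issues one NumPy array operation per time step instead of N interpreted Python operations.

```python
import numpy as np

# Arbitrary illustrative parameters
N, steps = 1000, 100
dt, tau, V0 = 0.1e-3, 10e-3, -70e-3
V_init = np.linspace(-80e-3, -60e-3, N)   # spread of initial potentials

# Loop version: one interpreted Python operation per neuron per step
V_loop = V_init.copy()
for _ in range(steps):
    for i in range(N):
        V_loop[i] += dt * (V0 - V_loop[i]) / tau

# Vectorised version: one NumPy array operation per step
V_vec = V_init.copy()
for _ in range(steps):
    V_vec += dt * (V0 - V_vec) / tau
```

Both versions give the same result, but the vectorised one amortises the interpreter overhead over all N neurons at once, which is why the overhead becomes negligible for large networks.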


Python is an interpreted language, and although it is fast, there is an overhead for every Python operation. Brian can achieve good performance by using the technique of vectorisation, the same technique familiar to Matlab users, which makes the interpretation overhead small relative to overall simulation times for large networks. Brian is typically much faster than simulations coded in Matlab, and a little slower than simulations coded in C (Goodman and Brette, 2008). Brian is proportionally slower for small networks, which seems like a reasonable trade-off given that smaller networks tend to take less time to run in absolute terms than larger networks.

KEY CONCEPT
Interpreted language: A language in which code is translated into machine code continuously as the program runs, rather than being entirely translated into machine code once before running (compiled). Interpreted languages are more flexible than compiled ones, but at the cost of being slower.

Currently, the main limitation of Brian is that it cannot run distributed simulations (although independent simulations can be run with job-scheduling software such as Condor; Frey et al., 2002). Our plan for the future is to include some support for parallel processing with the same goals of simplicity and accessibility. For that purpose, we are focusing on using graphics cards with general purpose graphics processing units (GPUs), an inexpensive piece of hardware (around several hundred dollars) consisting of a large number of parallel processing cores (in the hundreds for the latest models; Luebke et al., 2004). Using these cards gives the equivalent of a small parallel cluster in a single machine at much lower cost. Early work on auditory filtering and on the solution of nonlinear differential equations gives an indication of the speed improvements to be gained from using GPUs. Applying a bank of 6000 auditory filters (approximately the number of cochlear filters in the human auditory system) was around 7 times faster with a GPU than with a CPU, and for an even larger bank of filters approached 10–12 times faster (GPUs generally perform better as the data set gets larger). For solving nonlinear differential equations, a key step in the simulation of many biophysically realistic neuron models, GPU performance was between 2 and 60 times better than the CPU, depending on the equations and the number of neurons.

We initiated the Brian project with the goal of providing a simple simulation tool to explore spiking neural models without spending too much time on their implementation. Our hope is that our efforts in developing Brian will make spiking neuron models more widely accessible for systems neuroscience research.

ACKNOWLEDGMENTS
This work was partially supported by the French ANR (ANR HIT), the CNRS and the Ecole Normale Supérieure. The authors would like to thank all those who have been involved in testing Brian and making suggestions for improving it.

REFERENCES
Badel, L., Lefort, S., Brette, R., Petersen, C. C. H., Gerstner, W., and Richardson, M. J. E. (2008). Dynamic I–V curves are reliable predictors of naturalistic pyramidal-neuron voltage traces. J. Neurophysiol. 99, 656–666.
Bassi, S. (2007). A primer on Python for life science researchers. PLoS Comput. Biol. 3, e199.
Bower, J. M., and Beeman, D. (1998). The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System, 2nd Edn. New York, Springer.
Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J. M., Diesmann, M., Morrison, A., Goodman, P. H., Harris, F. C., Zirpe, M., Natschläger, T., Pecevski, D., Ermentrout, B., Djurfeldt, M., Lansner, A., Rochel, O., Vieville, T., Muller, E., Davison, A. P., Boustani, S. E., and Destexhe, A. (2007). Simulation of networks of spiking neurons: a review of tools and strategies. J. Comput. Neurosci. 23, 349–398.
Carnevale, N. T., and Hines, M. L. (2006). The NEURON Book. Cambridge, Cambridge University Press.
Davison, A. P., Brüderle, D., Eppler, J., Kremkow, J., Muller, E., Pecevski, D., Perrinet, L., and Yger, P. (2008). PyNN: a common interface for neuronal network simulators. Front. Neuroinformatics 2, 11.
Deneve, S. (2008). Bayesian spiking neurons I: inference. Neural Comput. 20, 91–117.
Diesmann, M., Gewaltig, M., and Aertsen, A. (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature 402, 529–533.
Eppler, J. M., Helias, M., Muller, E., Diesmann, M., and Gewaltig, M. (2008). PyNEST: a convenient interface to the NEST simulator. Front. Neuroinformatics 2, 12.
Frey, J., Tannenbaum, T., Livny, M., Foster, I., and Tuecke, S. (2002). Condor-G: a computation management agent for multi-institutional grids. Cluster Comput. 5, 237–246.
Gewaltig, O., and Diesmann, M. (2007). NEST (NEural Simulation Tool). Scholarpedia 2, 1430.
Goodman, D., and Brette, R. (2008). Brian: a simulator for spiking neural networks in Python. Front. Neuroinformatics 2, 5.
Hines, M. L., and Carnevale, N. T. (2000). Expanding NEURON's repertoire of mechanisms with NMODL. Neural Comput. 12, 995–1007.
Hines, M. L., Davison, A. P., and Muller, E. (2009). NEURON and Python. Front. Neuroinformatics 3, 1.
Jeffress, L. A. (1948). A place theory of sound localization. J. Comp. Physiol. Psychol. 41, 35–39.
Luebke, D., Harris, M., Krüger, J., Purcell, T., Govindaraju, N., Buck, I., Woolley, C., and Lefohn, A. (2004). GPGPU: General Purpose Computation on Graphics Hardware. Los Angeles, CA, ACM, p. 33.
McAlpine, D., Jiang, D., and Palmer, A. R. (2001). A neural code for low-frequency sound localization in mammals. Nat. Neurosci. 4, 396–401.
Pfister, J., and Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. J. Neurosci. 26, 9673–9682.
Song, S., Miller, K. D., and Abbott, L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 3, 919–926.
Stürzl, W., Kempter, R., and van Hemmen, J. L. (2000). Theory of arachnid prey localization. Phys. Rev. Lett. 84, 5668–5671.
Tsodyks, M. V., and Markram, H. (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc. Natl. Acad. Sci. U.S.A. 94, 719–723.

Conflict of Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Received: 22 April 2009; paper pending published: 26 June 2009; accepted: 08 July 2009; published: 15 September 2009.
Citation: Front. Neurosci. (2009) 3, 2: 192–197. doi: 10.3389/neuro.01.026.2009
Copyright © 2009 Goodman and Brette. This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.
