ORIGINAL RESEARCH published: 31 July 2015, doi: 10.3389/fninf.2015.00019

ANNarchy: a code generation approach to neural simulations on parallel hardware

Julien Vitay¹*, Helge Ü. Dinkelbach¹ and Fred H. Hamker¹,²

¹ Department of Computer Science, Chemnitz University of Technology, Chemnitz, Germany
² Bernstein Center for Computational Neuroscience, Charité University Medicine, Berlin, Germany

Edited by: Andrew P. Davison, Centre National de la Recherche Scientifique, France
Reviewed by: Mikael Djurfeldt, Royal Institute of Technology, Sweden; Marcel Stimberg, École Normale Supérieure, France; Thomas Glyn Close, Okinawa Institute of Science and Technology Graduate University, Japan
*Correspondence: Julien Vitay, Fakultät für Informatik, Professur Künstliche Intelligenz, Technische Universität Chemnitz, Straße der Nationen 62, D-09107 Chemnitz, Germany, [email protected]
Received: 31 March 2015; Accepted: 13 July 2015; Published: 31 July 2015
Citation: Vitay J, Dinkelbach HÜ and Hamker FH (2015) ANNarchy: a code generation approach to neural simulations on parallel hardware. Front. Neuroinform. 9:19. doi: 10.3389/fninf.2015.00019

Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows the user to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The Python interface has been designed to be close to the PyNN interface, while neuron and synapse models can be specified using an equation-oriented mathematical description similar to that of the Brian neural simulator. This information is used to generate C++ code that efficiently performs the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions.

Keywords: neural simulator, Python, rate-coded networks, spiking networks, parallel computing, code generation

1. Introduction

The efficiency and flexibility of neural simulators become increasingly important as the size and complexity of the models studied in computational neuroscience grow. Most recent efforts focus on spiking neurons, either of the integrate-and-fire or Hodgkin-Huxley type (see Brette et al., 2007, for a review). The most well-known examples include Brian (Goodman and Brette, 2008; Stimberg et al., 2014), NEST (Gewaltig and Diesmann, 2007), NEURON (Hines and Carnevale, 1997), GENESIS (Bower and Beeman, 2007), Nengo (Bekolay et al., 2014), and Auryn (Zenke and Gerstner, 2014). These neural simulators focus on the parallel simulation of neural networks on shared-memory systems (multi-core or multi-processor) or distributed systems (clusters), using either OpenMP (open multi-processing) or MPI (message passing interface). Recent work addresses the use of general-purpose graphical processing units (GPU) through the CUDA or OpenCL frameworks (see Brette and Goodman, 2012, for a review). The neural simulators GeNN¹, NCS (Thibeault et al., 2011), NeMo (Fidjeland et al., 2009), and CARLsim (Carlson et al., 2014) in particular provide support for the simulation of spiking and compartmental models on single or multiple GPU architectures.

¹The GeNN project, http://sourceforge.net/projects/genn/.
A common approach to most of these neural simulators is to provide an extensive library of neuron and synapse models which are optimized in a low-level language for a particular computer architecture. These models are combined to form the required network by using a high-level interface, such as a specific scripting language (as in NEST or NEURON) or an interpreted programming language (e.g., Python). As these interfaces are simulator-specific, the PyNN interface has been designed to provide a common Python interface to multiple neural simulators, allowing a better exchange of models between researchers (Davison et al., 2008). The main drawback of this approach is that a user is limited to the neuron and synapse models provided by the simulator: to modify the equations of a model even marginally, one has to write a plugin in a low-level language without breaking the performance of the simulator. This can be particularly tedious, especially for CUDA code on GPUs.

A notable exception is the Brian simulator, which allows the user to completely define the neuron and synapse models using a simple mathematical description of the corresponding equations. Brian uses a code generation approach to transform these descriptions into executable code (Goodman, 2010), allowing the user to implement any kind of neuron or synapse model. The first version of Brian executes the code directly in Python (although some code portions can be generated in a lower-level language) using vectorized computations (Brette and Goodman, 2011), making the simulation relatively slow and impossible to run in parallel on shared-memory systems. The second version, currently in development (Brian 2; Stimberg et al., 2014), proposes a complete code generation approach where the simulation can be implemented in different languages or parallel frameworks. This approach is promising, as it combines flexibility in model design with efficient and parallel simulation performance.
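As an illustration of this equation-oriented approach, a leaky integrate-and-fire neuron can be defined in a few lines. The following is a minimal sketch in Brian 2 syntax; the equations, threshold and parameter values are illustrative choices, not taken from this article:

from brian2 import NeuronGroup, run, ms, mV

# A leaky integrate-and-fire neuron defined entirely by its equations;
# the simulator's code generation turns these strings into executable code.
eqs = '''
dv/dt = (v_rest - v) / tau : volt
v_rest : volt
tau : second
'''

group = NeuronGroup(100, eqs,
                    threshold='v > -50*mV',  # spike condition
                    reset='v = -60*mV',      # reset rule applied after a spike
                    method='euler')          # numerical integration method
group.v_rest = -70*mV
group.tau = 10*ms
group.v = -70*mV

run(100*ms)  # simulate 100 ms of biological time

Because the model is given as plain strings rather than as compiled library code, the simulator is free to translate it into whichever target language or parallel framework it supports.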
Rate-coded networks, however, do not benefit much from the advances of spiking simulators. Rate-coded neurons do not communicate through discrete spike events but through instantaneous firing rates (real values computed at each step of the simulation). Rate-coded simulators are either restricted to classical neural networks (static neurons learning with the backpropagation algorithm) or optimized for particular structures such as convolutional networks. To our knowledge, no rate-coded simulator provides a flexibility similar to what Brian proposes. The Emergent simulator (Aisa et al., 2008) provides some of these features, including parallel computing, and is used in a number of models in computational neuroscience (e.g., O'Reilly and Frank, 2006), but is restricted to the set of neuron and synapse models provided by the Leabra library. Topographica (Bednar, 2009) and CNS (Cortical Network Simulator, Mutch et al., 2010) primarily focus on convolutional networks. DANA (Distributed, Asynchronous, Numerical and Adaptive computing framework, Rougier and Fix, 2012) is a generic solver for distributed equations which can be applied to rate-coded networks. The rate-coded domain also benefits from a wide range of biologically realistic learning rules, such as the Bienenstock-Cooper-Munro (BCM) rule (Bienenstock et al., 1982) or the Oja learning rule (Oja, 1982). Synaptic plasticity in spiking networks, including spike-timing-dependent plasticity (STDP), is an active research field, and the current implementations can be hard to parameterize. Except in cases where synchronization mechanisms take place or where precise predictions at the single-cell level are required, rate-coded networks can provide a valid approximation of the brain's dynamics at the functional level; see for example models of reinforcement learning in the basal ganglia (O'Reilly and Frank, 2006; Dranias et al., 2008; Schroll et al., 2014), models of visual attention (Zirnsak et al., 2011; Beuth and Hamker, 2015) or models of gain normalization (Carandini and Heeger, 2012).

Another reason why rate-coded networks should not be neglected by neural simulators is that advances in computational neuroscience make it possible to aim at complete functional models of the brain, which could be implemented in simulated agents or robots (e.g., Eliasmith et al., 2012). However, spiking networks may not yet be able to perform all the required functions, especially in a learning context. Hybrid architectures, combining rate-coded and spiking parts, may prove very useful to achieve this goal. We consider that there is a need for a parallel neural simulator which should: (1) be flexible for the definition of neuron and synapse models, (2) allow the definition of rate-coded, spiking and hybrid networks, (3) be computationally efficient on CPU- and GPU-based hardware, and (4) be easy to interface with external programs or devices (such as robots).

This article presents the neural simulator ANNarchy (Artificial Neural Networks architect), which allows the user to simulate rate-coded, spiking as well as hybrid neural networks. It proposes a high-level interface in Python, directly inspired by PyNN for the global structure and by Brian for the definition of neuron and synapse models. It uses a C++ code generation approach to perform the simulation, in order to avoid the costs of an interpreted language such as Python. Furthermore, rate-coded and spiking networks raise different problems for parallelization (Dinkelbach et al., 2012), so code generation ensures that the required computations are adapted to the parallel framework. ANNarchy is released under version 2 of the GNU General Public License. Its source code and documentation are freely available.

2. Interface of the Simulator

2.1. Structure of a Network

The interface of ANNarchy focuses on the definition of populations of neurons and their interconnection through projections. Populations are defined as homogeneous sets of neurons.
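For illustration, such a network of populations and projections can be assembled as follows. This is a minimal sketch based on the equation-oriented interface described in this article; the LeakyNeuron model, the geometries and all parameter values are illustrative choices:

from ANNarchy import Neuron, Population, Projection, compile, simulate

# A simple leaky rate-coded neuron: the firing rate r follows the
# weighted sum of excitatory inputs sum(exc) with time constant tau.
LeakyNeuron = Neuron(
    parameters="tau = 10.0",
    equations="tau * dr/dt + r = sum(exc)"
)

# Two 8x8 populations of identical neurons.
pop1 = Population(geometry=(8, 8), neuron=LeakyNeuron)
pop2 = Population(geometry=(8, 8), neuron=LeakyNeuron)

# Connect every neuron of pop1 to every neuron of pop2 ('exc' target).
proj = Projection(pre=pop1, post=pop2, target='exc')
proj.connect_all_to_all(weights=1.0)

compile()        # generate and compile the C++ code
simulate(100.0)  # simulate for 100 ms

Once compile() has been called, the simulation itself runs entirely in the generated C++ code, so the Python interpreter is only involved in defining the network.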