NEST Conference 2019: A Forum for Users & Developers

Conference Program

Norwegian University of Life Sciences, 24–25 June 2019

NEST Conference 2019 24–25 June 2019, Ås, Norway

Monday, 24th June

11:00 Registration and lunch

12:30 Opening (Dean Anne Cathrine Gjærde, Hans Ekkehard Plesser)
12:50 GeNN: GPU-enhanced neural networks (James Knight)
13:25 Communication sparsity in distributed Spiking Neural Network Simulations to improve scalability (Carlos Fernandez-Musoles)
14:00 Coffee & Posters
15:00 Sleep-like slow oscillations induce hierarchical memory association and synaptic homeostasis in thalamo-cortical simulations (Pier Stanislao Paolucci)
15:20 Implementation of a Frequency-Based Hebbian STDP in NEST (Alberto Antonietti)
15:40 Spike Timing Model of Visual Motion Perception and Decision Making with Reinforcement Learning in NEST (Petia Koprinkova-Hristova)
16:00 What’s new in the NEST user-level documentation (Jessica Mitchell)
16:20 ICEI/Fenix: HPC and Cloud infrastructure for computational neuroscientists (Jochen Eppler)
17:00 NEST Initiative Annual Meeting (members-only)

19:00 Conference Dinner

Tuesday, 25th June

09:00 Construction and characterization of a detailed model of mouse primary visual cortex (Stefan Mihalas)
09:45 Large-scale simulation of a spiking neural network model consisting of cortex, thalamus, cerebellum and basal ganglia on K computer (Jun Igarashi)
10:10 Simulations of a multiscale olivocerebellar spiking neural network in NEST: a case study (Alice Geminiani)
10:30 NEST3 Quick Preview (Stine Vennemo & Håkon Mørk)
10:35 Coffee
10:50 NEST3 Hands-on Session (Håkon Mørk & Stine Vennemo)
11:40 A NEST CPG for pythonic motor control: Servomander (Harry Howard)
12:05 NEST desktop: A web-based GUI for NEST simulator (Sebastian Spreizer)
12:30 Lunch & Posters
13:30 Tools for the Visual Analysis of Simulation Datasets (Óscar David Robles)
13:50 Neural Network Simulation Code for Extreme Scale Hybrid Systems (Kristina Kapanova)
14:10 Accelerating spiking network simulations with NEST (Susanne Kunkel)
14:30 NESTML Tutorial (Charl Linssen)
15:30 NEST Desktop Tutorial (Sebastian Spreizer)
16:30 Closing (Markus Diesmann)

List of poster presentations

Understanding the NEST community (Steffen Graber)

Existence and Detectability of High Frequency Oscillations in Spiking Network Models (Runar Helin)

Large scale modeling of the mouse brain dynamics (Lionel Kusch)

Reconstruction and simulation of the cerebellar microcircuit: a scaffold strategy to embed different levels of neuronal details (Elisa Marenzi)

Computing the Electroencephalogram (EEG) from Point-Neuron Networks (Torbjørn V. Ness)

Improving NEST network construction performance with BlockVector and Spreadsort (Håkon Mørk)

Multi-area spiking network models of macaque and human cortices (Jari Pronold)

Connectivity Concepts for Neuronal Networks (Johanna Senk)

Modeling cortical and thalamocortical mechanisms involved in alpha generation (Renan O. Shimoura)

Clopath and Urbanczik-Senn plasticity: New learning rules in NEST (Jonas Stapmanns, Jan Hahne)

Presenting new NEST benchmarks (Stine B. Vennemo)

NEST Conference 2019

Abstracts

Oral presentations

GeNN: GPU-enhanced neural networks

James Knight1, Thomas Nowotny1
1 Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom

Email: [email protected]

Spiking neural network models tend to be developed and simulated on computers or clusters of computers with standard CPU architectures. Over the last decade, however, GPU accelerators have not only become a common fixture in many workstations, but NVIDIA GPUs are now used in 50% of the top 10 supercomputers worldwide. GeNN [1] is an open-source, cross-platform library written in C++ for generating optimized CUDA code from high-level descriptions of spiking neural networks to run on NVIDIA GPUs.

GeNN allows user-defined neuron, synapse and connectivity models to be specified as C-like code strings, making GeNN highly extensible. Additionally, the user is free to provide their own C++ simulation loop around the simulation kernels GeNN provides. While this allows a lot of flexibility and tight integration with visualization tools and closed-loop robotics, combined with the use of C++ for model definition it can make GeNN a somewhat daunting prospect for users more used to higher-level simulators. To address this, GeNN can also be used as a simulation backend for PyNN (via PyNN GeNN [2]), Brian 2 (via Brian2GeNN [3]) and SpineML [4].
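To make the code-string idea concrete, here is a toy sketch of the general technique (this is illustrative only and is not GeNN's actual API; the `$(var)` placeholder syntax is borrowed from GeNN's documentation, while the generator function and model string are hypothetical):

```python
# A neuron update rule declared as a C-like code string with $(var)
# placeholders for named state variables (GeNN-style, simplified).
LIF_SIM_CODE = "$(V) += (dt / $(tau)) * ($(Vrest) - $(V) + $(Isyn));"

def generate_update_function(name, sim_code, variables):
    """Expand $(var) placeholders into per-neuron array accesses and
    wrap the result in a C update-loop skeleton (toy code generator)."""
    body = sim_code
    for var in variables:
        body = body.replace(f"$({var})", f"{var}[i]")
    return (
        f"void update_{name}(int n, float dt) {{\n"
        "    for (int i = 0; i < n; ++i) {\n"
        f"        {body.strip()}\n"
        "    }\n"
        "}\n"
    )

code = generate_update_function("lif", LIF_SIM_CODE,
                                ["V", "tau", "Vrest", "Isyn"])
print(code)
```

The same principle — substituting user code strings into device-loop templates — is what lets a generator emit CUDA kernels rather than plain C loops.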

In our recent paper [5] we demonstrated how, using GeNN, a single compute node equipped with a high-end GPU could simulate a highly-connected model of a cortical column [6] – consisting of 80×10³ spiking neurons and 0.3×10⁹ synapses – faster and at a lower energy cost than was possible using a CPU-based system. Combined with the comparatively low cost and wide availability of NVIDIA GPU accelerators, this makes GeNN an ideal tool for accelerating computational neuroscience simulations as well as spiking neural network research in machine learning.

References
1. Yavuz E, et al. GeNN: a code generation framework for accelerated brain simulations. Scientific Reports, 6, 18854. doi: 10.1038/srep18854
2. PyNN GeNN [https://github.com/genn-team/pynn_genn]
3. Stimberg M, et al. Brian2GeNN: a system for accelerating a large variety of spiking neural networks with graphics hardware. doi: 10.1101/448050
4. Richmond P, et al. From model specification to simulation of biologically constrained networks of spiking neurons. Neuroinformatics, 12(2), 307–323. doi: 10.1007/s12021-013-9208-z
5. Knight J, et al. GPUs Outperform Current HPC and Neuromorphic Solutions in Terms of Speed and Energy When Simulating a Highly-Connected Cortical Model. Frontiers in Neuroscience, 12(December), 1–19. doi: 10.3389/fnins.2018.00941
6. Potjans TC, et al. The Cell-Type Specific Cortical Microcircuit: Relating Structure and Activity in a Full-Scale Spiking Network Model. Cerebral Cortex, 24(3), 785–806. doi: 10.1093/cercor/bhs358

Copyright 2019 James Knight and Thomas Nowotny under Creative Commons Attribution License (CC BY 4.0).

Communication sparsity in distributed Spiking Neural Network Simulations to improve scalability

Carlos Fernandez-Musoles1, Daniel Coca1, Paul Richmond2
1 Automatic Control and Systems Engineering, The University Of Sheffield, Sheffield, UK
2 Computer Science, The University Of Sheffield, Sheffield, UK

Email: [email protected]

In the last decade there has been a surge in the number of big science projects interested in achieving a comprehensive understanding of the functions of the brain, using Spiking Neural Network (SNN) simulations to aid discovery and experimentation. Such an approach increases the computational demands on SNN simulators: if natural-scale, brain-size simulations are to be realised, it is necessary to use parallel and distributed models of computing. Communication is recognised as the dominant cost of distributed SNN simulations [1,2]. As the number of compute nodes increases, the proportion of time the simulation spends in useful computing (computational efficiency) is reduced, which limits scalability. This work targets the three phases of communication to improve overall computational efficiency in distributed simulations: implicit synchronisation, process handshake and data exchange.

We introduce a connectivity-aware allocation of neurons to compute nodes by modelling the SNN as a hypergraph. Partitioning the hypergraph to reduce interprocess communication increases the sparsity of the communication graph. We propose dynamic sparse exchange [3] as an improvement over simple point-to-point exchange on sparse communications. Results show a combined gain when using hypergraph-based allocation and dynamic sparse communication, increasing computational efficiency by up to 40.8 percentage points and reducing simulation time by up to 73%. The findings are applicable to other distributed complex system simulations in which communication is modelled as a hypergraph network.
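The gain from connectivity-aware allocation can be illustrated with a toy version of the hypergraph cut metric (this sketch is not the authors' code; the example network and allocations are made up). Each neuron's outgoing projection is a hyperedge, and the communication cost of an allocation is the standard "connectivity minus one" metric: each hyperedge contributes one unit per extra partition it spans.

```python
# Toy "connectivity - 1" hypergraph cut metric for neuron-to-node
# allocations (illustrative sketch, not the paper's implementation).

def comm_cost(hyperedges, allocation):
    """Sum over hyperedges of (number of partitions spanned - 1)."""
    cost = 0
    for edge in hyperedges:
        parts = {allocation[n] for n in edge}
        cost += len(parts) - 1  # extra messages when an edge is cut
    return cost

# Two densely intra-connected neuron groups {0,1,2} and {3,4,5},
# with a single projection bridging them.
hyperedges = [{0, 1, 2}, {1, 2, 0}, {3, 4, 5}, {4, 5, 3}, {2, 3}]

round_robin = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}  # ignores structure
partitioned = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}  # connectivity-aware

print(comm_cost(hyperedges, round_robin))   # 5: every edge is cut
print(comm_cost(hyperedges, partitioned))   # 1: only the {2,3} bridge
```

Minimising this metric when partitioning is what increases the sparsity of the resulting communication graph.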

Acknowledgements
The authors would like to thank Dr. Andrew Turner from EPCC for his support in supplying extra computational resources on ARCHER, and Dr. Hans Ekkehard Plesser and Dr. Ivan Raikov for review comments which greatly improved the quality of our paper.

References
1. Brette, R., & Goodman, D. (2012). Simulating spiking neural networks on GPU. Network: Computation in Neural Systems, 23(4), 167–182. doi: 10.3109/0954898X.2012.730170
2. Zenke, F., & Gerstner, W. (2014). Limits to high-speed simulations of spiking neural networks using general-purpose computers. Frontiers in Neuroinformatics, 8(September), 76. doi: 10.3389/fninf.2014.00076
3. Hoefler, T., Siebert, C., & Lumsdaine, A. (2010). Scalable communication protocols for dynamic sparse data exchange. In Proceedings of the 15th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (Vol. 45, p. 159). Bangalore (India). doi: 10.1145/1837853.1693476

Copyright 2019 Fernandez-Musoles, Coca, Richmond under Creative Commons Attribution License (CC BY 4.0).

Sleep-like slow oscillations induce hierarchical memory association and synaptic homeostasis in thalamo-cortical simulations

Cristiano Capone1, Elena Pastorelli1,2, Bruno Golosio3,4, Pier Stanislao Paolucci1
1 INFN Sezione di Roma
2 PhD Program in Behavioural Neuroscience, “Sapienza” University of Rome
3 Dipartimento di Fisica, Università di Cagliari
4 INFN Sezione di Cagliari

Email: [email protected]

The occurrence of sleep passed through the evolutionary sieve and is widespread in animal species [1]. Sleep is known to be beneficial to cognitive and mnemonic tasks, while chronic sleep deprivation is detrimental. Despite the importance of the phenomenon [2], a theoretical and computational approach demonstrating the underlying mechanisms is still lacking.

We implemented in NEST a minimal thalamo-cortical model (see Fig. 1A) and trained it with examples drawn from the MNIST set of handwritten digits. During a training phase, spike-timing-dependent plasticity (STDP) sculpts a pre-sleep synaptic matrix that associates training examples with well-separated groups of cortical neurons and creates top-down synapses toward the thalamus. The network is then induced to produce cortically generated deep-sleep-like slow oscillations (SO), while being disconnected from sensory and lateral stimuli and driven by its internal activity. During sleep up-states, thalamic cells are activated by top-down predictive stimuli produced by cortical neural groups and respond with forward feedback, recruiting other cortical neurons.

If spike-timing-dependent plasticity (STDP) is active during slow oscillations, a differential homeostatic process is observed. It is characterized both by a specific enhancement of connections among groups of neurons associated with instances of the same class (digit) and by a simultaneous down-regulation of the stronger synapses created by the training. This is reflected in a hierarchical organization of post-sleep internal representations. Such effects favor higher performance in retrieval and classification tasks and create hierarchies of categories in integrated representations. The model relies on the coincidence of top-down contextual information with the bottom-up sensory flow during the training phase, and on the integration of top-down predictions with bottom-up thalamo-cortical pathways during deep-sleep-like slow oscillations. Such a mechanism also hints at possible applications to artificial learning systems.

Acknowledgements
This work has been supported by the European Union Horizon 2020 Research and Innovation programme under the FET Flagship Human Brain Project (SGA2 grant agreement No. 785907), Systems and Cognitive Neuroscience subproject, WaveScalES experiment.

References
1. Tononi G, Cirelli C. From synaptic and cellular homeostasis to memory consolidation and integration. Neuron 81, 12-34 (2014)
2. Watson BO, et al. Network Homeostasis and State Dynamics of Neocortical Sleep. Neuron 90, 839–852 (2016)

Copyright 2019 Capone, Pastorelli, Golosio and Paolucci under Creative Commons Attribution License (CC BY 4.0).

Implementation of a Frequency-Based Hebbian STDP in NEST

Alberto Antonietti1, Vasco Orza1, Claudia Casellato2, Egidio D’Angelo2, Alessandra Pedrocchi1
1 Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
2 Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy

Email: [email protected]

The brain has an enormous computing capability and uses neural plasticity to store and elaborate complex information. One of the multiple mechanisms that neural circuits express is Spike-Timing-Dependent Plasticity (STDP), a form of long-term synaptic plasticity exploiting the timing relationship between pre- and post-synaptic neuron spikes. It has been found that, in certain cases, plasticity is driven not only by the timing of the spikes, but also by the oscillation frequency of the inputs. In a recent work [1], experimental findings confirmed that plasticity at the input stage of the cerebellum, between Mossy Fibers and Granular Cells (MF-GrC), occurs through a specific form of frequency-dependent STDP. In fact, the STDP occurs only if the oscillation frequency of the input is constrained within a narrow frequency band (1-10 Hz), centered on the theta band (3-8 Hz), and disappears at higher and lower frequencies.

In this work, we have exploited the experimental results on MF-GrC synaptic plasticity to develop an advanced synapse model of this specific form of frequency-dependent STDP in a well-established neural network simulator, NEST [2,3]. Namely, the synapse model computes the instantaneous input (i.e., MF) frequency, calculates the Fast Fourier Transform (FFT), and identifies the modulation frequency of the input firing rate as the maximum peak of the FFT. Having optimized the parameters of the plasticity, we tested the robustness of the algorithm when the input was not exactly stable but had some degree of variability, with superimposed spiking background noise. In this way, we were able to study the synapse's ability to perform a correct frequency analysis and a subsequent proper modification of the synaptic weight under noisy and variable conditions similar to biological settings.
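The detection step can be sketched in a few lines (a hedged illustration of the idea, not the authors' NEST synapse model; bin width, duration and rates are made-up values): estimate the modulation frequency of an input spike train from the FFT of its binned rate, and enable plasticity only when the peak falls inside the permissive 1-10 Hz band.

```python
# Sketch of FFT-based frequency gating for STDP (illustrative only).
import numpy as np

def modulation_frequency(spike_counts, bin_s):
    """Dominant nonzero frequency (Hz) of a binned spike-count signal."""
    x = spike_counts - np.mean(spike_counts)     # remove DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=bin_s)
    return freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin

def stdp_enabled(f_mod, band=(1.0, 10.0)):
    """Plasticity is permitted only inside the 1-10 Hz band."""
    return band[0] <= f_mod <= band[1]

# 6 Hz sinusoidally modulated Poisson input, 1 ms bins, 2 s of data.
rng = np.random.default_rng(1)
bin_s = 0.001
t = np.arange(0.0, 2.0, bin_s)
rate = 20.0 * (1.0 + np.sin(2 * np.pi * 6.0 * t))   # spikes/s
counts = rng.poisson(rate * bin_s)

f = modulation_frequency(counts, bin_s)
print(f, stdp_enabled(f))   # dominant frequency should lie near 6 Hz
```

A noisy, variable input (as tested in the abstract) simply makes the spectral peak broader and lower relative to the noise floor, which is exactly what the robustness analysis probes.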

This advanced plasticity model will be useful to upgrade the current spiking neural network models of the cerebellum [4].

Acknowledgments
This research has received funding from the EU Horizon 2020 programme under SGA 720270 and SGA 785907 (Human Brain Project SGA1 and SGA2) and from the HBP Partnering Project CerebNEST.

References
1. Sgritta M, et al. (2017) Hebbian Spike-Timing Dependent Plasticity at the Cerebellar Input Stage. J Neurosci. 37(11):2809–2823. doi: 10.1523/JNEUROSCI.2079-16.2016
2. NEST Simulator [www.nest-simulator.org]
3. Peyser A, et al. (2017) NEST 2.14.0 (Version 2.14.0). Zenodo. doi: 10.5281/zenodo.882971
4. Antonietti A, et al. (2016) Spiking Neural Network with Distributed Plasticity Reproduces Cerebellar Learning in Eye Blink Conditioning Paradigms. IEEE TBME. 63(1):210-219. doi: 10.1109/TBME.2015.2485301

Copyright 2019 Antonietti, Orza, Casellato, D’Angelo, Pedrocchi under Creative Commons Attribution License (CC BY 4.0).

Spike Timing Model of Visual Motion Perception and Decision Making with Reinforcement Learning in NEST

Petia Koprinkova-Hristova1, Nadejda Bocheva2
1 Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria
2 Institute of Neurobiology, Bulgarian Academy of Sciences, Sofia, Bulgaria

Email: [email protected]

The paper presents a hierarchical spike timing neural network model developed in the NEST simulator [5] that aims to reproduce human performance in visual tasks with reinforcement learning. It consists of two sub-structures: the first is involved in visual information processing and perceptual decision making, and the second biases the decisions taken according to a received external reinforcement signal. The first sub-model [3] includes multiple layers corresponding to the successive brain areas involved in visual information processing, perception and decision making, from retinal ganglion cells (RGC) through the thalamic relay, including the lateral geniculate nucleus (LGN) and thalamic reticular nucleus (TRN), visual cortex (V1), middle temporal (MT) and medial superior temporal (MST) areas, up to the lateral intraparietal cortex (LIP). The second sub-model [4] mimics a group of subcortical nuclei, the basal ganglia (BG), including the striatum, globus pallidus externa (GPe), subthalamic nucleus (STN), substantia nigra pars reticulata (SNr) and substantia nigra pars compacta (SNc), a structure that modulates the activity of the superior colliculus (SC), which in turn modulates LIP activity and thus the execution of voluntary saccades. For this, the BG uses reward information from the environment. The neurotransmitter (dopamine) released by the SNc is thought to represent a temporal difference (TD) error signal influencing, via dopaminergic synapses, the striatal activity and thus the BG reaction to external reward stimuli. The model connectivity is designed according to available literature sources [1, 2]. We investigate the effect of an external reinforcement signal (generating input current to the SNc layer) on the adaptation of dopaminergic as well as anti-dopaminergic synapses in response to simulated visual stimuli (moving dots).
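For readers unfamiliar with the TD error that the SNc dopamine signal is taken to represent, here is its textbook form (a minimal sketch of the standard formula, not the authors' NEST implementation; the numbers are invented):

```python
# Temporal-difference (TD) error: delta = r + gamma * V(s') - V(s).
# A positive delta models a dopamine burst, a negative delta a dip.

def td_error(reward, v_next, v_current, gamma=0.9):
    """TD error for one transition, with discount factor gamma."""
    return reward + gamma * v_next - v_current

# Unexpected reward with low predicted value -> positive error (burst),
# which would strengthen dopaminergic striatal synapses.
print(td_error(1.0, 0.0, 0.2))   # 0.8
# Omission of a predicted reward -> negative error (dip).
print(td_error(0.0, 0.0, 0.5))   # -0.5
```

In the model described above this scalar is realised as an input current to the SNc layer that modulates plasticity at the striatal synapses.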

Acknowledgements
The reported work is a part of and was supported by the project No DN02/3/2016 “Modelling of voluntary saccadic eye movements during decision making” funded by the Bulgarian Science Fund.

References
1. Igarashi, J. et al. (2011) Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units. Neural Networks 24: 950-960. doi: 10.1016/j.neunet.2011.06.008
2. Krishnan, R. et al. (2011) Modeling the role of basal ganglia in saccade generation: Is the indirect pathway the explorer? Neural Networks 24: 801-813. doi: 10.1016/j.neunet.2011.06.002
3. Koprinkova-Hristova, P. et al. (2019) Spike Timing Neural Model of Motion Perception and Decision Making. Front. Comput. Neurosci. 13: Article 20. doi: 10.3389/fncom.2019.00020
4. Koprinkova-Hristova, P. and Bocheva, N. (2018) Spike Timing Neural Model of Eye Movement with Reinforcement Learning. 13th Annual Meeting of the Bulgarian Section of SIAM (BGSIAM’2018) (under review)
5. Kunkel, S. et al. (2017) NEST 2.12.0 (Version 2.12.0). Zenodo. doi: 10.5281/zenodo.259534

Copyright 2019 Koprinkova-Hristova and Bocheva under Creative Commons Attribution License (CC BY 4.0).

What’s new in the NEST user-level documentation

Jessica Mitchell1

1 Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany

Email: [email protected]

Since the last NEST conference, we have taken our first steps toward improving the NEST user-level documentation. Here we want to share what we have accomplished, including the new documentation website for NEST and updates to the contribution guidelines and model documentation, as well as what is next for the user-level documentation. We want to ensure that the documentation focuses on the community's needs; we will describe how we want to encourage contributions and suggestions to improve NEST documentation.

Copyright 2019 Mitchell under Creative Commons Attribution License (CC BY 4.0).

ICEI/Fenix: HPC and Cloud infrastructure for computational neuroscientists

Jochen Martin Eppler1, Anne Carstensen1, Dirk Pleiter1
1 Institute for Advanced Simulation (IAS), Jülich Supercomputing Centre (JSC), Jülich Research Centre, Jülich, Germany

Email: [email protected]

The ICEI (Interactive Computing e-infrastructure for the Human Brain Project) project is funded by the European Commission under the Framework Partnership Agreement of the Human Brain Project (HBP) [1]. Five leading European supercomputing centres are working together to develop a set of e-infrastructure services that will be federated to form the Fenix Infrastructure [2]. The centres (BSC, CEA, CINECA, ETHZ/CSCS and JUELICH/JSC) committed to perform a coordinated procurement of equipment, licenses for software components and R&D services to realize elements of the e-infrastructure.

Fenix Infrastructure services include elastic, scalable and interactive compute resources and a federated data infrastructure. The distinguishing characteristic of this e-infrastructure is that data repositories and scalable supercomputing systems will be in close proximity and well integrated. User access is granted via a peer-review based allocation mechanism. The HBP is the initial prime and lead user community, guiding the infrastructure development in a use-case driven co-design approach. While the HBP is given programmatic access to 25% of the resources, 15% are provided to European researchers at large via PRACE [3].

The Fenix Infrastructure will deliver federated compute and data services to European researchers by aggregating capacity from multiple resource providers (the Fenix MoU parties) and enabling access from existing community platforms, such as those of the HBP. To achieve these goals, the federation needs to rely on a robust and reliable authentication and authorisation infrastructure (AAI): a trustworthy environment in which users can be managed and granted access to resources securely and as seamlessly as possible.

References
1. Human Brain Project [www.humanbrainproject.eu/en]
2. Fenix Research Infrastructure [www.fenix-ri.eu]
3. Partnership for Advanced Computing in Europe [www.prace-ri.eu]

Copyright 2019 Eppler, Carstensen, Pleiter under Creative Commons Attribution License (CC BY 4.0).

Construction and characterization of a detailed model of mouse primary visual cortex

Yazan N. Billeh, Binghuang Cai, Jung Hoon Lee, Ramakrisnan Iyer, Sergey L. Gratiy, Kael Dai, Nathan W. Gouwens, Christof Koch, Anton Arkhipov, Stefan Mihalas1
1 Allen Institute for Brain Science, Seattle, WA, USA

Email: [email protected]

The complexity of the mammalian cortex is immense. Up to now we have characterized ~100 transcriptomic [1], 17 electrophysiological and 38 morphological types [2]. We have characterized [3] and modeled [4] complicated long-range and short-range [5] connection patterns. At the same time, in vivo recordings reveal relatively complex codes [6]. Is our knowledge of the details of the structure and characterization of components sufficient to understand in vivo properties? What level of resolution is needed for an accurate description of in vivo activity? What features of the in vivo code are robust to not getting some details right? One way to begin to address these questions, and to systematize the information from the literature and our databases, is to construct a model based on this information and see how far we are from predicting additional features.

Using knowledge from previous smaller-scale work [7], we constructed a set of detailed models incorporating both the literature and our databases at multiple levels of resolution: biophysically detailed and point neuron. We demonstrate how, in the process of building these models, specific predictions emerge about structure-function relationships in the cortical circuit, e.g. the functional specialization of connections between multiple cell types and the impact of the cortical retinotopic map on structure-function relationships. We characterized the responses of the model to multiple “out of class” stimuli: modeling the top-down input patterns observed in the connectivity data, and computational optogenetic perturbation of individual cell types and individual neurons. While the model is not perfect, it reproduces a surprising number of findings for which it was not trained.

We plan to make the models, code, and all meta-data resources publicly available via the Allen Institute Modeling Portal (brain-map.org/explore/models). We hope that with the involvement of the community the continued improvement of such a model will be possible.

References
1. Tasic, B. et al. Shared and distinct transcriptomic cell types across neocortical areas. Nature 563, 72–78 (2018)
2. Gouwens, N. W. et al. Classification of electrophysiological and morphological types in mouse visual cortex. bioRxiv 368456 (2018). doi: 10.1101/368456
3. Harris, J. A. et al. The organization of intracortical connections by layer and cell class in the mouse brain. bioRxiv 292961 (2018). doi: 10.1101/292961
4. Knox, J. E. et al. High-resolution data-driven model of the mouse connectome. Netw. Neurosci. (Cambridge, Mass.) 3, 217–236 (2019)
5. Seeman, S. C. et al. Sparse recurrent excitatory connectivity in the microcircuit of the adult mouse and human cortex. Elife 7 (2018)
6. de Vries, S. E. J. et al. A large-scale, standardized physiological survey reveals higher order coding throughout the mouse visual cortex. bioRxiv 359513 (2018). doi: 10.1101/359513
7. Arkhipov, A. et al. Visual physiology of the layer 4 cortical circuit in silico. PLOS Comput. Biol. 14, e1006535 (2018)

Copyright 2019 Yazan N. Billeh, Binghuang Cai, Jung Hoon Lee, Ramakrisnan Iyer, Sergey L. Gratiy, Kael Dai, Nathan W. Gouwens, Christof Koch, Anton Arkhipov, Stefan Mihalas under Creative Commons Attribution License (CC BY 4.0).

Large-scale simulation of a spiking neural network model consisting of cortex, thalamus, cerebellum and basal ganglia on K computer

Carlos Gutierrez1, Zhe Sun2, Hiroshi Yamaura3, Morteza Heidarinejad2, Jun Igarashi2, Tadashi Yamazaki3, Markus Diesmann4, Jean Lienard1, Benoit Girard5, Gordon Arbuthnott1, Hans Ekkehard Plesser6, Kenji Doya1
1 Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
2 Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Saitama, Japan
3 Department of Computer and Network Engineering, The University of Electro-Communications, Tokyo, Japan
4 Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
5 Institut des Systèmes Intelligents et de Robotique, Sorbonne Université and CNRS, Paris, France
6 Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway

Email: [email protected]

Parallel loop circuits consisting of the cortex, thalamus, cerebellum, and basal ganglia play essential roles in the information processing underlying motor and cognitive behaviors. Although modeling studies may be a useful way to investigate how the different regions interact, it is first necessary to establish a way to realize such large-scale neural network simulations using high-performance computing.

To test the computational performance of the large-scale simulation and investigate neural behavior, we developed a spiking neural network model consisting of cortex [1], thalamus, cerebellum [2], and basal ganglia [3, 4, 5] using NEST [6], and evaluated the computational performance on the K computer. We performed the simulation using hybrid parallelization with MPI and OpenMP.

First, we performed a weak-scaling performance test. Although the scaling performance was not ideal, the model expanded to half the size of the mouse brain, with 50 million neurons, using 4900 compute nodes. In a strong-scaling performance test, the elapsed time decreased with increasing numbers of compute nodes. The amount of memory consumed increased with the model size irrespective of the number of compute nodes, which limited the size of the model.
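As a reminder of how such tests are scored, the two efficiency measures can be written down in a few lines (the numbers below are invented for illustration; they are not the K-computer measurements reported in the talk):

```python
# Weak- and strong-scaling efficiency bookkeeping (illustrative values).

def strong_scaling_efficiency(t_base, n_base, t, n):
    """Fraction of ideal speedup retained going from n_base to n nodes
    at fixed total problem size (1.0 is ideal)."""
    return (t_base * n_base) / (t * n)

def weak_scaling_efficiency(t_base, t):
    """Runtime ratio at fixed work per node (1.0 is ideal)."""
    return t_base / t

# Strong scaling: doubling the nodes should ideally halve the runtime.
print(strong_scaling_efficiency(100.0, 1024, 60.0, 2048))  # ~0.833
# Weak scaling: runtime creep at fixed per-node work signals overhead,
# typically from communication.
print(weak_scaling_efficiency(100.0, 125.0))               # 0.8
```

The memory observation in the abstract is the complementary constraint: if per-node memory does not shrink as nodes are added, total model size, not runtime, becomes the binding limit.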

In conclusion, we were able to expand the model to half the size of the mouse brain using NEST and the K computer, which may become a useful tool for understanding interactions in the brain.

Acknowledgements
This work was supported by MEXT as an “Exploratory Challenge on Post-K Computer”.

References
1. Igarashi J, et al. (2014) Abstract of the 44th Annual Meeting of the SfN
2. Yamazaki T, Nagao S. (2012) PLoS ONE 7(3)
3. Lienard J, Girard B. (2014) Journal of Computational Neuroscience 36(3): 445-468
4. Lienard et al. (2018) SBDM/SfN
5. Gutierrez et al. (2018) AINI
6. Gewaltig M-O, Diesmann M. (2007) NEST (NEural Simulation Tool). Scholarpedia 2(4): 1430

Copyright 2019 Carlos Gutierrez, Zhe Sun, Hiroshi Yamaura, Morteza Heidarinejad, Jun Igarashi, Tadashi Yamazaki, Markus Diesmann, Jean Lienard, Benoit Girard, Gordon Arbuthnott, Hans Ekkehard Plesser, Kenji Doya under Creative Commons Attribution License (CC BY 4.0).

Simulations of a multiscale olivocerebellar spiking neural network in NEST: a case study

Alice Geminiani1, Claudia Casellato2, Alessandra Pedrocchi1, Egidio D’Angelo2
1 NEARLab, Dept. of Electronics, Information and Bioengineering, Politecnico di Milano, Milan 20133, Italy
2 Department of Brain and Behavioral Sciences, University of Pavia, Pavia 27100, Italy

Email: [email protected]

Computational models of the cerebellum have become fundamental to investigating cerebellar circuit functioning and motor learning [1]. Specifically, Spiking Neural Networks (SNNs) have the potential to shed light on the mechanisms underlying cerebellum-driven tasks, achieving the best compromise between biological plausibility at different scales and limited computational load. To this aim, we implemented a multiscale olivocerebellar SNN in NEST including realistic modelling elements at multiple scales. Single units in the SNN were modelled as Extended-Generalized Leaky Integrate and Fire (E-GLIF) point neurons, able to generate the complex spiking patterns that are fundamental for cerebellar circuit functioning [2]. The E-GLIF model was optimized to reproduce the electroresponsiveness of the main olivocerebellar neurons, obtaining a cell-specific parameter set for each neuron type. The SNN included the most important cerebellar neural populations, among them the interneurons in the granular, molecular and nuclear layers. Network connectivity was organized based on neuron morphologies, according to a cerebellar scaffold [3], integrated with a network model of the inferior olive. The SNN was subdivided into two microcircuits, representing two cerebellar cortical microzones with their corresponding nuclear and olivary sub-nuclei. The network was endowed with ad hoc plasticity rules, including standard cortical and nuclear plasticity sites, and a novel molecular layer plasticity site derived from recent experimental evidence [1]. The resulting multiscale olivocerebellar network was embedded in a closed-loop control system and challenged in simulations of eyeblink classical conditioning, a standard cerebellum-driven task. NESTML and the C++ NEST core were used to develop the E-GLIF model, and SNN simulations were implemented in PyNEST using High-Performance Computing resources.
The results demonstrate how associative motor learning emerges from the interaction of multiple mechanisms in the olivocerebellar circuit at different scales, which can be elucidated thanks to the use of realistic computational models. The present work can serve as an example for the NEST community of how simplified spiking models can be exploited for multi-scale analysis of the brain, providing a reference case study in which realistic single-neuron dynamics, network topology and activity, and synaptic plasticity are linked to sensorimotor behaviors.

Acknowledgements
This project has been developed within the CerebNEST Partnering Project of the Human Brain Project and has received funding from Human Brain Project SGA1 and SGA2. High-Performance Computing resources were provided by CINECA (the Italian supercomputing center) within the ISCRA-C project NEST-EBC.

References
1. D’Angelo E, et al. (2016) Modeling the cerebellar microcircuit: New strategies for a long-standing issue. Front. Cell. Neurosci., 10: 1–29. doi: 10.3389/neuro.03.005.2009
2. Geminiani A, et al. (2018) Complex dynamics in simplified neuronal models: reproducing Golgi cell electroresponsiveness. Front. Neuroinform., 12(88): 1–19. doi: 10.3389/fninf.2018.00088
3. Casali S, et al. (2019) Reconstruction and Simulation of a Scaffold Model of the Cerebellar Network. bioRxiv. doi: 10.1101/532515

Copyright 2019 Geminiani, Casellato, Pedrocchi, D’Angelo under Creative Commons Attribution License (CC BY 4.0) NEST Conference 24–25 June 2019

NEST 3 quick preview

Håkon Mørk1, Stine Brekke Vennemo1, Hans E Plesser1,2 1 Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway 2 Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany Email: [email protected], [email protected]

NEST 3 is the next major version of NEST. With it, big changes are made not only to the user interface, but also to the inner workings of NEST. The PyNEST interface introduces new concepts such as GIDCollections and Parameters. The PyNEST Topology module is integrated into the standard PyNEST package, and the creation and connection of layers are now performed by calling the standard functions. Subnets are completely removed, both from the interface and from the NEST kernel.

NEST 3 has been in the making for a long time, but most of the planned features have now been implemented and are ready to be tested by users. Here we will give a quick preview of the new features and changes.

Acknowledgements This project has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreement No. 720270 (Human Brain Project SGA1) and No. 785907 (Human Brain Project SGA2).

Copyright 2019 Mørk, Vennemo, Plesser under Creative Commons Attribution License (CC BY 4.0).

A NEST CPG for pythonic motor control: Servomander

Harry Howard1 1 Tulane University, New Orleans, LA

Email: [email protected]

Singh et al. [1] recently implemented a finite-state automaton in NEST via PyNN, running in Python 2.7 on a BeagleBone Black processor, to control the arm of a Robotis Bioloid robot in picking up an object and placing it elsewhere. Given the recent appearance of Adafruit's CircuitPython [2], we explore a direct pythonic link between NEST and the control of the small servo motors typically used in robotic applications.

A servo contains a small DC motor that turns 180° around its shaft, 90° in either direction. CircuitPython exposes two interfaces for achieving this rotation. The one that is more intuitive for the human programmer simply rotates the motor clockwise or counterclockwise in degrees. The other, more intuitive for networks of spiking neurons, varies the width of a pulse that is sent to the motor's internal controller every 20 ms. The length of the pulse determines how far the motor turns: the minimum pulse of 1 ms turns the shaft counterclockwise all the way to its 0° position; the maximum pulse of 2 ms turns the shaft clockwise all the way to its 180° position. Intermediate widths correspond to intermediate positions, with 1.5 ms landing at 90°. Such pulse-width modulation (PWM) thus occupies from 5% to 10% of the 20 ms timing cycle; this fraction is known as the motor's duty cycle.
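The angle-to-pulse mapping described above can be sketched in plain Python; the helper names below are our own, not part of CircuitPython:

```python
def servo_pulse_ms(angle_deg):
    """Map a shaft angle (0-180 degrees) to a control pulse width in ms:
    1 ms at 0 degrees, 2 ms at 180 degrees, linear in between."""
    if not 0.0 <= angle_deg <= 180.0:
        raise ValueError("angle out of range")
    return 1.0 + angle_deg / 180.0


def duty_fraction(pulse_ms, period_ms=20.0):
    """Fraction of the 20 ms timing cycle occupied by the pulse,
    i.e. the duty cycle (0.05 for 1 ms, 0.10 for 2 ms)."""
    return pulse_ms / period_ms
```

In an actual deployment, a value derived this way would be handed to CircuitPython's PWM machinery rather than computed by hand.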

The goal for a spiking neuron is therefore to manipulate its input and parameters (if necessary) to create a proportional duty cycle, simply counting spikes to be input into CircuitPython's PWM method. We would like to do this in as biologically realistic a way as possible, and are drawn to a mainstay of biological motor control, the central pattern generator or CPG. A central pattern generator produces a rhythmic output in the absence of rhythmic input for the control of activities such as chewing, breathing or locomotion. Harischandra et al. [3] develop a CPG for salamander locomotion in NEST using a conductance-based leaky integrate-and-fire neuron model (iaf_cond_exp_sfa_rr), but test it on a virtual model of the salamander body. We replicate Harischandra et al.'s network with servos representing a salamander's legs.
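As a caricature of the half-center CPG idea (a toy, not the conductance-based network of [3]), a minimal two-sided rhythm generator can be written as:

```python
def half_center(n_steps, fatigue_limit=5):
    """Toy half-center rhythm generator: the active side suppresses the
    other until its own fatigue builds up, then the sides swap. Returns
    the index (0 or 1) of the active side at each step, producing a
    rhythmic alternation without any rhythmic input."""
    active, fatigue = 0, 0
    trace = []
    for _ in range(n_steps):
        trace.append(active)
        fatigue += 1
        if fatigue >= fatigue_limit:
            active, fatigue = 1 - active, 0
    return trace
```

Each side could drive one leg's servo, with the alternation period set by the fatigue limit.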

References 1. Singh, Nishant, et al. (2017) Neuron-Based Control Mechanisms for a Robotic Arm and Hand. International Journal of Computer, Electrical, Automation, Control and Information Engineering 11(2):221-229. 2. Adafruit CircuitPython [https://circuitpython.readthedocs.io/en/3.x/] 3. Harischandra, Nalin, et al. (2011) Sensory Feedback Plays a Significant Role in Generating Walking Gait and in Gait Transition in Salamanders: A Simulation Study. Frontiers in Neurorobotics 5:3. doi: 10.3389/fnbot.2011.00003

Copyright 2019 Harry Howard under Creative Commons Attribution License (CC BY 4.0).

NEST desktop: A web-based GUI for NEST simulator

Sebastian Spreizer1,2, Stefan Rotter1, Benjamin Weyers3, Hans E Plesser4, Markus Diesmann2 1 Bernstein Center Freiburg, Faculty of Biology, University of Freiburg, Freiburg, Germany 2 Institute of Neuroscience and Medicine (INM-6) Jülich Research Centre, Jülich, Germany 3 Department IV - Computer Science, Human-Computer Interaction, University of Trier, Trier, Germany 4 Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway Email: [email protected]

In the past few years, we have developed a web-based graphical user interface (GUI) for the NEST simulation code: NEST desktop [1]. This GUI enables the rapid construction, parametrization, and instrumentation of neuronal network models typically used in computational neuroscience. The primary objective was to create a tool of classroom strength that allows users to rapidly explore neuroscience concepts without the need to learn a simulator control language at the same time.

Currently, NEST desktop requires a full NEST installation on the user's machine, limiting uptake by a non-expert audience and restricting the networks studied to those that can be simulated on a laptop or desktop machine. To ease the use of the app and extend the range of simulations possible with NEST desktop, we want to separate the GUI from the simulation kernel, rendering the GUI in the user's web browser while the simulation kernel runs on a centrally maintained server.

NEST desktop has high potential to become a widely used GUI for the NEST simulator. To achieve this goal, in a first step all tools have to agree on a communication scheme and data format (JSON) for interacting with a NEST instance running server-side under session management, e.g. in Docker or Singularity containers. Next, previously developed tools, namely the NEST Instrumentation App [2] and VIOLA [3], will be integrated as plugins into the app to extend its visual modeling and analysis functionality. In the course of this work, the use of an in situ pipeline [4] developed for neuronal network simulators will be considered, to also enable the app to receive larger data sets from NEST during a running simulation, enhancing the interactivity of the app for large simulations on HPC facilities as well.
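A JSON payload for such a communication scheme might look roughly as follows; every key name in this sketch is an illustrative assumption, not the actual NEST desktop protocol:

```python
import json

# Hypothetical client request describing a network and a simulation run.
# The schema (key names, nesting) is invented for illustration only.
request = {
    "network": {
        "nodes": [
            {"model": "iaf_psc_alpha", "n": 100, "params": {"C_m": 250.0}},
        ],
        "connections": [
            {"source": 0, "target": 0, "rule": "fixed_indegree", "indegree": 10},
        ],
    },
    "simulation": {"time_ms": 1000.0, "resolution_ms": 0.1},
}

payload = json.dumps(request)   # serialized form sent to the server
received = json.loads(payload)  # server-side round trip
```

The point of agreeing on such a format is that GUI, instrumentation tools and the server-side NEST instance can all evolve independently as long as they speak the same schema.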

We plan to develop and maintain NEST desktop sustainably. Therefore, we intend to integrate NEST desktop into the HBP infrastructure. Additionally, the open-source code of NEST desktop will be published as a standalone distribution for teaching and training.

Acknowledgements This project has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreement No. 785907 (Human Brain Project SGA2)

References 1. NEST desktop [https://github.com/babsey/nest-desktop] 2. NEST InstrumentationApp [https://github.com/compneuronmbu/NESTInstrumentationApp] 3. Senk J, et al. (2018) VIOLA—A Multi-Purpose and Web-Based Visualization Tool for Neuronal-Network Simulation Output. Front. Neuroinform. doi: 10.3389/fninf.2018.00075 4. Oehrl et al. (2018) Streaming Live Neuronal Simulation Data into Visualization and Analysis. Conference Paper doi: 10.1007/978-3-030-02465-9_18

Copyright 2019 Spreizer under Creative Commons Attribution License (CC BY 4.0).

Tools for the Visual Analysis of Simulation Datasets

Sergio Galindo1,2, Juan José García-Cantero1,2, Cristian Rodríguez1,2, Pablo Toharia2,3, Juan Pedro Brito2,3, Óscar David Robles1,2, Susana Mata1,2, Luis Pastor1,2 1 Department of Computer Engineering, Universidad Rey Juan Carlos, Madrid, Spain 2 Center for Computational Simulation, Universidad Politécnica de Madrid, Madrid, Spain 3 Universidad Politécnica de Madrid

Email: [email protected]

The use of computational models in neuroscience is often the best (or only) option for many studies, such as analyzing the effects of modifying network parameters or structure at different levels of detail. As in any simulation environment, the more complex the modeled network or phenomenon under study is, the more difficult the analysis of results becomes. In these cases, having adequate analysis tools can be essential for achieving the desired research goals.

This presentation describes three currently available tools designed for the analysis of computational neuroscience simulation results: ViSimpl [1], MSPViz [2] and ConGen. ViSimpl is a set of visual tools developed to facilitate the analysis of simulation results, aiming precisely at the cases where complexity is an added problem. Among its most salient features, ViSimpl provides 3D particle-based renderings that allow visualizing simulation data embedded within their associated spatial and temporal environments. MSPViz, the second tool, is a visualization tool specifically developed for models of structural plasticity, supplying views at different levels of abstraction. Last, ConGen provides a framework for the multiscale editing and visualization of neuronal networks, allowing the definition of cell populations and the establishment of connections among them. Additionally, ConGen allows grouping neuron populations into super-populations, creating hierarchical structures that allow inspecting selected elements at finer detail while displaying the rest of the hierarchy at a coarser level. In the HBP context [3], these tools will allow neuroscientists to analyze the data generated by simulations while they are still running, using for that purpose the in-situ pipeline [4] being developed in this project.

Acknowledgements This work has received funding from the Spanish Ministry of Economy and Competitiveness under grants C080020-09 (Cajal, Spanish partner of the Blue Brain Project initiative from EPFL), TIN2014-574811 and TIN2017-83132, as well as from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2). The authors want to thank the Simlab Neuroscience: Multiscale Simulation and Architectures Team at the Forschungszentrum Jülich for their ideas and contributions.

References 1. Galindo Sergio E, et al. (2016) ViSimpl: Multi-View Visual Analysis of Brain Simulation Data. Frontiers in Neuroinformatics, 10. doi: 10.3389/fninf.2016.00044 2. Diaz-Pier S., et al. (2016) Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity. Frontiers in Neuroanatomy, 10. doi: 10.3389/fnana.2016.00057 3. Markram H, (2012) The Human Brain Project. Scientific American, 306. doi: 10.1038/scientificamerican0612-50 4. Oehrl S. et al. (2018) Streaming Live Neuronal Simulation Data into Visualization and Analysis. High Performance Computing, Springer: 258-272. doi: 10.1007/978-3-030-02465-9_18

Copyright 2019 Galindo, García-Cantero, Rodríguez, Toharia, Brito, Robles, Mata, Pastor under Creative Commons Attribution License (CC BY 4.0).

Neural Network Simulation Code for Extreme Scale Hybrid Systems

Kristina G. Kapanova1, Stoyan Markov1 1 National Center for Supercomputing Applications, Acad. G. Bonchev 25A, Sofia 1113, Bulgaria

Email: [email protected]

In this talk we present several new contributions within a simulator of neural networks that reduce computational and communication times as well as the significant memory demands. First, we develop a new representation of the network, in which the cell data is defined such that synapses are treated as axonal with respect to their sending neurons and as dendritic with respect to their receiving neurons. Consequently, we express each cell as vectors of all incoming spike arrival times and spike generation times. Each incoming signal represents a local change of the electric characteristics of the neuronal membrane, computed through the Hodgkin–Huxley model, and is linearly combined with its respective synaptic weight. To form the global potential of each cell, the local potentials, which are displaced in time, are summed. The time displacement for each signal is defined as the exact arrival time of the signal combined with the delay of the signal transmission from the pre-synaptic to the post-synaptic cell. The cell is therefore described as an independent object. Secondly, the simulation time is divided into equal periods of 300 ms each, during which all spikes are generated and computed. At the completion of each period, all newly generated spikes are represented in the respective cells' incoming-signal vectors, with the state of connections being refreshed every 300 ms period, thus maintaining a continuous process. This allows us to compute each cell's membrane reaction independently and simultaneously, practically with no communication between cells. The only movement of spike-time data happens during the refresh period, realizing many-to-many communication. Thirdly, the entire network is collapsed into multiple smaller subnetworks, in a manner where each thread during the simulation periods takes a certain number of neurons, without exceeding half of the L2 cache size of the respective processor.
Once a thread finishes its work, it processes the next batch of neurons belonging to the subnetwork for that processor which are ready for computing. During the computation of the global potentials, each spike is temporarily stored in the L3 cache and sent to main memory only once the cache is nearly full. This procedure works with all types of networks, including the ability to handle recurrent connections outside the subnetwork. Finally, within the previously introduced pipelined computational process, consisting of several stages running in parallel and synchronized via a global clock, we present a method to compute all types of incoming signals within the 300 ms period. We utilize multi-threaded programming and non-blocking communication to optimize the overlap between computation and communication in the simulation. To validate our approach, several numerical experiments have been carried out.
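The time-displacement bookkeeping described above can be sketched as follows; the exponential kernel is a toy stand-in for the Hodgkin–Huxley membrane response actually used:

```python
import math

PERIOD_MS = 300.0  # refresh period from the abstract


def psp_kernel(t_ms, tau_ms=5.0):
    """Toy post-synaptic response kernel (causal exponential decay).
    Stands in for the Hodgkin-Huxley membrane computation; only the
    bookkeeping around it is the point of this sketch."""
    return math.exp(-t_ms / tau_ms) if t_ms >= 0.0 else 0.0


def global_potential(t_ms, inputs):
    """Sum the time-displaced local potentials of one cell.

    `inputs` is a list of (spike_time_ms, delay_ms, weight) tuples; each
    signal is shifted by its spike time plus the transmission delay, so the
    cell can be evaluated independently of all other cells."""
    return sum(w * psp_kernel(t_ms - (t_spk + d)) for t_spk, d, w in inputs)
```

Because each cell only reads its own incoming-signal vector, all cells in a 300 ms period can be evaluated in parallel, with spike-time exchange deferred to the refresh step.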

Acknowledgements The work was supported by PRACE-5IP

Copyright 2019 Kapanova, Markov under Creative Commons Attribution License (CC BY 4.0).


Accelerating spiking network simulations with NEST

Jari Pronold1, Susanne Kunkel2 1 Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany 2 Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway Email: [email protected]

The 5th-generation simulation kernel of NEST (NEST 5g) [1], which was released with NEST 2.16 [2], achieves perfect weak scaling in terms of memory usage and good weak scaling in terms of runtime on modern supercomputers. However, total simulation times still exceed the actual simulated time by orders of magnitude. Analysis of the contributions of the three phases of a NEST simulation (neuron update, communication of spikes, and delivery of spikes to their thread-local targets) reveals that spike delivery takes up most of the total simulation time.

Spike delivery in NEST is rather complex and involves different steps. Each thread reads the entire MPI buffer containing the most recent spikes and first checks whether the currently considered spike has a thread-local target. If this is the case, the thread needs to access its private division of the connection infrastructure and then the Connector storing all thread-local synapses that are of the same type as the target synapse. The spike is then delivered to the target synapse, which relays it to the target neuron, and eventually the target neuron adds the spike to the correct position of its spike ring buffer. If the synapse exhibits some type of synaptic plasticity, it performs an update of its state variables. In case the spike has further thread-local targets, it is successively passed on to these targets. As target synapses and neurons differ from one spike delivery to the next, access to memory during spike delivery is unpredictable, which results in frequent cache misses and hence degrades the performance.
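The spike ring buffer at the end of this delivery chain can be sketched as follows; this is a simplified stand-in for NEST's actual data structure, shown only to illustrate the access pattern:

```python
class SpikeRingBuffer:
    """Minimal sketch of a delay-resolved spike ring buffer: a spike with a
    delay of d steps is accumulated d slots ahead of the read position, and
    each simulation step reads and clears exactly one slot."""

    def __init__(self, max_delay_steps):
        self.slots = [0.0] * max_delay_steps
        self.pos = 0  # read position for the current time step

    def add_spike(self, delay_steps, weight):
        # Accumulate the weighted spike at its future arrival slot.
        idx = (self.pos + delay_steps) % len(self.slots)
        self.slots[idx] += weight

    def advance(self):
        # Read and clear the current slot, then move one step forward.
        value = self.slots[self.pos]
        self.slots[self.pos] = 0.0
        self.pos = (self.pos + 1) % len(self.slots)
        return value
```

Because successive deliveries touch buffers of different neurons scattered in memory, it is exactly this kind of access that causes the cache misses discussed above.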

In my talk I will present our recent work on accelerating the spike delivery in NEST. We have investigated techniques of decoupling the different steps of the spike-delivery phase yielding an assembly-line style processing of spikes, which in turn enables applying strategies to improve cache performance such as software prefetching. I will present preliminary benchmarking results that demonstrate the efficiency of the new techniques.

Acknowledgements Part of the work presented in this talk is performed in collaboration with Jakob Jordan, Brian Wylie, Itaru Kitayama, Mitsuhisa Sato, and Markus Diesmann. Access to the HOKUSAI system was made possible by Jun Igarashi.

References 1. Jordan J, et al. (2018) Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers. Front. Neuroinform. 12:2. doi: 10.3389/fninf.2018.00002 2. Linssen C, et al. (2018) NEST 2.16.0 (Version 2.16.0). Zenodo. doi: 10.5281/zenodo.1400175

Copyright 2019 Pronold, Kunkel under Creative Commons Attribution License (CC BY 4.0).

NESTML: An extensible modeling language for biologically plausible neural networks

Charl Linssen1,2, Jochen M. Eppler1, Abigail Morrison1-3 1. Simulation Lab Neuroscience, Jülich Supercomputer Centre, Institute for Advanced Simulation, Jülich-Aachen Research Alliance, Forschungszentrum Jülich GmbH 2. Institute for Neuroscience and Medicine INM-6 / INM-10, Jülich-Aachen Research Alliance, Institute for Advanced Simulation IAS-6, Forschungszentrum Jülich GmbH 3. Institute of Cognitive Neuroscience, Ruhr-University Bochum, Germany Email: [email protected]

NESTML [1, 2] was developed to address the maintainability issues that follow from an increasing number of models, model variants, and increased model complexity in computational neuroscience. Our aim is to ease the modelling process for neuroscientists both with and without prior training in computer science. This is achieved without compromising on performance through automatic source-code generation, allowing the same model file to target different hardware or software platforms by changing a single command-line parameter. While originally developed in the context of the NEST Simulator [3], the language itself as well as the associated toolchain are lightweight, modular and extensible, by virtue of using a parser generator and an internal abstract syntax tree (AST) representation, which can be operated on using well-known patterns such as visitors and rewriting.

A typical workflow consists of the following steps: Initially, a model of interest is identified. This model might describe the dynamical behaviour of a single neuron, or the plasticity rules governing a synapse. The model description is typically in mathematical or textual form, and needs to be converted by the neuroscientist into a format following the NESTML syntax. It is then processed by invoking the toolchain, which generates optimised code for the target platform (e.g. NEST running on a high-performance computing cluster). That code is then dynamically loaded or compiled as part of the simulation framework (in this case, NEST). The model is now ready for use in the simulator, and can be instantiated within a simulation script, written e.g. using the PyNEST API [4], before starting the simulation and performing subsequent analysis.

NESTML is open-sourced under the terms of the GNU General Public License v2.0 and is publicly available at https://github.com/nest/nestml. Extensive documentation and automated testing are in place, both for the language itself and for the associated processing toolchain.
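To illustrate the kind of model description the toolchain consumes, a minimal leaky integrate-and-fire neuron in NESTML-like syntax is sketched below. The sketch is indicative only; consult the NESTML documentation for the exact grammar of the current release:

```nestml
neuron iaf_sketch:
    state:
        V_m mV = -70 mV        # membrane potential

    equations:
        V_m' = -(V_m - E_L) / tau_m + I_stim / C_m

    parameters:
        E_L mV = -70 mV        # resting potential
        tau_m ms = 10 ms       # membrane time constant
        C_m pF = 250 pF        # membrane capacitance
        V_th mV = -55 mV       # spike threshold

    input:
        I_stim pA <- continuous

    output:
        spike

    update:
        integrate_odes()
        if V_m > V_th:
            V_m = E_L
            emit_spike()
```

Note how physical units are part of the language, so the toolchain can check dimensional consistency of the equations before generating code.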
Active user support is provided via the GitHub issue tracker and the NEST user mailing list.

Acknowledgements This project has received funding from the Helmholtz Association through the Helmholtz Portfolio Theme “Supercomputing and Modeling for the Human Brain” and the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 720270 (HBP SGA1) and No. 785907 (HBP SGA2).

References 1. D. Plotnikov et al. (2016) NESTML: a modeling language for spiking neurons. Modellierung 2016, March 2-4, Karlsruhe, Germany, 93–108. doi: 10.5281/zenodo.1412345 2. K. Perun et al. (2018) NESTML, Version 2.4, Zenodo. doi: 10.5281/zenodo.1319653 3. M.-O. Gewaltig & M. Diesmann (2007) NEST (Neural Simulation Tool). Scholarpedia 2(4), 1430. doi: 10.4249/scholarpedia.1430 4. Y.V. Zaytsev & A. Morrison (2014) Front. Neuroinform. 8:23. doi: 10.3389/fninf.2014.00023

Copyright 2019 Linssen, Eppler, Morrison under Creative Commons Attribution License (CC BY 4.0).

NEST Conference 2019

Abstracts

Poster presentations

Understanding the NEST community

Steffen Graber1,2, Jessica Mitchell1

1 Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany 2 Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany

Email: s.graber@fz-juelich.de Since NEST [1] first appeared in public, we have heard the same questions asked: Is there good documentation? Is it easy to install? Who uses NEST? How often is NEST used? How many publications are there? Which quantitative indicators show the “impact” of NEST, and how can these be documented? Conferences, workshops, hackathons, institutional PR, open video meetings, mailing lists, GitHub as an open development platform, Sphinx and Readthedocs as documentation tools, websites, publications and citations: all these activities (and more) form the sources of data that tell us the current status and scientific significance of NEST. These data provide the justification for the further development of NEST, but also guide the optimization of NEST for the benefit of the community. For example, to improve the user experience, we have recently developed new NEST installation options for Ubuntu, Conda and Docker, which help new users get started more quickly with NEST. The poster illustrates the status of some data metrics and provides insight into the developments around the installation options of NEST. Gathering reliable data on the usage of NEST and preparing rational decisions on the improvement of its impact still remains surprisingly difficult. At the poster we would like to discuss how we can better understand the needs of the scientific community.

Acknowledgements Funding was received from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under grant agreements 720270 (HBP SGA1), 785907 (HBP SGA2); the Helmholtz Association Initiative and Networking Fund SO-092 (ACA).

References 1. Gewaltig M-O, Diesmann M (2007) NEST (Neural Simulation Tool). Scholarpedia 2(4):1430.

Copyright 2019 Graber, Mitchell under Creative Commons Attribution License (CC BY 4.0).

Existence and Detectability of High Frequency Oscillations in Spiking Network Models

Runar Helin1, Simon Essink2, Markus Diesmann2, Sonja Grün2, Hans Ekkehard Plesser1 1 Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway 2 Institute of Neuroscience and Medicine (INM-6) Jülich Research Centre, Jülich, Germany

Email: [email protected]

Very fast oscillations with frequencies of a few hundred Hz can be observed in spiking network models. The origin of these oscillations is not known, and they have not been observed in experimental recordings. Here I use the microcircuit model of a network in the visual cortex proposed by Potjans et al. [1] to analyze the fast oscillations and increase our understanding of this phenomenon. For details, see [2].

Here I first show that the oscillations are not caused by artifacts in the simulation. One way to see the oscillatory activity in the network is as peaks in the population-averaged power spectrum. I subsample neurons from each population and show that, for some populations, the peaks in the power spectra are visible in a subsample of around 100 neurons when the power spectrum is averaged over several trials.

Using the same approach as Bos et al. [3], the analytical power spectra are calculated for the microcircuit model. Here, I use different delay distributions to see how the oscillatory activity changes. The results show that the high-frequency oscillations disappear when using an exponential delay distribution. It is also shown that the oscillations are very sensitive to the choice of parameters of the delay distribution.

Finally, the activity from simulations of the more complex multi-area model [4] is shown together with results from the original microcircuit model and from the microcircuit model with an exponential delay distribution. The dynamics of the multi-area model shows no oscillations at high frequencies and considerably weaker oscillations at low frequencies compared to the microcircuit model. The use of an exponential delay distribution also removes the high-frequency oscillations, in agreement with the analytical results.

References 1. Potjans T, et al. (2014) The cell-type specific cortical microcircuit: Relating structure and activity in a full- scale spiking network model. Cerebral Cortex, 24(3):785-806. doi: 10.1093/cercor/bhs358 2. Helin, R (2019) Existence and Detectability of High Frequency Oscillations in Spiking Network Models. M.Sc. thesis. 3. Bos H, et al. (2016) Identifying Anatomical Origins of Coexisting Oscillations in the Cortical Microcircuit. PLoS Computational Biology, 12(10):1-34. doi: 10.1371/journal.pcbi.1005132 4. Schmidt M, et al. (2018) Multi-scale account of the network structure of macaque visual cortex. Brain Struct Funct. 223(3):1409-1435. doi: 10.1007/s00429-017-1554-4

Copyright 2019 Helin, Essink, Diesmann, Grün, Plesser under Creative Commons Attribution License (CC BY 4.0).

Large scale modeling of the mouse brain dynamics

Lionel Kusch1, Spase Petkoski1, Viktor Jirsa1 1 Institut de Neurosciences des Systèmes, INSERM UMR 1106, Aix-Marseille Université, Marseille, France Email: [email protected]

Modeling the mouse whole-brain dynamics with spiking neural networks can be performed using bottom-up models such as the Blue Brain project [1]. Another strategy is the paradigm of brain network models based on the structural connectivity of the mouse brain, i.e. its connectome. This approach is implemented in the neuroinformatics platform The Virtual Brain (TVB) [2], where tracing data from the Allen Institute [3] can be combined with a range of neural mass models [4]. The latter are better suited for study from a dynamical-systems viewpoint, while the complexity of the former makes them more physiologically plausible.

In this work we bridge the two approaches, by building a connectome based large-scale brain network model, where each region contains a population or surface of spiking neurons, thus allowing direct link to neuroimaging data, while increasing the biological realism.

As a first step, we use our modeling paradigm to analyze the impact of heterogeneous connectivity on network synchronisation. A similar analysis has already been performed using a FitzHugh–Nagumo network on a torus [5]. We reproduce the results using adaptive exponential integrate-and-fire neurons, a more complex and realistic neuron model [6]. After this, we analyze the dynamics of the whole-brain model and compare the simulated activity with experimental results, with a focus on different metrics of functional connectivity. This allows us to link the results at the level of brain activity to the spiking neural networks, and to validate the model using functional data. Hence, the new modelling approach allows bridging from neural mass models to spiking neural networks.
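For reference, the adaptive exponential integrate-and-fire (AdEx) model of [6] reads, in its standard form (symbol names as in the original publication):

```latex
C \frac{dV}{dt} = -g_L\,(V - E_L) + g_L\,\Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right) - w + I,
\qquad
\tau_w \frac{dw}{dt} = a\,(V - E_L) - w,
```

with the reset $V \to V_r$, $w \to w + b$ applied whenever $V$ crosses the spike threshold. The adaptation variable $w$ is what gives the model its richer repertoire of firing patterns compared to the FitzHugh–Nagumo units of [5].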

References 1. Blue brain [https://www.epfl.ch/research/domains/bluebrain/] 2. Sanz Leon, Paula, Stuart A. Knock, M. Marmaduke Woodman, Lia Domide, Jochen Mersmann, Anthony R. McIntosh, and Viktor Jirsa. The Virtual Brain: A Simulator of Primate Brain Network Dynamics. Frontiers in Neuroinformatics 7 (2013). https://doi.org/10.3389/fninf.2013.00010. 3. Oh, Seung Wook, Julie A. Harris, Lydia Ng, Brent Winslow, Nicholas Cain, Stefan Mihalas, Quanxin Wang, et al. A Mesoscale Connectome of the Mouse Brain Nature 508, no. 7495 (April 2014): 207–14. https://doi.org/10.1038/nature13186. 4. Melozzi, Francesca, Marmaduke M. Woodman, Viktor K. Jirsa, and Christophe Bernard The Virtual Mouse Brain: A Computational Neuroinformatics Platform to Study Whole Mouse Brain Dynamics ENEURO.0111-17.2017. https://doi.org/10.1523/ENEURO.0111-17.2017. 5. Viktor K. Jirsa and Roxana A. Stefanescu, Neural Population Modes Capture Biologically Realistic Large Scale Network Dynamics Bulletin of Mathematical Biology 73, no. 2 (February 2011): 325–43, 2011. http://doi.org/10.1007/s11538-010-9573-9 6. Touboul, Jonathan, and Romain Brette. Dynamics and Bifurcations of the Adaptive Exponential Integrate-and- Fire Model Biological Cybernetics 99, no. 4–5 (November 1, 2008): 319. https://doi.org/10.1007/s00422-008-0267- 4.

Copyright 2019 Kusch, Petkoski, Jirsa under Creative Commons Attribution License (CC BY 4.0).

Reconstruction and simulation of the cerebellar microcircuit: a scaffold strategy to embed different levels of neuronal details

Elisa Marenzi1, Stefano Casali1, Claudia Casellato1, Egidio D’Angelo1,2 1 Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy 2 IRCCS Mondino Foundation, Pavia, Italy

Email: [email protected] Computational models allow propagating microscopic phenomena into large-scale networks and inferring causal relationships across scales. Here we reconstruct the cerebellar circuit by bottom-up modeling, reproducing the peculiar properties of this structure, which shows a quasi-crystalline geometrical organization well defined by convergence/divergence ratios of neuronal connections and by the anisotropic 3D orientation of dendritic and axonal processes [1]. Therefore, a cerebellum scaffold model has been developed and tested. It maintains scalability and can be flexibly handled to incorporate neuronal properties on multiple scales of complexity. The cerebellar scaffold includes all the canonical neuron types. Placement was based on density and encumbrance values, while connectivity was based on the specific geometry of dendritic and axonal fields and on distance-based probability. In the first release, spiking point-neuron models based on Integrate&Fire dynamics with exponential synapses were used. The network was run in the neural simulator pyNEST. Complex spatiotemporal patterns of activity, similar to those observed in vivo, emerged [2]. For a second release of the microcircuit model, an extension of the generalized Leaky Integrate&Fire model has been developed, optimized for each cerebellar neuron type and inserted into the built scaffold [3]. It could reproduce a rich variety of electroresponsive patterns with a single set of optimal parameters. After that, point neurons have been replaced by detailed 3D multi-compartment neuron models, and the network was run in the neural simulator pyNEURON. Further properties emerged, strictly linked to the morphology and the specific properties of each compartment. This multiscale tool with different levels of realism has the potential to summarize in a comprehensive way the electrophysiological intrinsic neural properties that drive network dynamics and high-level behaviors.
Acknowledgements This research has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2), and was supported by the HBP Brain Simulation Platform funded under the same agreement. References 1. D'Angelo E, et al. (2016) Modeling the Cerebellar Microcircuit: New Strategies for a Long-Standing Issue. Front Cell Neurosci. 10:176, 1-29. doi: 10.3389/fncel.2016.00176 2. Casali S, et al. (2019) Reconstruction and Simulation of a Scaffold Model of the Cerebellar Network. Front Neuroinform. 13(37):1-19. doi: 10.3389/fninf.2019.00037 3. Geminiani A, et al. (2018) Complex dynamics in simplified neuronal models: reproducing Golgi cell electroresponsiveness. Front Neuroinform. 12:1-19. doi: 10.3389/fninf.2018.0008

Copyright 2019 Marenzi, Casali, Casellato, D’Angelo under Creative Commons Attribution License (CC BY 4.0). NEST Conference 24–25 June 2019

Computing the Electroencephalogram (EEG) from Point-Neuron Networks

Pablo Martínez-Cañada1,2,3, Torbjørn V. Ness4, Tommaso Fellin1,2, Gaute T. Einevoll4,5, Stefano Panzeri2,3 1 Optical Approaches to Brain Function Laboratory, Istituto Italiano di Tecnologia, Genova, Italy 2 Neural Coding Laboratory, Istituto Italiano di Tecnologia, Genova and Rovereto, Italy 3 Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy 4 Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway 5 Department of Physics, University of Oslo, Oslo, Norway

Email: [email protected]

Electroencephalography (EEG) is a powerful technique for non-invasively measuring neuronal activity, with important applications both in science and in the clinic. To make full use of EEG, however, it is necessary to better understand the link between the observed EEG signals and the underlying neural activity; this is highly non-trivial due to the large number of neurons contributing to the measured signal [1]. Large-scale simulation of neural activity using simple point neurons is an important tool for studying and understanding neural network activity. Such point neurons, though highly computationally efficient, are in general incapable of predicting electrical measurement signals such as local field potentials (LFPs) and EEGs. To remedy this, we used network modelling as a tool to bridge the gap between EEG and neuron dynamics. We asked whether a combination of variables available from the simulation of a thoroughly analyzed point-neuron network model [2, 3] could accurately approximate the EEG, as successfully demonstrated before for the LFP [4]. We evaluated different candidate EEG proxies (e.g., firing rates, membrane potentials, synaptic currents) and compared their outputs with the EEG generated when the spikes produced by the point-neuron network serve as synaptic input to a population of mutually unconnected multicompartment model neurons with realistic morphologies (hybrid modelling scheme [5]). We found that a specific combination of AMPA and GABA synaptic currents provided the most accurate proxy for the EEG signal. In conclusion, our results can be used to calculate plausible EEG signals directly from point-neuron simulations.
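The general shape of such a current-based proxy can be sketched as below. This is a schematic illustration only: the weighting factor `w_gaba` and the delay are placeholder values, not the coefficients fitted in this study or in the earlier LFP work [4].

```python
def eeg_proxy(ampa, gaba, w_gaba=1.65, delay_steps=6):
    """Combine population-summed synaptic current time series into a proxy
    of the form |AMPA(t)| + w_gaba * |GABA(t - delay)|.
    `ampa` and `gaba` are equal-length lists of per-time-step currents;
    the weight and delay here are placeholders, not fitted values."""
    proxy = []
    for t in range(len(ampa)):
        # Before the delay window has filled, treat the GABA term as zero.
        g = gaba[t - delay_steps] if t >= delay_steps else 0.0
        proxy.append(abs(ampa[t]) + w_gaba * abs(g))
    return proxy
```

The point of such a proxy is that every input it needs is already available from a point-neuron simulation, so no multicompartment forward modelling is required at simulation time.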

Acknowledgements

This work received funding from the European Union Horizon 2020 Research and Innovation Programme under Grant Agreement 785907 (Human Brain Project (HBP) SGA2) and the Research Council of Norway (Notur, nn4661k).

References 1. Cohen (2017) Trends Neurosci, 40(4), 208-218. 2. Brunel & Wang (2003) J Neurophysiol, 90(1), 415-430. 3. Cavallari et al. (2014) Front Neural Circuits, 8, 12. 4. Mazzoni et al. (2015) PLoS Comput Biol, 11(12), e1004584. 5. Hagen et al. (2016) Cerebral Cortex, 1-36.

Copyright 2019 Martínez-Cañada and Ness under Creative Commons Attribution License (CC BY 4.0).

Improving NEST network construction performance with BlockVector and Spreadsort

Håkon Mørk1, Hans E Plesser1,2 1 Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway 2 Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany Email: [email protected]

Here we present two efficiency improvements aimed at large-scale simulations. First, when connecting large networks, memory management suffers when standard contiguous arrays are used; we therefore present a new container that handles memory more efficiently. Second, simulation requires connections to be sorted, and in large-scale simulations it is important to use a stable and efficient sorting algorithm; we have therefore developed a way to use Boost’s sorting function.

The default option for storing connections is a contiguous array, such as std::vector. The issue with contiguous arrays is that whenever the memory reserved for them is exhausted, a larger chunk of memory must be allocated and all data copied, temporarily requiring memory for both the old and the new data. As individual containers grow large, this entails significant memory overhead. We therefore introduce the BlockVector container, designed to avoid the memory issues associated with contiguous arrays while preserving as much of their performance as possible. Unlike contiguous arrays, BlockVector stores elements in fixed-size blocks, which are in turn stored in a blockmap. The structure is thus similar to that of a double-ended queue, but with a different implementation that gives more efficient random access to elements. In this way BlockVector avoids relocating all data to a larger chunk of memory and limits redundancy in allocated memory.
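The idea behind block-based storage can be sketched as follows. This is a minimal Python illustration of the concept, not NEST's C++ implementation; the block size and class interface are chosen for demonstration only.

```python
class BlockVector:
    """Append-only container storing elements in fixed-size blocks, so that
    growth allocates one new block instead of copying all existing data."""

    def __init__(self, block_size=1024):
        self.block_size = block_size
        self.blocks = []   # the "blockmap": a list of fixed-size blocks
        self.size = 0

    def append(self, value):
        # Allocate a fresh block only when the last one is full.
        if self.size % self.block_size == 0:
            self.blocks.append([None] * self.block_size)
        block, offset = divmod(self.size, self.block_size)
        self.blocks[block][offset] = value
        self.size += 1

    def __getitem__(self, index):
        # Random access stays O(1): one division locates block and offset.
        if not 0 <= index < self.size:
            raise IndexError(index)
        block, offset = divmod(index, self.block_size)
        return self.blocks[block][offset]

    def __len__(self):
        return self.size
```

Because the blocks never move once allocated, peak memory during growth is bounded by one extra block rather than by a full copy of the container.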

Connections in NEST are sorted using a quicksort algorithm in which one array is sorted while the same permutations are applied to a second array. However, quicksort has quadratic worst-case behaviour, which we have observed in practice. We therefore created an adapter that enables the use of library sorting algorithms. Boost's [1] sorting algorithm spreadsort [2], which has a worst case of O(n log n), can then be used. With it, sorting of large arrays is performed approximately twice as fast as with quicksort.
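The sort-one-array-while-permuting-another pattern can be illustrated in a few lines. This is a conceptual sketch in Python, standing in for the C++ iterator adapter; Python's stable Timsort plays the role of spreadsort here.

```python
def co_sort(keys, values):
    """Sort `keys` and apply the identical permutation to `values`.
    The underlying algorithm is the standard library's stable sort;
    in NEST the same pattern is driven by Boost's spreadsort instead."""
    order = sorted(range(len(keys)), key=keys.__getitem__)
    return [keys[i] for i in order], [values[i] for i in order]
```

Stability matters here: equal keys must keep their relative order so that repeated sorts of connection data remain reproducible.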

Acknowledgements The authors would like to thank Susanne Kunkel and Stine Vennemo for useful discussions. This project has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreement No. 785907 (Human Brain Project SGA2).

References 1. Boost [www.boost.org] 2. Ross, S. J. (2002) The Spreadsort High-performance General-case Sorting Algorithm. PDPTA:1100-1106.

Copyright 2019 Mørk, Plesser under Creative Commons Attribution License (CC BY 4.0).

Multi-area spiking network models of macaque and human cortices

Jari Pronold1, Alexander van Meegen1, Rembrandt Bakker1,2, Aitor Morales-Gregorio1, Sacha van Albada1 1 Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany 2 Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Netherlands Email: [email protected]

Understanding the wiring of the brain at the micro-, meso- and macroscale and its influence on neuronal activity is a fundamental problem in neuroscience. Here we present a multi-scale spiking network model of all vision related areas of macaque cortex [1] using the NEST simulator and outline how we aim to simulate human visual cortex.

The connectivity map in our model of the macaque visual cortex integrates data on cortical architecture and axonal tracing data into a consistent multi-scale framework and predicts the connection probability between any two neurons based on their types and locations within areas and layers [1]. Simulations using this connectivity map reveal a stable asynchronous irregular ground state with heterogeneous activity across areas, layers and populations [2]. The model of human visual cortex will make use of this framework, replacing neuron densities, laminar thicknesses, and cortico-cortical connectivity with estimates for the human brain. To set up the framework, we will first model a full cortical hemisphere using published data on cortical architecture [3]. Human-macaque homologies and DTI data will provide reference values for comparison of the cortico-cortical connectivity map. These models will help to elucidate how the detailed connectivity of cortex shapes its dynamics on multiple scales and how prominent features of cortical activity can be explained by population-level connectivity.

Acknowledgements Supported by Priority Program 2041 (SPP 2041) "Computational Connectomics" of the German Research Foundation (DFG) and by the European Union's Horizon 2020 research and innovation program under HBP SGA1 (grant agreement no. 720270) and the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2). Simulations on the JURECA supercomputer at the Jülich Supercomputing Centre are enabled by Computation Time Grant JINB33.

References 1. Schmidt, M., Bakker, R., Hilgetag, C. C., Diesmann, M., & van Albada, S. J. (2018). Multi-scale account of the network structure of macaque visual cortex. Brain Structure and Function, 223(3), 1409-1435. doi: 10.1007/s00429-017-1554-4 2. Schmidt, M., Bakker, R., Shen, K., Bezgin, G., Diesmann, M. et al. (2018) A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. PLoS Computational Biology, 14(10), e1006359. doi: 10.1371/journal.pcbi.1006359 3. von Economo, C. F., Koskinas, G. N., & Triarhou, L. C. (2008). Atlas of cytoarchitectonics of the adult human cerebral cortex. Basel: Karger.

Copyright 2019 Pronold, van Meegen, Bakker, Morales-Gregorio, van Albada under Creative Commons Attribution License (CC BY 4.0).

Connectivity Concepts for Neuronal Networks

Johanna Senk1, Birgit Kriener2, Espen Hagen3, Hannah Bos4, Hans E. Plesser1,5, Marc-Oliver Gewaltig6 , Markus Diesmann1,7,8, Mikael Djurfeldt9, Nicole Voges10, Sacha J. van Albada1 1 Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany 2 Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway 3 Department of Physics, University of Oslo, Oslo, Norway 4 Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA 5 Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway 6 Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland 7 Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany 8 Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany 9 PDC Center for High-Performance Computing, KTH Royal Institute of Technology, Stockholm, Sweden 10 INT UMR 7289, Aix-Marseille University, Marseille, France

Email: [email protected]

A statement like “Ns source neurons and Nt target neurons are connected randomly with connection probability p” may be used to describe the structure of a neuronal network model, but its interpretation is inherently ambiguous. One missing detail, for example, is the distribution of incoming and outgoing connections; different choices here can produce substantial differences in network dynamics. For reproducible research, unambiguous network descriptions and corresponding algorithmic implementations are necessary [1]. Here, we review simulation software (e.g., NEST [2]), specification languages (e.g., CSA [3]), and published network models made available by the community in databases such as ModelDB [4] and Open Source Brain [5]. We investigate the network structures computational neuroscientists use in their models and the terminology they use to describe these models. From this, we derive a set of connectivity concepts providing modelers with guidelines for specifying connectivity in a complete and concise way. Furthermore, this work aims to guide the comprehensive and efficient implementation of connection routines in simulation software like NEST, thereby facilitating reproducible research on network models.
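The ambiguity of the quoted statement can be made concrete with two sketch implementations that both arguably match it yet produce different degree distributions. These are illustrative stand-alone functions, not NEST's connection routines (NEST distinguishes such rules explicitly, e.g. as pairwise-Bernoulli versus fixed-in-degree connectivity).

```python
import random

def pairwise_bernoulli(n_source, n_target, p, rng):
    """Interpretation 1: each source-target pair is connected independently
    with probability p, so in-degrees are binomially distributed."""
    return [(s, t) for s in range(n_source) for t in range(n_target)
            if rng.random() < p]

def fixed_indegree(n_source, n_target, indegree, rng):
    """Interpretation 2: each target draws exactly `indegree` sources
    (here with replacement), so all in-degrees are identical."""
    return [(rng.randrange(n_source), t)
            for t in range(n_target) for _ in range(indegree)]
```

Both readings give the same expected connection count when `indegree = p * n_source`, but the in-degree variance differs, which is exactly the kind of unstated detail that can change network dynamics.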

Acknowledgements Funding was received from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under grant agreements 720270 (HBP SGA1), 785907 (HBP SGA2), and 754304 (DEEP-EST); the Research Council of Norway (DigiBrain 248828 and CoBra 250128); Deutsche Forschungsgemeinschaft grant AL 2041/1-1 of the Priority Program (SPP 2041) and RTG 2416 “Multi-senses Multi-scales”; the Helmholtz Association Initiative and Networking Fund SO-902 (ACA); and VSR Computation Time Grant Brain-Scale Simulations JINB33. References 1. Nordlie E, et al. (2009) Towards Reproducible Descriptions of Neuronal Network Models. PLoS Comput Biol. 5(8):e1000456. doi:10.1371/journal.pcbi.1000456 2. Gewaltig M-O and Diesmann M (2007) NEST (NEural Simulation Tool). Scholarpedia. 2(4):1430. doi:10.4249/scholarpedia.1430 3. Djurfeldt M (2012) The Connection-set Algebra—A Novel Formalism for the Representation of Connectivity Structure in Neuronal Network Models. Neuroinform. 10:287–304. doi:10.1007/s12021-012-9146-1 4. ModelDB [https://senselab.med.yale.edu/modeldb] 5. Gleeson P, Cantarelli M, Marin B, ..., van Albada SJ, van Geit W, Silver RA (in press) Open Source Brain: a collaborative resource for visualizing, analyzing, simulating and developing standardized models of neurons and circuits. Neuron.

Copyright 2019 Senk, Kriener, Hagen, Bos, Plesser, Gewaltig, Diesmann, Djurfeldt, Voges, van Albada under Creative Commons Attribution License (CC BY 4.0).

Modeling cortical and thalamocortical mechanisms involved in alpha generation

Renan O Shimoura1,2, Antonio C Roque1, Markus Diesmann2, Sacha J van Albada2 1 University of São Paulo, Department of Physics, Ribeirão Preto, Brazil 2 Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany Email: [email protected]

One of the most prominent features in the waking electroencephalogram of a variety of mammals, observed mainly during eyes-closed rest, is the alpha rhythm at around 10 Hz. Although alpha is strongly associated with reduced visual attention, it is also related to other functions, such as regulating the timing and temporal resolution of perception and facilitating the transmission of predictions to visual cortex. Understanding how and where this rhythm is generated can elucidate its functions. In this regard, two possible alpha generators were studied: 1) Pyramidal cortical neurons of layer 5 (L5), which produce rhythmic bursts close to 10 Hz after stimulation by a short current pulse [1], were introduced into a full-scale layered network of adaptive exponential integrate-and-fire neurons. 2) A thalamocortical loop delay of around 100 ms, previously proposed in mean-field models [2], was evaluated for different combinations of thalamocortical and corticothalamic delays. Cortical neurons in layers 4 (L4) and 6 (L6) received thalamocortical connections; in turn, L6 neurons sent feedback to the thalamus. Moreover, the number of thalamocortical connections onto L4 neurons was recently found to be twice as high as usually described [3], which we took into account. All simulations were performed using the neural network simulator NEST [4]. The results show that intrinsically bursting neurons in L5 are able to generate slow network oscillations close to 10 Hz. Furthermore, alpha oscillations can emerge through the thalamocortical loop mechanism when the thalamocortical delay is sufficiently smaller than the corticothalamic one. Thus, both mechanisms potentially contribute to generating and sustaining the alpha rhythm in the thalamocortical network.

Acknowledgements This work is part of the activities of FAPESP RIDC for Neuromathematics (Grant 2013/07699-0, S. Paulo Research Foundation). ROS is recipient of FAPESP scholarships: 2017/07688-9 and 2018/08556-1. ACR is partially supported by the CNPq fellowship Grant 306251/2014-0. ACR is also part of the IRTG 1740/TRP 2015/50122-0, funded by DFG/FAPESP. Supported by the European Union’s Horizon 2020 Framework Programme for Research and Innovation (grant 785907, Human Brain Project SGA2), the Jülich-Aachen Research Alliance (JARA), and DFG SPP 2041 "Computational Connectomics".

References 1. Silva L, Amitai Y & Connors B (1991). Intrinsic oscillations of neocortex generated by layer 5 pyramidal neurons. Science, 251(4992), 432–435. 2. Roberts JA & Robinson PA (2008). Modeling absence seizure dynamics: implications for basic mechanisms and measurement of thalamocortical and corticothalamic latencies. J. Theor. Biol., 253(1), 189–201. 3. Garcia-Marin V, Kelly JG & Hawken MJ (2017). Major Feedforward Thalamic Input Into Layer 4C of Primary Visual Cortex in Primate. Cereb. Cortex, 29(1), 1–16. 4. Linssen C, et al. (2018) NEST 2.16.0 (Version 2.16.0). Zenodo. doi: 10.5281/zenodo.1400175

Copyright 2019 Shimoura, Roque, Diesmann, van Albada under Creative Commons Attribution License (CC BY 4.0).

Clopath and Urbanczik-Senn plasticity: New learning rules in NEST

Jonas Stapmanns1,2, Jan Hahne3, David Dahmen1, Moritz Helias1,2, Matthias Bolten3, Markus Diesmann1,4,5 1 Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany 2 Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany 3 School of Mathematics and Natural Sciences, University of Wuppertal, Wuppertal, Germany 4 Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany 5 Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany

Email: [email protected]

In recent years, a new class of biologically inspired synaptic plasticity rules for spiking neural networks has been developed. In contrast to spike-timing-dependent plasticity (STDP), these learning rules involve the membrane potential of the postsynaptic neuron as an additional factor. The Clopath rule [1] can be seen as a prototypical example, since in this rule long-term potentiation depends on the presynaptic spike arrival and a filtered version of the postsynaptic membrane potential. This additional voltage dependence enables the Clopath rule to describe phenomena that are not covered by STDP but are observed in experimental data, such as the complex frequency dependence of synaptic weight changes in spike-pairing experiments and the occurrence of strong bidirectional connections in networks. In this contribution we present an implementation of the Clopath rule available in NEST 2.18.0. We show how dependencies on continuous quantities, like the postsynaptic membrane potential, can be integrated into the event-based synapse-updating scheme of NEST, and we present the general capabilities of the implementation. We further give an outlook on how the concepts of the implementation can be used to realize similar plasticity rules, such as the Urbanczik-Senn rule [2] or e-prop [3]. These rules are more biologically plausible than machine-learning algorithms like back-propagation of errors and back-propagation through time, and they seem to be a promising approach to training deep and recurrent spiking neural networks.
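The structure of the voltage-dependent potentiation term in the Clopath rule [1] can be sketched schematically as follows. This is a conceptual illustration of the functional form, not the NEST implementation; the amplitude and threshold values are placeholders, not NEST's defaults or the fitted parameters from [1].

```python
def clopath_ltp_step(w, x_pre, u, u_bar,
                     a_ltp=1e-4, theta_plus=-45.0, theta_minus=-60.0):
    """One schematic LTP update of the Clopath rule: potentiation requires
    a presynaptic spike trace `x_pre` together with a depolarized
    instantaneous membrane potential `u` (above theta_plus) and a
    depolarized low-pass-filtered potential `u_bar` (above theta_minus).
    All parameter values here are placeholders."""
    # Rectified voltage terms: no potentiation unless both thresholds are crossed.
    dw = a_ltp * x_pre * max(u - theta_plus, 0.0) * max(u_bar - theta_minus, 0.0)
    return w + dw
```

The rectified dependence on both the instantaneous and the filtered membrane potential is what lets the rule capture frequency effects that pure spike-timing rules miss: at high pairing frequencies the filtered potential stays depolarized, gating potentiation on.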

Acknowledgements This research has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2). All spiking network simulations were carried out with NEST (http://www.nest-simulator.org).

References 1. Clopath, Büsing, Vasilaki and Gerstner (2010) Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nature Neuroscience 13:3, 344-352, doi:10.1038/nn.2479 2. Urbanczik and Senn (2014) Learning by the dendritic prediction of somatic spiking. Neuron. 81(3):521-8. doi: 10.1016/j.neuron.2013.11.030. 3. Bellec, Scherr, Hajek, Salaj, Legenstein and Maass (2019) Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets. arXiv:1901.09049 [cs.NE]

Copyright 2019 Stapmanns, Hahne, Dahmen, Helias, Bolten, Diesmann under Creative Commons Attribution License (CC BY 4.0).

Presenting new NEST benchmarks

Stine B. Vennemo1, Håkon Mørk1, Hans E. Plesser1,2 1 Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway 2 Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany Email: [email protected]

Here we present newly generated benchmarks for NEST, run on the supercomputer Piz Daint. We have run a number of weak-scaling benchmarks, focusing on connection time, for a wide range of large-scale models. These showcase NEST’s strengths but also some of its weaknesses, indicating where future development should focus. The benchmarks include both realistic neuroscientific models and synthetic ones, chosen to cover as wide a range of use cases as possible.

Overall, the benchmarks showed good weak scaling for large-scale models, especially when Connect is called few times with many neurons at once rather than many times with few. Furthermore, we see that sorting the connections is something of a bottleneck, and that communication between the ranks comes into play once simulation starts. Unsurprisingly, the fully parallelized connection rules are the most efficient ones. Looking at memory consumption, we again see that the results scale satisfactorily.

Acknowledgements This project has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2).

Copyright 2019 Vennemo, Mørk, Plesser under Creative Commons Attribution License (CC BY 4.0).