NEST Conference 2020 — Program

Monday, 29 June (all times CEST)

Morning

08:45  Registration
09:05  Welcome & Introduction
       Susanne Kunkel

Session 1 (Chair: Markus Diesmann)
09:25  NEST user-level documentation: Why, how and what's next?
       Sara Konradi
09:45  New Fenix resources at the Jülich Supercomputing Centre
       Alexander Patronis
09:55  Minibreak
10:00  Toward a possible integration of NeuronGPU in NEST
       Bruno Golosio
10:45  Coffee breakout

Workshop 1 (Chair: Håkon Mørk)
11:15  NEST multiscale co-simulation
       Wouter Klijn
12:15  Lunch break

Afternoon

Session 2 (Chair: Abigail Morrison)
13:15  Thalamo-cortical spiking model of incremental learning combining perception, context and NREM-sleep-mediated noise-resilience
       Chiara De Luca
14:00  Minibreak
14:05  An Integrated Model of Visual Perception and Reinforcement
       Petia Koprinkova-Hristova
14:25  Partial Information Decomposition Contextual Neurons in NEST
       Sepehr Mahmoudian
14:45  Coffee breakout

Session 3 (Chair: Hans Ekkehard Plesser)
15:15  deNEST: a declarative frontend for specifying networks and running simulations in NEST
       Tom Bugnon
15:35  Primate perisylvian cortex in NEST: from reflex to small world
       Harry Howard
15:55  Minibreak
16:00  Modeling robust and efficient coding in the mouse primary visual cortex using computational perturbations
       Stefan Mihalas
16:45  Wrap-up
       Hans Ekkehard Plesser
17:00  Mingle

Tuesday, 30 June (all times CEST)

Morning

Session 4 (Chair: Dennis Terhorst)
09:00  A spiking neural network builder for systematic data-to-model workflow
       Carlos E Gutierrez
09:20  Insite: A Generalized Pipeline for In-transit Visualization and Analysis
       Simon Oehrl
09:40  Minibreak
09:45  NEST Component for Modeling Spiking Neuron-Astrocyte Networks
       Jugoslava Aćimović
10:30  Coffee breakout

Session 5 (Chair: Tanguy Fardet)
11:00  Atlas-mapped reconstruction and simulation of the cerebellar Lingula in NEST
       Alice Geminiani
11:20  Cerebro-cerebellar loops for sensorimotor adaptation
       Alberto Antonietti
11:40  Minibreak
11:45  Modeling plasticity inducing stimulation protocols in recurrent neuronal networks with homeostatic structural plasticity
       Swathi Anil
12:30  Lunch break

Afternoon

Session 6 (Chair: Jochen Eppler)
13:30  Dendrites in NEST
       Willem Wybo
13:50  Just-in-time compilation for NESTML models in NEST Simulator
       Charl Linssen
14:10  NEST Desktop: A web-based GUI for the NEST Simulator
       Sebastian Spreizer
14:30  Minibreak
14:40  Evolving interpretable plasticity rules using NEST
       Jakob Jordan
15:25  Coffee breakout

Workshop 2 (Chair: Stine Vennemo)
15:45  deNEST: a declarative frontend for specifying networks and running simulations in NEST
       Tom Bugnon & William Mayner
16:45  Closing Remarks
       Abigail Morrison


NEST Conference 2020 — Abstracts

Abstracts are sorted chronologically by session and day.


NEST user-level documentation: Why, how and what's next?
Sara Konradi¹, Steffen Graber¹, Dennis Terhorst¹
¹ Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
Email: [email protected]

Since the previous NEST Conference in 2019, we have made progress in improving the user-level documentation of NEST [1] with respect to completeness and accessibility, towards the goal of operating NEST as part of the European research infrastructure EBRAINS from 2023. The concept of the user-level documentation has been worked out in a two-year process with the community. The talk summarizes the motivation behind high-quality documentation [2] and what we have achieved so far.

We present how we use documentation-specific technology such as Sphinx [3] and Read the Docs [4] to produce user-level documentation that is easy to write, maintain, find, use and understand. An outcome of our analysis is that, for a community code, the user-level documentation also needs to be a community effort to ensure scalability and sustainability. Therefore, the talk discusses concepts of user-correctable documentation and its workflow. Finally, we outline the goals of the project for the next three years.
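As a rough illustration of this tooling, a minimal Sphinx configuration of the kind Read the Docs builds from might look as follows. The project metadata, extension selection and theme shown here are illustrative assumptions, not NEST's actual configuration.

```python
# docs/conf.py -- minimal, illustrative Sphinx configuration (not NEST's actual one).
# Read the Docs picks this file up and rebuilds the HTML documentation on every push.

project = "NEST Simulator"
author = "The NEST Initiative"

# Commonly used Sphinx extensions; this selection is an assumption for illustration.
extensions = [
    "sphinx.ext.autodoc",      # pull reference material from Python docstrings
    "sphinx.ext.napoleon",     # parse NumPy/Google-style docstrings
    "sphinx.ext.intersphinx",  # cross-link to other projects' documentation
]

# The Read the Docs theme gives the rendered pages their familiar look and feel.
html_theme = "sphinx_rtd_theme"
```

Because such sources live alongside the code, users can propose corrections through the ordinary pull-request workflow, which is one way the user-correctable documentation mentioned above can be realized in practice.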
Figure 1. Upgrade from previously static HTML to new documentation on Read the Docs.

Acknowledgements
Funding was received from the European Union's Horizon 2020 Framework Programme for Research and Innovation under grant agreements 720270 (HBP SGA1) and 785907 (HBP SGA2), and from the Helmholtz Association Initiative and Networking Fund SO-902 (ACA).

References
1. NEST Simulator (www.nest-simulator.org)
2. Knuth, D. E. (1984). Literate programming. The Computer Journal, 27(2), 97-111.
3. Sphinx (www.sphinx-doc.org)
4. Read the Docs (www.readthedocs.org)

Copyright 2020 S. Konradi, S. Graber & D. Terhorst under Creative Commons Attribution License (CC BY-NC 4.0).


New Fenix resources at the Jülich Supercomputing Centre
Alexander Patronis¹, Dirk Pleiter¹
¹ Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich GmbH
Email: [email protected], [email protected]

Fenix is a distributed e-infrastructure providing HPC, cloud and storage resources, with the Human Brain Project being the prime and lead consumer. Access for other science and engineering communities is provided through PRACE. Initially, only the Swiss supercomputing centre CSCS provided resources; the German, French and Spanish supercomputing centres (JSC, CEA and BSC, respectively) have contributed more recently, and the Italian centre CINECA is set to provide additional resources. In this contribution, we will provide a brief overview of the Fenix infrastructure and the resources which have recently become available at the Jülich Supercomputing Centre (JSC).

The latest services offered at the JSC in support of the Fenix infrastructure are provided through JUSUF (Jülich Support for Fenix), a cluster comprising 205 nodes, each equipped with two of the latest-generation AMD EPYC (Rome) 64-core processors (offering 944.6 TFLOPs peak performance), 256 GB of main memory, and a local 1 TB NVMe device. The compute nodes are interconnected with a 100 Gb/s HDR InfiniBand fat-tree network. The majority of nodes are available for scalable or interactive computing within an HPC environment and are suitable for the simulation of large NEST models, while a subset of nodes is reserved to serve virtual machines in a cloud environment.
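For orientation, the quoted peak performance is consistent with a simple back-of-the-envelope estimate over all compute nodes. The base clock and FLOPs-per-cycle values below are assumptions about the EPYC Rome processors and are not stated in the abstract.

```python
# Rough sanity check of the quoted 944.6 TFLOPs peak figure (illustrative assumptions).
nodes = 205
cpus_per_node = 2
cores_per_cpu = 64
base_clock_hz = 2.25e9   # assumed base clock of the EPYC Rome parts
flops_per_cycle = 16     # assumed: two 256-bit FMA units per core, double precision

peak_flops = nodes * cpus_per_node * cores_per_cpu * base_clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.1f} TFLOPs")  # ~944.6 TFLOPs, matching the figure above
```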
Copyright 2020 A. Patronis & D. Pleiter under Creative Commons Attribution License (CC BY-NC 4.0).


Toward a possible integration of NeuronGPU in NEST
Bruno Golosio¹,², Chiara De Luca³, Elena Pastorelli³,⁴, Francesco Simula⁴, Gianmarco Tiddia¹, Pier Stanislao Paolucci⁴
¹ Dipartimento di Fisica, Università di Cagliari, Italy
² Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Italy
³ Ph.D. Program in Behavioural Neuroscience, "Sapienza" University of Rome
⁴ Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Italy
Email: [email protected]

NeuronGPU is a library for MPI-GPU simulation of large-scale networks of spiking neurons, written in the CUDA/C++ programming languages. The library currently includes several adaptive-exponential integrate-and-fire (AdEx) multisynapse models, with current- or conductance-based synapses, together with user-definable models and several devices, such as Poisson signal generators, spike generators, multimeters and spike recorders. Several connection rules are available, and in addition to the standard synapses the library includes STDP synapses. The numerical solution of the differential equations that describe the neuronal dynamics is based on a GPU implementation of the 5th-order Runge-Kutta method with adaptive step-size control. On a single Nvidia GeForce RTX 2080 Ti GPU board it is possible to simulate the activity of 1 million multisynapse conductance-based AdEx neurons with 1000 synapses per neuron, for a total of 1 billion synapses, in about 71 seconds for each second of neural activity, with a build time of about 97 seconds.

Recently, with a view to integrating this library into the well-known NEST simulator [1], NeuronGPU has been equipped with a Python interface analogous to that of the NEST simulator. The next step of this integration, currently in progress, is the implementation of an interface, based either on the MUSIC interface [2] or on more specialized functions, to manage a fast exchange of information between NEST and NeuronGPU in both directions. With this interface, the capabilities of NEST will be further extended to include the possibility of simulation on GPUs. Furthermore, this interface would offer the unique possibility of simulating hybrid networks, in which a very large number of neurons and connections based on simpler models are simulated on the GPU side, while a smaller number of neurons and connections based on more complex and detailed models are simulated on the CPU side and used both for consistency checks and to shed light on the relationship between the dynamics of the network and that of the neurons' internal variables.

Acknowledgements
This work has been supported by the European Union Horizon 2020 Research and Innovation programme under the FET Flagship Human Brain Project (grant agreement SGA3 n. 945539 and grant agreement SGA2 n. 785907) and by the INFN APE Parallel/Distributed Computing laboratory. The implementation of the AdEx model in this library closely follows the aeif_cond_beta_multisynapse model present in the NEST simulator, implemented by one of the authors, B. Golosio, in collaboration with Prof. Hans Ekkehard Plesser and Dr. Tanguy Fardet.

References
1. Fardet, Tanguy
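Since the new Python interface is described as analogous to that of NEST, a minimal PyNEST script for the corresponding CPU-side model gives an idea of what such a workflow looks like. The network size, parameters and connectivity below are purely illustrative, and NeuronGPU's own module is not shown because its exact API is not detailed here.

```python
# Minimal PyNEST sketch of the kind of script the NeuronGPU Python interface mirrors.
# Sizes and parameters are illustrative, not those of the benchmark described above.
import nest

nest.ResetKernel()

# Conductance-based AdEx multisynapse neurons (the NEST model the library's AdEx follows),
# configured with two receptor ports: one excitatory, one inhibitory.
neurons = nest.Create("aeif_cond_beta_multisynapse", 1000,
                      params={"E_rev": [0.0, -85.0],
                              "tau_rise": [2.0, 2.0],
                              "tau_decay": [20.0, 20.0]})

# External Poisson drive onto receptor port 1 of every neuron.
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
nest.Connect(noise, neurons, syn_spec={"receptor_type": 1, "weight": 5.0})

# Recurrent connectivity with a fixed in-degree, one of the available connection rules.
nest.Connect(neurons, neurons,
             conn_spec={"rule": "fixed_indegree", "indegree": 100},
             syn_spec={"receptor_type": 2, "weight": 1.0, "delay": 1.5})

# Record spikes (the device is called "spike_recorder" in NEST 3.x).
recorder = nest.Create("spike_detector")
nest.Connect(neurons, recorder)

nest.Simulate(1000.0)  # one second of biological time
print(nest.GetStatus(recorder, "n_events")[0], "spikes recorded")
```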
