Research Collection Doctoral Thesis
On randomness as a principle of structure and computation in neural networks
Author(s): Weissenberger, Felix
Publication Date: 2018
Permanent Link: https://doi.org/10.3929/ethz-b-000312548
Rights / License: In Copyright - Non-Commercial Use Permitted

Diss. ETH No. 25298

On randomness as a principle of structure and computation in neural networks

A thesis submitted to attain the degree of
Doctor of Sciences of ETH Zurich
(Dr. sc. ETH Zurich)

presented by
Felix Weissenberger
MSc ETH in Theoretical Computer Science
born on 04.08.1989
citizen of Germany

accepted on the recommendation of
Prof. Dr. Angelika Steger
Prof. Dr. Jean-Pascal Pfister
Dr. Johannes Lengler

2018

Contents

Abstract
Zusammenfassung
Thanks
1 Introduction
2 Emergence of synfire chains
3 Rate based learning with short stimuli
4 Mutual inhibition with few inhibitory cells
5 Lognormal synchrony in CA1
Bibliography

Abstract

This work examines the role of randomness in the structure and information processing of biological neural networks, and how it may improve our understanding of the nervous system.

Our approach is motivated by the pragmatic observation that many components and processes in the brain are intrinsically stochastic. Probability theory and its methods are therefore particularly well suited to the brain's analysis and modeling. More profoundly, our approach is based on the hypothesis that the stochasticity of the nervous system is much more than just an artifact of a biological system. This hope stems from the experience in probability theory that random structures often have highly desirable properties, and from the theory of randomized algorithms, which impressively demonstrates that chance is extremely useful for the efficient computation of solutions to many problems. It is therefore not surprising that randomness has also been given a fundamental role in the structure and information processing of the nervous system.

In this tradition, we study simple, mostly stochastic mathematical models of neurons, synapses, and their interaction in neural networks, and investigate emergent properties that can be proven mathematically, often with the help of discrete probability theory. The mathematical analysis allows us to extract the essential concepts, which can ultimately be fully understood. Furthermore, we simulate more complex models to check whether the knowledge gained in this way generalizes. This approach lets us quickly examine, test, and usually reject many hypotheses in purely theoretical considerations. Useful ideas, in turn, can inspire concrete biological experiments and predict their outcomes, or help to understand and interpret experiments already carried out. Throughout, we often draw inspiration from the field of discrete probability theory, especially random graph theory and the theory of randomized algorithms.

Concretely, we first show that the structure of biological neural networks favors the formation of so-called synfire chains, since it locally resembles the structure of directed random graphs. Synfire chains are an established model of multi-stage signal transmission in neural networks.
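To give a first numerical impression of this mechanism, the following sketch propagates pool-to-pool activity through a directed G(n, p) random graph. It is only a minimal illustration, not the model analyzed in Chapter 2: all parameters are assumed toy values, and the winner-take-all step that caps each pool at w neurons is a crude stand-in for inhibitory competition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters, assumed purely for illustration.
n, p = 5_000, 0.02   # number of neurons, directed connection probability
w, theta = 100, 5    # pool size, firing threshold (inputs needed to fire)

A = rng.random((n, n)) < p                   # A[i, j]: neuron j projects to neuron i
pool = rng.choice(n, size=w, replace=False)  # a random initial pool fires

for step in range(10):
    drive = A[:, pool].sum(axis=1)           # inputs each neuron receives from the pool
    firing = np.flatnonzero(drive >= theta)  # neurons crossing threshold
    if len(firing) < w:
        print(f"step {step}: chain broke, only {len(firing)} neurons fired")
        break
    # Keep the w most strongly driven neurons as the next pool; this
    # winner-take-all cap is a crude stand-in for inhibitory competition.
    pool = firing[np.argsort(drive[firing])[-w:]]
    print(f"step {step}: {len(firing)} neurons reached threshold")
```

With these numbers each pool reliably recruits a successor, since roughly n times P(Bin(w, p) >= theta), about 260 neurons, cross threshold at every step; raising theta to 6 starves the chain within a few steps. The thesis makes the boundary between these two regimes precise.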
Second, we demonstrate how the efficiency of rate-based synaptic plasticity can benefit from a dependence on the local membrane potential, as the fluctuations of this potential contain more relevant information than individual action potentials. Third, we prove that random synaptic connectivity, in combination with the nonlinear interaction of inhibitory synapses, allows mutual inhibitory communication between excitatory neurons even if the number of inhibitory neurons is much smaller than the number of excitatory neurons. Fourth, we provide a possible explanation for the experimental observation that the number of neurons firing during certain stereotypical network activity in the hippocampus follows a lognormal distribution: the synaptic transfer of normally distributed network activity from one area to the next leads to lognormally distributed activity there.
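The distributional core of this fourth result can be checked in a few lines: if the upstream drive is normally distributed and the transfer from one area to the next acts approximately exponentially, the downstream activity is lognormal by definition. The exponential transfer function below is an assumption made only for this sketch; Chapter 5 derives the actual mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy values, assumed only for illustration.
mu, sigma = 2.0, 0.4
drive = rng.normal(mu, sigma, size=100_000)  # normally distributed network activity
recruited = np.exp(drive)                    # assumed exponential synaptic transfer

# By definition, exp of a Normal(mu, sigma) variable is Lognormal(mu, sigma):
log_r = np.log(recruited)
print(f"log of downstream activity: mean {log_r.mean():.2f}, sd {log_r.std():.2f}")

# The heavy right tail shows up as mean > median and positive skewness.
skew = np.mean((recruited - recruited.mean()) ** 3) / recruited.std() ** 3
print(f"downstream mean {recruited.mean():.2f}, "
      f"median {np.median(recruited):.2f}, skewness {skew:.2f}")
```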
Zusammenfassung

This thesis examines, by way of example, the role of randomness in the structure and information processing of biological neural networks, and how we can exploit it to better understand the central nervous system.

Our approach is motivated, first, by the pragmatic observation that many components and processes of the brain are intrinsically stochastic. Probability theory and its methods are therefore particularly well suited to their analysis and modeling. More profoundly, our approach rests on the hypothesis that the stochasticity of the nervous system is far more than a mere artifact of a biological system. This hope stems from the experience in probability theory that random structures often have highly desirable properties, and from the theory of randomized algorithms, which impressively demonstrates that chance is extremely useful for efficiently computing solutions to many problems. It is therefore not surprising that randomness has also been assigned a fundamental role in the structure and information processing of the nervous system.

In this tradition, we consider simple, mostly stochastic mathematical models of neurons, synapses, and their interconnection in neural networks, and investigate emergent properties that can be proven mathematically, often with the help of discrete probability theory. Such an approach allows a reduction to the essential concepts, which can ultimately be fully understood. Furthermore, we simulate more complex models to check whether the insights gained in this way generalize. Thus, in purely theoretical considerations, we can quickly examine, test, and usually reject many hypotheses. Useful ideas, in turn, can motivate concrete biological experiments and predict their outcomes, or help to understand and interpret experiments already carried out. In doing so, we frequently draw inspiration from the field of discrete probability theory, especially random graph theory and the theory of randomized algorithms.

Concretely, we show, first, that the structure of biological neural networks favors the formation of so-called synfire chains, since it locally resembles the structure of directed random graphs. Synfire chains are an established model of multi-stage signal transmission in neural networks. Second, we demonstrate how the efficiency of synaptic plasticity can benefit from incorporating the local membrane potential, since the fluctuations of this potential contain more relevant information than individual action potentials. Third, we prove that random synaptic connections, in combination with the nonlinear interaction of inhibitory synapses, permit mutual inhibitory communication between excitatory neurons, even if the number of inhibitory neurons is much smaller than the number of excitatory neurons. Fourth, we provide a possible explanation for the experimental observation that the number of neurons firing during certain stereotypical network activity in the hippocampus follows a lognormal distribution: the synaptic transfer of normally distributed network activity from one area to the next leads to lognormally distributed activity there.

Thanks

Thank you to everybody who made my time at ETH so much fun!

First off, to my supervisor, Angelika Steger. I am sincerely grateful for the opportunity to work in your group. The environment at the intersection of combinatorics, neuroscience, and machine learning that you created is unique. Your trust, support, and advice mean a lot to me. I could not imagine a better boss or a more inspiring mentor. Thank you!

Thank you to Johannes Lengler, for your help, patience, and uplifting spirit. You have been incredibly supportive.

To Jean-Pascal Pfister, for invaluable feedback, for letting me participate in his group meetings, and for sacrificing his time to referee this thesis.

I also want to thank my other collaborators who contributed to this thesis; much of what is written here must be largely attributed to you.

I am further especially thankful to all past and current members of our group and of the institute who shared the time of my PhD with me. I will miss having you around.

Finally, I thank my family and friends for their love and support. I do not take this for granted. Thank you so very much.

Zurich, June 2018

1 Introduction

The human brain is a fantastic computer. All our actions and thoughts, from simple movements to brilliant ideas, emanate from computations in our brains. This reductionist view allows a profound insight: the brain serves as a proof of concept for what human-designed computers should be capable of. Yet it also shows us how poorly we understand information processing in the central nervous system right now.

1.1 The brain as an inherently probabilistic computer

If we want to understand computation in the brain, it may be instructive to compare the central nervous system to digital computers, which we actually understand. First, let