Methods for Building Network Models of Neural Circuits

Brian DePasquale

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy under the Executive Committee of the Graduate School of Arts and Sciences

COLUMBIA UNIVERSITY
2016

© 2016 Brian DePasquale
All Rights Reserved

ABSTRACT

Methods for Building Network Models of Neural Circuits
Brian DePasquale

Artificial recurrent neural networks (RNNs) are powerful models for understanding and modeling dynamic computation in neural circuits. As such, RNNs that have been constructed to perform tasks analogous to typical behaviors studied in systems neuroscience are useful tools for understanding the biophysical mechanisms that mediate those behaviors. There has been significant progress in recent years developing gradient-based learning methods to construct RNNs. However, the majority of this progress has been restricted to network models that transmit information through continuous state variables, since these methods require the input-output function of individual neuronal units to be differentiable. Overwhelmingly, biological neurons transmit information by discrete action potentials. Spiking model neurons are not differentiable, and thus gradient-based methods for training neural networks cannot be applied to them.

This work focuses on the development of supervised learning methods for RNNs that do not require the computation of derivatives. Because the methods we develop do not rely on the differentiability of the neural units, we can use them to construct realistic RNNs of spiking model neurons that perform a variety of benchmark tasks, and also to build networks trained directly from experimental data. Surprisingly, spiking networks trained with these non-gradient methods do not require significantly more neural units to perform tasks than their continuous-variable model counterparts.
The crux of the method draws a direct correspondence between the dynamical variables of more abstract continuous-variable RNNs and spiking network models. The relationship between these two commonly used model classes has historically been unclear and, by resolving many of these issues, we offer a perspective on the appropriate use and interpretation of continuous-variable models as they relate to understanding network computation in biological neural circuits.

Although the main advantage of these methods is their ability to construct realistic spiking network models, they can equally well be applied to continuous-variable network models. An example is the construction of continuous-variable RNNs that perform tasks with performance and computational cost competitive with those of traditional methods that compute derivatives, while outperforming previous non-gradient-based network training approaches.

Collectively, this thesis presents efficient methods for constructing realistic neural network models that can be used to understand computation in biological neural networks, and provides a unified perspective on how the dynamic quantities in these models relate to each other and to quantities that can be observed and extracted from experimental recordings of neurons.

Table of Contents

List of Figures

CHAPTER 1—Introduction
    Levels of abstraction in modeling and analysis
    Overview of dissertation
    Neural coding
        Firing rate codes
        Static population codes
        Dynamic representation and heterogeneity of response
        What are the continuous quantities of interest then?
    Artificial network models
        f-I curves and mean field theories of spiking neurons
        Firing-rate models
        Reinterpreting firing-rate models
    Building network models that perform tasks
        Our modeling goals
        Recurrently connected networks
        Continuous-variable models
        Spiking models
    Random connections
        Continuous-variable models
        Spiking models
    Training recurrent networks
        Training continuous-variable networks
        Training spiking networks
    Connecting continuous-variable and spiking models
        Hybrid firing-rate/spiking network
        The failure of reservoir approaches in LIF networks

CHAPTER 2—Building Functional Networks of Spiking Model Neurons
    Introduction
    Defining the input, output and network connections
    Driven networks
    Spike coding to improve accuracy
    Autonomous networks
    The connection to more general tasks
    Discussion
    Acknowledgments

CHAPTER 3—Using Firing-Rate Dynamics to Train Recurrent Networks of Spiking Model Neurons
    Introduction
    Results
        Network architecture and network training
        Using continuous-variable models to determine auxiliary target functions
        Examples of trained networks
        Generating EMG activity during reaching
    Discussion
    Acknowledgments

CHAPTER 4—Full-FORCE Learning in Continuous-Variable Networks
    Introduction
    Network model and learning
    FORCE learning
    Full-FORCE learning
    Input driven periodic task
    Singular values of J
    Comparing Full-FORCE networks to gradient-based networks
    Discussion
    Acknowledgments

CHAPTER 5—Conclusion
    Returning to the question of spikes
    Revising the interpretation of rate models
    Encouraging abstract thinking in data analysis
    The role of randomness
    Including additional biological realism in spiking networks

Bibliography

List of Figures

1.1 The variable nature of spiking and recovery of the firing rate
1.2 Population decoding in the motor system
1.3 Neural responses in motor cortex are heterogeneous
1.4 Neural tuning in motor cortex changes with time
1.5 In vivo-like and LIF f-I curves
1.6 Input dependent suppression of chaos
1.7 Irregular spiking in a balanced network
1.8 FORCE learning
1.9 Hybrid rate/spiking model
1.10 Intuition into reservoir methods, and why they fail in LIF networks
1.11 Training LIF networks with Fourier bases and chaotic rate networks
2.1 Autonomous and driven networks
2.2 Driven networks approximating a continuous target output
2.3 Two autonomous integrator networks
2.4 Autonomous networks solving a temporal XOR task
3.1 Network architectures
3.2 Oscillation task
3.3 XOR task
3.4 EMG task
3.5 EMG population dynamics
3.6 Oscillation task with constrained J
4.1 Network structure and the learning problem
4.2 Input, output and firing-rates
4.3 Performance of FORCE and Full-FORCE networks
4.4 Test error as a function of training time
4.5 Singular values of learned connectivity
4.6 Training time for Full-FORCE and gradient-based networks
4.7 Dynamics of a Full-FORCE trained network and a Hessian-Free trained network
4.8 Using Full-FORCE to learn a Hessian-Free network
5.1 Training a spiking network with targets from a back-propagation trained network

Acknowledgments

One of the most satisfying components of being a scientist
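The "FORCE learning" named in the contents above refers to a non-gradient, least-squares approach to training recurrent networks. As a rough illustration only, and not the thesis's implementation, the following sketch applies a recursive-least-squares (RLS) update to a linear readout of a driven rate network; all network sizes, gains, and the input/target signals below are placeholder choices, and the output feedback used in full FORCE training is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt, tau, g = 200, 3000, 0.01, 0.1, 1.5

J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random recurrent weights
u = rng.uniform(-1.0, 1.0, N)                     # input weights
w = np.zeros(N)                                   # trained linear readout
P = np.eye(N)                                     # running inverse of <r r^T>
x = 0.5 * rng.standard_normal(N)                  # network state

t_axis = np.arange(T) * dt
f_in = np.sin(2 * np.pi * t_axis)                 # periodic drive (placeholder)
f_out = np.cos(2 * np.pi * t_axis)                # target output (placeholder)

errs = []
for t in range(T):
    r = np.tanh(x)                                # firing rates
    x += (dt / tau) * (-x + J @ r + u * f_in[t])  # rate dynamics
    z = w @ r                                     # network output
    e = z - f_out[t]                              # error before the update
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)                       # RLS gain vector
    P -= np.outer(k, Pr)                          # update inverse correlation
    w -= e * k                                    # least-squares weight update
    errs.append(abs(e))

early, late = np.mean(errs[:300]), np.mean(errs[-300:])
```

The key point, in line with the abstract, is that no derivative of the unit nonlinearity is ever taken: the update only requires the rates `r`, so the same recipe remains applicable when the units are non-differentiable spiking models.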
