Towards Biologically Plausible Gradient Descent by Jordan Guerguiev

Towards biologically plausible gradient descent

by

Jordan Guerguiev

A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy
Graduate Department of Cell and Systems Biology
University of Toronto

© Copyright 2021 by Jordan Guerguiev

Abstract

Towards biologically plausible gradient descent
Jordan Guerguiev
Doctor of Philosophy
Graduate Department of Cell and Systems Biology
University of Toronto
2021

Synaptic plasticity is the primary physiological mechanism underlying learning in the brain. It is dependent on pre- and post-synaptic neuronal activities, and can be mediated by neuromodulatory signals. However, to date, computational models of learning that are based on pre- and post-synaptic activity and/or global neuromodulatory reward signals for plasticity have not been able to learn the complex tasks that animals are capable of. In the machine learning field, neural network models with many layers of computation trained using gradient descent have been highly successful in learning difficult tasks with near-human-level performance. To date, it remains unclear how gradient descent could be implemented in neural circuits with many layers of synaptic connections. The overarching goal of this thesis is to develop theories for how the unique properties of neurons can be leveraged to enable gradient descent in deep circuits and allow them to learn complex tasks.

The work in this thesis is divided into three projects. The first project demonstrates that networks of cortical pyramidal neurons, which have segregated apical dendrites and exhibit bursting behavior driven by dendritic plateau potentials, can in theory leverage these physiological properties to approximate gradient descent through multiple layers of synaptic connections. The second project presents a theory for how ensembles of pyramidal neurons can multiplex sensory and learning signals using bursting and short-term plasticity, in order to approximate gradient descent and learn complex visual recognition tasks that previous biologically inspired models have struggled with. The final project focuses on the fact that machine learning models implementing gradient descent assume symmetric feedforward and feedback weights, and presents a theory for how the spiking properties of neurons can enable them to align the feedforward and feedback weights in a network.

As a whole, this work aims to bridge the gap between powerful algorithms developed in the machine learning field and our current understanding of learning in the brain. To this end, we develop novel theories of how neuronal circuits in the brain can coordinate the learning of complex tasks, and present a number of experimental predictions that are fruitful avenues for future experimental research.

Acknowledgements

I would like to extend my deep appreciation to my supervisor, Blake Richards, for putting his trust in me as one of his first Ph.D. students, and for providing me with boundless knowledge, support and encouragement that propelled me through this work. In addition, I am grateful for the help of my collaborators on the work presented here: Timothy Lillicrap, Alexandre Payeur, Friedemann Zenke, Richard Naud and Konrad Kording. I would also like to thank Thomas Mesnard for his valuable help and advice along the way.
I would also like to thank my lab mates and friends I have made throughout the years, including Matt, Annik, Kirthana, Danny, Colleen and Luke, for sharing this experience with me and bringing me countless moments of comfort, joy and laughter. A special thanks to my committee members, Melanie Woodin, Frances Skinner and Douglas Tweed, for giving me an abundance of valuable advice and suggestions that have helped me improve as a scientist, and shaped this body of work into what it is today. I would like to thank Mao for her endless love, positivity and encouragement, for which I am forever grateful. Finally, I want to thank my parents, for the many sacrifices they have made to get me to this moment, and my sister, for always supporting me and being my mentor in life.

Contents

1 Introduction
  1.1 Research contributions and thesis outline
2 Background
  2.1 Learning in the brain
  2.2 Biological neurons
    2.2.1 Inhibitory interneurons
    2.2.2 Pyramidal neurons
  2.3 Synaptic plasticity
    2.3.1 Short-term plasticity
    2.3.2 Long-term plasticity
    2.3.3 Hebbian plasticity, neuromodulation and synaptic tagging
    2.3.4 Spike timing dependent plasticity
  2.4 Machine learning
    2.4.1 Artificial neural networks
    2.4.2 Gradient descent
    2.4.3 Backpropagation of error (backprop)
    2.4.4 Convolutional neural networks
  2.5 Weight symmetry
    2.5.1 Feedback alignment
    2.5.2 Kolen-Pollack algorithm
    2.5.3 Weight mirroring
  2.6 Related models of biologically plausible gradient descent
    2.6.1 Contrastive Hebbian learning
    2.6.2 Equilibrium propagation
    2.6.3 Difference target propagation
    2.6.4 Dendritic prediction learning
    2.6.5 Dendritic error backpropagation
    2.6.6 Updated random feedback
    2.6.7 Burst ensemble multiplexing
  2.7 Project synopses
    2.7.1 Project 1: Towards deep learning with segregated dendrites
    2.7.2 Project 2: Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits
    2.7.3 Project 3: Spike-based causal inference for weight alignment
3 Project 1: Towards deep learning with segregated dendrites
  3.1 Abstract
  3.2 Author contributions
  3.3 Introduction
  3.4 Results
    3.4.1 A network architecture with segregated dendritic compartments
    3.4.2 Calculating credit assignment signals with feedback driven plateau potentials
    3.4.3 Co-ordinating optimization across layers with feedback to apical dendrites
    3.4.4 Deep learning with segregated dendrites
    3.4.5 Coordinated local learning mimics backpropagation of error
    3.4.6 Conditions on feedback weights
    3.4.7 Learning with partial apical attenuation
  3.5 Discussion
  3.6 Methods
    3.6.1 Neuronal dynamics
    3.6.2 Plateau potentials
    3.6.3 Weight updates
    3.6.4 Multiple hidden layers
    3.6.5 Learning rate optimization
    3.6.6 Training paradigm
    3.6.7 Simulation details
  3.7 Acknowledgments
4 Project 2: Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits
  4.1 Abstract
  4.2 Author contributions
  4.3 Introduction
  4.4 Results
    4.4.1 A burst-dependent rule enables top-down steering of plasticity
    4.4.2 Dendrite-dependent bursting combined with short-term plasticity supports multiplexing of feedforward and feedback signals
    4.4.3 Combining a burst-dependent plasticity rule with short-term plasticity and apical dendrites can solve the credit assignment problem
    4.4.4 Burst-dependent plasticity promotes linearity and alignment of feedback
    4.4.5 Ensemble-level burst-dependent plasticity in deep networks can support good performance on standard machine learning benchmarks
  4.5 Discussion
  4.6 Methods
    4.6.1 Spiking model
    4.6.2 Deep network model for categorical learning
  4.7 Acknowledgments
  4.8 Code availability
5 Project 3: Spike-based causal inference for weight alignment
  5.1 Abstract
  5.2 Author contributions
  5.3 Introduction
  5.4 Related work
  5.5 Our contributions
  5.6 Methods
    5.6.1 General approach
    5.6.2 RDD feedback training phase
    5.6.3 LIF dynamics
    5.6.4 RDD algorithm ...
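As background for the weight symmetry problem raised in the abstract and surveyed in section 2.5, the following is a minimal NumPy sketch (not taken from the thesis) of feedback alignment, the simplest of the workarounds listed there (section 2.5.1): backpropagation would carry errors backward through the transpose of the forward weights, whereas feedback alignment substitutes a fixed random matrix B and lets the forward weights align to it over training. The toy two-layer network, the single training example, and all variable names are illustrative assumptions.

    # Illustrative sketch of feedback alignment (Lillicrap et al., 2016);
    # assumed toy setup, not the thesis's spike-based RDD method.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 4, 8, 2
    W1 = 0.1 * rng.standard_normal((n_hid, n_in))   # input -> hidden weights
    W2 = 0.1 * rng.standard_normal((n_out, n_hid))  # hidden -> output weights
    B  = 0.1 * rng.standard_normal((n_hid, n_out))  # fixed random feedback weights

    x = rng.standard_normal(n_in)        # one toy input
    target = rng.standard_normal(n_out)  # one toy target
    lr = 0.01

    for _ in range(2000):
        h = np.tanh(W1 @ x)              # forward pass, hidden layer
        y = W2 @ h                       # forward pass, output layer
        e = y - target                   # output error (squared-error gradient)
        # Backprop would compute: delta_h = (W2.T @ e) * (1 - h**2)
        delta_h = (B @ e) * (1 - h**2)   # feedback alignment: fixed B replaces W2.T
        W2 -= lr * np.outer(e, h)        # local, gradient-like weight updates
        W1 -= lr * np.outer(delta_h, x)

    # Over training, W2.T tends to align with B, so the random feedback pathway
    # delivers error signals that approximate the true gradient direction.
    cos = np.sum(W2.T * B) / (np.linalg.norm(W2.T) * np.linalg.norm(B))
    print(f"cosine alignment between W2.T and B: {cos:.2f}")

The thesis's third project goes beyond this fixed random feedback: rather than leaving B static, it learns feedback weights via spike-based causal inference (the RDD algorithm outlined in section 5.6) so that feedforward and feedback pathways come into alignment.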
