
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18)

Brain-inspired Balanced Tuning for Spiking Neural Networks

Tielin Zhang1,3, Yi Zeng1,2,3,4,5,*, Dongcheng Zhao1,2,3 and Bo Xu1,2,3,5
1 Institute of Automation, Chinese Academy of Sciences (CAS), China
2 University of Chinese Academy of Sciences, China
3 Research Center for Brain-inspired Intelligence, Institute of Automation, CAS, China
4 National Laboratory of Pattern Recognition, Institute of Automation, CAS, China
5 Center for Excellence in Brain Science and Intelligence Technology, CAS, China
{tielin.zhang, [email protected]}

* Tielin Zhang and Yi Zeng contributed equally to this article and should be considered as co-first authors.

Abstract

Due to the nature of Spiking Neural Networks (SNNs), it is challenging to train them with biologically plausible learning principles. Multi-layered SNNs are built from non-differentiable neurons and temporally-centric synapses, which makes them nearly impossible to tune directly by back propagation. Here we propose an alternative, biologically inspired balanced tuning approach to train SNNs. The approach rests on three main inspirations from the brain: firstly, a biological network is usually trained towards a state where the temporal updates of its variables (e.g. membrane potentials) reach equilibrium; secondly, specific proportions of excitatory and inhibitory neurons contribute to stable representations; thirdly, short-term plasticity (STP) is a general principle that keeps the input and output of synapses balanced, towards better learning convergence. With these inspirations, we train SNNs in three steps: firstly, the SNN model is trained with the three brain-inspired principles; then weakly supervised learning is used to tune the membrane potentials in the final layer for network classification; finally, the learned information is consolidated from membrane potentials into synaptic weights by Spike-Timing Dependent Plasticity (STDP). The proposed approach is verified on the MNIST hand-written digit recognition dataset, and the resulting performance (an accuracy of 98.64%) indicates that the idea of a balanced state can indeed improve the learning ability of SNNs, which shows the power of the proposed brain-inspired approach for tuning biologically plausible SNNs.
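The three-step procedure summarized above can be sketched, at a very high level, in the following Python pseudo-code. This is only an illustration of the order of operations, not the authors' implementation: the snn object, its methods (forward, apply_balance_rules, nudge_output_potentials, forward_with_spike_times, apply_stdp) and all parameters are hypothetical names introduced here for clarity.

    def balanced_tuning(snn, train_data, epochs=1, lr=0.01):
        # Step 1: unsupervised tuning towards a balanced network state, driven by
        # the three brain-inspired principles (membrane-potential equilibrium,
        # fixed excitatory/inhibitory proportions, short-term plasticity).
        for _ in range(epochs):
            for x, _ in train_data:
                snn.forward(x)               # propagate spikes layer by layer
                snn.apply_balance_rules()    # push state variables towards equilibrium

        # Step 2: weakly supervised learning that nudges the membrane potentials
        # of the final layer towards the target class.
        for x, label in train_data:
            snn.forward(x)
            snn.nudge_output_potentials(label, lr)

        # Step 3: consolidate the information carried by the output-layer membrane
        # potentials into the synaptic weights with an STDP rule.
        for x, _ in train_data:
            pre_spikes, post_spikes = snn.forward_with_spike_times(x)
            snn.apply_stdp(pre_spikes, post_spikes)
        return snn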
1 Introduction

Decoding the brain from both structural and functional perspectives has lasted for centuries. In this process, many inspirations from the brain have contributed to the research of Artificial Intelligence (AI). For example, the Hopfield network with recurrent connections is inspired by the hippocampus; the Hierarchical Temporal Memory (HTM) network with micro-column structures is inspired by the neocortex; the Convolutional Neural Network (CNN) with hierarchical perception is inspired by the primary visual cortex; and reinforcement learning with dynamic acquisition of online rules is inspired by the basal-ganglia-centric pathway.

Many Artificial Neural Network (ANN) models incorporate brain inspirations at different levels of detail, and they have shown their power on various tasks such as image captioning, language translation [LeCun et al., 2015] and the game of Go [Hassabis et al., 2017].

However, the back-propagation tuning methods used in ANNs still face challenges in preventing overfitting, improving transferability, and increasing convergence speed. The firing-rate neuron models in ANNs are also weak at processing temporal information, which makes it hard for them to achieve good self-stability. The principles of neurons, synapses, and networks in biological systems are far more complex and powerful than those used in ANNs [Hassabis et al., 2017]. It has been shown that even a single biological neuron with dendritic branches needs a three-layered ANN for a finer simulation [Häusser and Mel, 2003].

The intelligence of biological systems is built on multi-scale complexity, from the microscale of neurons and synapses to the macroscale of brain regions and their interactions. At the microscale, neurons in biological systems represent and process information by discrete action potentials (spikes). This raises the open question of how such discrete neural activities implement continuous functions, or, from a similar point of view, how networks of non-differentiable neurons can be successfully tuned by biological learning principles. Understanding these mechanisms of biological systems will give us hints towards new biologically plausible tuning methods [Abbott et al., 2016].

Compared to other neural network models, Spiking Neural Networks (SNNs) are generally more solid on biological plausibility. SNNs are considered to be the third generation of neural network models, and are powerful in processing both spatial and temporal information [Maass, 1997]. Neurons in SNNs communicate with each other by discontinuous spikes, which raises gaps between spikes and behaviors but also narrows down the multi-level integration challenge, since spikes can be regarded as naturally interactive signals. In SNNs, a neuron is not activated until its membrane potential reaches a threshold, which makes SNNs energy efficient.
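As an illustration of this thresholded, event-driven behaviour, the following self-contained snippet simulates a single leaky integrate-and-fire (LIF) neuron, the neuron model most commonly used in SNNs (and the one adopted by Zeng et al. [2017] discussed below). The time constant, threshold and input values are arbitrary choices for demonstration, not the parameters used in this paper.

    import numpy as np

    def simulate_lif(input_current, dt=1.0, tau_m=20.0,
                     v_rest=-65.0, v_reset=-65.0, v_thresh=-50.0, r_m=10.0):
        # Single LIF neuron driven by an input current; it stays silent until
        # its membrane potential crosses the firing threshold, then spikes and
        # is reset. Parameter values are illustrative only.
        v = v_rest
        spike_times, trace = [], []
        for step, i_ext in enumerate(input_current):
            # Leaky integration: tau_m * dV/dt = -(V - V_rest) + R_m * I
            v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
            if v >= v_thresh:            # threshold crossed: the neuron "activates"
                spike_times.append(step * dt)
                v = v_reset              # hard reset after the spike
            trace.append(v)
        return np.array(trace), spike_times

    # Example: a constant input drives the neuron into a regular spike train.
    trace, spikes = simulate_lif(np.full(200, 2.0))
    print(len(spikes), "spikes; first spike at t =", spikes[0], "ms")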
The diversity of neuron types (e.g. excitatory and inhibitory neurons) also enables SNNs to keep themselves balanced, which in turn helps SNNs learn efficiently and form specific functions. In addition, the different computing costs of neurons and synapses cause various kinds of time delays, which also contribute to the asynchronous computation of SNNs, since these delays open up a new temporal dimension and give SNNs better representational capacity. SNNs have been successfully applied to the XOR problem [Sporea and Grüning, 2013], visual pattern recognition [Diehl and Cook, 2015; Zeng et al., 2017], probabilistic inference [Soltani and Wang, 2010] and planning tasks [Rueckert et al., 2016].

Although SNNs show more biological plausibility than conventional ANNs, from a computational perspective the lack of efficient and biologically plausible learning methods in current SNN models limits their value both for understanding the nature of intelligence and for potential applications.

With respect to this, some efforts have been made to train such networks with biologically plausible principles. Long-Term Potentiation (LTP), Long-Term Depression (LTD), Short-Term Plasticity (STP), which includes Short-Term Facilitation (STF) and Short-Term Depression (STD), Hebbian learning, Spike-Timing Dependent Plasticity (STDP), lateral inhibition, synaptic scaling, synaptic redistribution, and many other brain-inspired learning principles from biological nervous systems have been proposed and applied to the training of SNNs [Abraham and Bear, 1996]. Nevertheless, there are still gaps for SNNs in specific applications when compared with ANN models. More efficient and comprehensive learning frameworks for SNNs need to be proposed and applied.

In this paper, we propose brain-inspired balanced tuning for SNNs (Balanced SNN for short). We will tune the SNNs based on three inspirations from the brain: Firstly, the bi-

2 Related Works

Zenke et al. showed that the interaction of Hebbian homosynaptic plasticity with rapid non-Hebbian heterosynaptic plasticity is sufficient for memory formation, and that a memory can then be recalled after a brief stimulation of a subset of assembly neurons in a spiking recurrent network model [Zenke et al., 2015].

Alemi et al. proposed a local learning rule, supported by the theory of efficient balanced networks (EBN), for tuning recurrent spiking neural networks; an additional tight excitatory-inhibitory balance is maintained for spiking efficiency and robustness [Alemi et al., 2018].

Zeng et al. proposed seven biologically plausible rules to train multi-layer SNNs with Leaky Integrate-and-Fire (LIF) neurons. These include local principles such as dynamic neuron allocation, synapse formation and elimination, and various kinds of STDP, as well as more global learning principles such as the influence of background noise and the proportions of different kinds of neurons [Zeng et al., 2017]. It has been shown that the synaptic weights in the first few layers of such SNNs can be updated dynamically by STDP rules without any supervision, while the weights between the final two layers can be learned in a supervised manner by Hebb's law.

Diehl et al. trained an SNN with conductance-based synapses, STDP, lateral inhibition, and an adaptive spiking threshold, using an unsupervised learning scheme to train a two-layered SNN; the accuracy reached 95% on the MNIST benchmark [Diehl and Cook, 2015].

Some other efforts get around the direct training of SNNs by converting synaptic weights learned in ANNs into equivalent SNNs. Diehl et al. tried to convert deep ANNs into SNNs directly while keeping the performance loss of the conversion minimal; the key techniques include restricting rectified linear units (ReLUs) to zero bias and normalizing the weights into a linear range [Diehl et al., 2015]. Although this method achieves a performance of 98.48% on the 10-class hand-written digit MNIST classification task, the performance of the SNN is actually contributed by the back-propagation-trained ANN rather than by purely biologically plausible SNN learning.

Lee et al. argued that the discontinuities between spikes could