Deep Spiking Neural Network with Neural Oscillation and Spike-Phase Information

The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)

Yi Chen,1 Hong Qu,1 Malu Zhang,1,2 Yuchen Wang1
1School of Computer Science and Engineering, University of Electronic Science and Technology of China, China
2Department of Electrical and Computer Engineering, National University of Singapore, Singapore
{chenyi, [email protected], [email protected], [email protected]

Abstract

Deep spiking neural networks (DSNNs) are a promising computational model towards artificial intelligence. They benefit from both DNNs and SNNs: a hierarchical structure to extract multiple levels of abstraction, and an event-driven computational manner that provides ultra-low-power neuromorphic implementation. However, how to efficiently train DSNNs remains an open question, because the non-differentiable spike function prevents the traditional back-propagation (BP) learning algorithm from being applied directly to DSNNs. Here, inspired by findings from biological neural networks, we address this problem by introducing neural oscillation and spike-phase information into DSNNs. Specifically, we propose an Oscillation Postsynaptic Potential (Os-PSP) and a phase-locking activation function, and further put forward a new spiking neuron model, namely the Resonate Spiking Neuron (RSN). Based on the RSN, we propose a Spike-Level-Dependent Back-Propagation (SLDBP) learning algorithm for DSNNs. Experimental results show that the proposed learning algorithm resolves the problems caused by the incompatibility between the BP learning algorithm and SNNs, and achieves state-of-the-art performance among single-spike-based learning algorithms. This work investigates the contribution of introducing biologically inspired mechanisms, such as neural oscillation and spike-phase information, into DSNNs, and provides a new perspective for designing future DSNNs.

Introduction

Deep neural networks (DNNs) have achieved great success in computer vision (He et al. 2016), natural language processing (Young et al. 2018; Ghaeini et al. 2018; Lee and Li 2020), speech processing (Abdel-Hamid et al. 2014; Lam et al. 2019) and other fields (Liu et al. 2018; Shao, Liew, and Wang 2020). The hierarchical organization inspired by the biological brain enables DNNs to extract multiple levels of features, and the back-propagation (BP) learning algorithm offers an efficient global learning method. However, training DNNs relies heavily on large-scale computing resources (e.g., GPUs and server clusters). These high hardware requirements limit the deployment of DNNs on power-critical computation platforms, such as edge devices.

Spiking neural networks (SNNs) simulate the biological brain at the neuronal level and hold the potential of ultra-low-power neuromorphic implementation through event-driven computation. Besides, spiking neurons model the signal processing of biological neurons in more detail, so SNNs can capture more types of information, such as neural oscillation and spike-phase information. The study of deep spiking neural networks (DSNNs) benefits from both DNNs and SNNs and is a promising way towards human-level artificial intelligence. However, how to efficiently train DSNNs remains an open question, as the binary nature of spike trains and the non-differentiable spike function limit the use of the BP learning algorithm. To resolve this problem, many learning algorithms and training strategies have been proposed. Depending on their training mechanisms, they can be divided into three categories.

The first category adopts the ANN-SNN transformation strategy, which means that the parameters of a pre-trained ANN are transplanted to an SNN with a similar structure (Esser et al. 2015; Hunsberger and Eliasmith 2015; Esser et al. 2016; Peter et al. 2013; Liu, Chen, and Furber 2017; Diehl et al. 2015, 2016; Bodo et al. 2017; Rueckauer and Liu 2018; Han, Srinivasan, and Roy 2020). The original intention of this transformation strategy is to make full use of state-of-the-art ANNs and give the converted SNN comparable performance. However, these attempts do not fully meet this expectation, as the loss of accuracy caused by approximation in the transformation process seems inevitable. Although some mitigation strategies have been proposed, such as weight/activation normalization (Diehl et al. 2015, 2016; Bodo et al. 2017) and additional noise (Peter et al. 2013; Liu, Chen, and Furber 2017), the problem is not completely resolved. Furthermore, in order to better match the activations of the ANN, the converted SNN normally uses spike-rate coding, which means that it needs to generate a relatively large number of spikes, resulting in obscure timing information of spike activity and huge energy consumption (Mostafa 2017).
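The weight/activation normalization mentioned above is easy to state concretely. The sketch below is a minimal, illustrative version of data-based weight normalization for a plain ReLU multilayer perceptron before conversion, in the spirit of Diehl et al. (2015); the function name, the list-of-matrices parameter format, and the calibration procedure are assumptions made for illustration, not the exact recipe of any cited work.

```python
import numpy as np

def normalize_for_conversion(weights, biases, calib_inputs):
    """Data-based weight normalization before ANN-SNN conversion.

    Rescales each ReLU layer so that its largest activation on a
    calibration set equals 1.  The relative ordering of the outputs is
    preserved (each layer is only divided by a positive constant), but
    activations now map onto feasible firing rates in [0, 1].
    """
    # 1) Forward pass through the original ANN to record per-layer maxima.
    scales = []
    a = calib_inputs
    for W, b in zip(weights, biases):
        a = np.maximum(a @ W + b, 0.0)              # ReLU activations
        scales.append(max(float(a.max()), 1e-12))   # guard against zero
    # 2) Rescale parameters: W_l <- W_l * s_{l-1} / s_l,  b_l <- b_l / s_l.
    new_w, new_b, prev = [], [], 1.0
    for W, b, s in zip(weights, biases, scales):
        new_w.append(W * prev / s)
        new_b.append(b / s)
        prev = s
    return new_w, new_b
```

Because every activation is squeezed into [0, 1] on the calibration data, it can be read as a firing rate of an integrate-and-fire neuron, which is exactly why conversion-based SNNs lean on rate coding and need many spikes per inference.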
The second category is local learning based on spike-timing-dependent plasticity (STDP), an unsupervised form of synaptic plasticity observed in different brain areas. This category updates DSNNs layer by layer, so the derivative transport between layers required by BP is not needed. Besides, STDP updates synaptic weights by considering the time difference between presynaptic and postsynaptic spikes, so the derivative of the spike function is not needed either. Although STDP is capable of training spiking neurons to detect coincident spike patterns, like other unsupervised learning algorithms it has difficulty recognizing rare but diagnostic features. To enhance the neuron's decision-making ability, various STDP variants have been proposed, such as neuron competition (Kheradpisheh et al. 2018), lateral inhibition, and reward systems (Mozafari et al. 2018). However, the improvements from these methods are limited, and most of them need an external non-spiking encoding or readout layer, e.g., a support vector machine. Thus, these methods cannot take full advantage of SNNs.
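The "time difference between presynaptic and postsynaptic spikes" that drives STDP can be summarized in a few lines. The following is a sketch of the classical pair-based STDP window; it is a generic textbook form, and the amplitudes and time constants are illustrative defaults rather than parameters taken from the cited works.

```python
import numpy as np

def stdp_update(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: weight change for one pre/post spike pair (times in ms).

    If the presynaptic spike precedes the postsynaptic one (dt > 0), the
    synapse is potentiated; otherwise it is depressed.  No error signal and
    no derivative of the spike function are involved.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)     # long-term potentiation
    return -a_minus * np.exp(dt / tau_minus)       # long-term depression

# Example: a presynaptic spike arriving 5 ms before the postsynaptic spike
# strengthens the synapse; reversing the order would weaken it.
dw = stdp_update(t_pre=10.0, t_post=15.0)
```

Because the update depends only on locally available spike times, the rule trains each layer independently, which is why it sidesteps the derivative-transport problem of BP but also why it struggles to credit rare, class-diagnostic features.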
Different from the two categories mentioned above, the third category of learning algorithms directly trains deep spiking neural networks. Typical examples are SpikeProp (Bohte, Kok, and La Poutre 2002) and its various improvements (Shrestha and Song 2017a,b; Xu et al. 2013). Under a linearity assumption, the problem of the non-differentiable spike function is addressed. However, these methods still suffer from the problems of gradient explosion and dead neurons. These two problems have been partially addressed by adding extra strategies during the training process, such as constraints on weights and gradient normalization (Mostafa 2017). In addition, some works resolve these problems by applying a non-leaky PSP function, such as S4NN (Kheradpisheh and Masquelier 2020) and STBDP (Zhang et al. 2020b), or by using multi-bit spikes (Voelker, Rasmussen, and Eliasmith 2020; Xu et al. 2020). Another path within the third category is surrogate derivatives, with typical examples being STBP (Wu et al. 2018, 2019; Zhang and Li 2020) and SLAYER (Shrestha and Orchard 2018; Neftci, Mostafa, and Zenke 2019). These learning algorithms show competitive results compared with their ANN counterparts. However, their computational and memory demands are high, and the advantage of event-driven learning is not fully utilized, since they need to store the state of the neurons at all times in order to learn.

Recent neuroscience research has discovered that neural oscillation and spike-phase information play an important role in biological information processing. Moreover, directly trained SNNs can employ more efficient coding schemes, like temporal coding. Hence, directly training algorithms hold the potential of providing an ultra-low-power neuromorphic implementation. With these considerations, we focus on developing a new spiking neuron model that solves the problems described above with neural oscillation and spike-phase information. Our main contributions are summarized as follows:

1) We analyze the issues that limit the usage of the BP learning algorithm for training DSNNs, including the non-differentiable spike function, gradient explosion, and dead neurons during training. Based on these understandings, we propose the Oscillation Postsynaptic Potential (Os-PSP) and a phase-locking activation function for spiking neurons, which solve these problems with a single spike (a toy sketch of this idea is given after the Problem Description below).

2) Based on the proposed Os-PSP and phase-locking activation function, we propose a Spike-Level-Dependent Back-Propagation (SLDBP) learning algorithm for DSNNs. We then explain how the proposed algorithm solves the three problems mentioned above.

3) We design the XOR problem in both absolute-oriented and relative-oriented forms to test the ability of a single-layer resonate SNN to deal with nonlinear problems. We then design a resonate DSNN and apply it to the MNIST and CIFAR-10 datasets to test the model's ability to process real data. Experimental results demonstrate that the proposed learning algorithm achieves state-of-the-art performance among spike-time-based learning algorithms for SNNs. This work explores the relationship between neural oscillations and phase-locking from another perspective, providing a new direction for brain-like computing systems.

Problem Description

The great success of DNNs in multiple fields is largely due to the BP algorithm, which arouses interest in applying the BP algorithm to DSNNs. However, the mechanisms of information encoding and processing differ between the typical artificial neurons of DNNs and the spiking neurons of SNNs. Due to the non-differentiable spike function, dead neurons, and gradient explosion, the BP algorithm cannot be applied to DSNNs directly.
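To see why the non-differentiable spike function is the central obstacle, note that spike generation is a hard threshold whose derivative is zero almost everywhere and undefined at the threshold itself. The surrogate-derivative methods cited above (e.g., STBP and SLAYER) keep the hard threshold in the forward pass but substitute a smooth stand-in for its derivative in the backward pass. The sketch below illustrates that generic idea; it is not the SLDBP algorithm proposed in this paper, and the sigmoid shape and slope are arbitrary choices.

```python
import numpy as np

def spike_forward(v, v_th=1.0):
    """Forward pass: Heaviside threshold on the membrane potential.
    Its true derivative is 0 everywhere except at v == v_th, where it is
    undefined, so plain BP has nothing useful to propagate."""
    return (v >= v_th).astype(np.float32)

def spike_surrogate_grad(v, v_th=1.0, k=5.0):
    """Backward pass: derivative of a steep sigmoid centred on the
    threshold, used in place of the true derivative.  It is non-zero in a
    band around v_th, so error signals can flow through spiking layers."""
    s = 1.0 / (1.0 + np.exp(-k * (v - v_th)))
    return k * s * (1.0 - s)

# A neuron just below threshold emits no spike in the forward pass yet
# still receives a sizeable surrogate gradient in the backward pass,
# which also mitigates the dead-neuron problem.
v = np.array([0.2, 0.95, 1.3], dtype=np.float32)
spikes = spike_forward(v)          # -> [0., 0., 1.]
grads = spike_surrogate_grad(v)    # largest near v_th = 1.0
```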

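This excerpt does not give the mathematical form of the Os-PSP, the phase-locking activation function, or the Resonate Spiking Neuron, so the toy below should be read only as an assumed illustration of the general idea the contributions describe: each input spike starts a fixed-frequency oscillation whose phase is locked to the spike time, the neuron sums these oscillations, and it emits a single output spike at the phase where the sum peaks. All names, formulas, and parameters here are hypothetical.

```python
import numpy as np

# Hypothetical constants (not taken from the paper): one oscillation
# period is discretised into PHASES time steps.
PHASES = 100
OMEGA = 2.0 * np.pi / PHASES      # angular frequency of the assumed oscillation

def os_psp(t, t_in, w):
    """Assumed oscillation-shaped PSP: a cosine whose phase is locked to the
    input spike time t_in and whose amplitude is the synaptic weight w."""
    return w * np.cos(OMEGA * (t - t_in))

def resonate_neuron(input_times, weights):
    """Toy resonate-style neuron: superpose the oscillating PSPs over one
    period and fire a single spike at the phase where the sum peaks."""
    t = np.arange(PHASES)
    v = np.zeros(PHASES)
    for t_in, w in zip(input_times, weights):
        v += os_psp(t, t_in, w)                   # sum the oscillations
    return int(np.argmax(v))                      # output spike phase

# Two equally weighted inputs a quarter period apart give a summed
# oscillation whose peak lies between their phases, so shifting either
# input time shifts the output spike phase: a single spike carries
# analogue phase information rather than a firing rate.
out_phase = resonate_neuron(input_times=[0, 25], weights=[1.0, 1.0])
```

In continuous time, the phase of the summed oscillation's peak varies smoothly with the input times and weights, which is the kind of property that makes gradient-based, single-spike learning tractable.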