BSNN: Towards Faster and Better Conversion of Artificial Neural Networks to Spiking Neural Networks with Bistable Neurons

Yang Li, Yi Zeng, and Dongcheng Zhao

(Yang Li is with the Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing 100190, China, and with the School of Artificial Intelligence, University of Chinese Academy of Sciences (UCAS), Beijing 100190, China; e-mail: [email protected]. Dongcheng Zhao is with CASIA and UCAS, Beijing 100190, China. Yi Zeng is with CASIA, UCAS, and the Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; e-mail: [email protected]. The corresponding author is Yi Zeng. Manuscript received May 27, 2021; revised XX XX, 2021.)

Abstract—The spiking neural network (SNN) computes and communicates information through discrete binary events. It is considered more biologically plausible and more energy-efficient than the artificial neural network (ANN) on emerging neuromorphic hardware. However, due to its discontinuous and non-differentiable characteristics, training an SNN is a relatively challenging task. Recent work has made essential progress towards excellent performance by converting ANNs to SNNs. Yet, due to the difference in information processing, the converted deep SNN usually suffers serious performance loss and a large time delay. In this paper, we analyze the reasons for the performance loss and propose a novel bistable spiking neural network (BSNN) that addresses the problem of spikes of inactivated neurons (SIN) caused by phase lead and phase lag. Also, when ResNet-structured ANNs are converted, the information received by the output neurons is incomplete due to the rapid transmission along the shortcut path; we design synchronous neurons (SN) to efficiently improve performance in this case. Experimental results show that the proposed method needs only 1/4-1/10 of the time steps of previous work to achieve nearly lossless conversion. We demonstrate state-of-the-art ANN-SNN conversion for VGG16, ResNet20, and ResNet34 on challenging datasets including CIFAR-10 (95.16% top-1), CIFAR-100 (78.12% top-1), and ImageNet (72.64% top-1).

Index Terms—Spiking Neural Network, Bistability, Neuromorphic Computing, Neural Coding.

I. INTRODUCTION

Deep learning (the deep neural network, DNN) has made breakthroughs in many fields such as computer vision [1], [2], [3], natural language processing [4], [5], and speech processing [6], and has even surpassed humans in some specific fields. But many difficulties and challenges still need to be overcome in the development of deep learning [7], [8], [9], [10]. One concerning issue is that researchers pay more attention to higher computing power and better performance while ignoring the cost of energy consumption [11]. Taking natural language processing as an example, the power consumption and carbon emissions of training a Transformer model [12] are considerable. In recent years, the cost and environmental advantages of low-energy AI have therefore attracted the attention of researchers. Some design compression algorithms [13], [14] that enable artificial neural networks (ANNs) to significantly reduce their parameters and computation while maintaining the original performance. Another line of work focuses on computing architecture [15]: lower computational energy consumption can be achieved by designing hardware better suited to the operational characteristics of neural network models. But the problem of the high computational complexity of deep neural networks still exists. Therefore, the spiking neural network, known as the third-generation artificial neural network [16], has received more and more attention [17], [18], [19], [20], [21].

Spiking neural networks (SNNs) process discrete spike signals, rather than real values, through the dynamic characteristics of spiking neurons, and are considered to be both more biologically plausible and more energy-efficient [22], [23], [24]. For the former, the event-type information transmitted between neurons in an SNN is the spike, which is generated when the membrane potential reaches the neuron's firing threshold; its information processing is thus more in line with biological reality than that of traditional artificial neurons [25], [26], [27]. For the latter, information in an SNN is event-based: neurons that do not emit spikes do not participate in computation, and a neuron integrates information with accumulate (AC) operations, which are more energy-efficient than the multiply-accumulate (MAC) operations in an ANN [28], [29].
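To make the AC-versus-MAC point concrete, the following minimal sketch (our illustration, not from the paper; the layer size and the 10% firing rate are arbitrary assumptions) shows that with binary spike inputs, a dense matrix-vector product collapses into accumulating only the weight columns of the inputs that fired:

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(100, 256))            # 256 inputs -> 100 outputs
    spikes = (rng.random(256) < 0.1).astype(float)   # sparse binary spike vector

    # ANN-style evaluation: every input needs a multiply-accumulate (MAC),
    # 100 * 256 = 25,600 of them, whatever the input values are.
    mac_out = weights @ spikes

    # SNN-style evaluation: spikes are 0/1, so the multiply disappears and
    # only the weight columns of inputs that actually fired are accumulated.
    active = np.flatnonzero(spikes)
    ac_out = weights[:, active].sum(axis=1)

    assert np.allclose(mac_out, ac_out)
    print(f"MACs: {weights.size}, ACs: {weights.shape[0] * active.size}")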
Therefore, researchers put forward the concept of neuromorphic computing [30], [31], [32], which realizes the more biologically plausible SNN in hardware and shows significant advantages in fast information processing and energy saving. But due to the non-differentiable characteristics of the SNN, training it is still a challenging task: because the derivative of the spike output is unavailable, the common backpropagation algorithm cannot be applied directly, and how to obtain an SNN capable of effective inference has become a problem for researchers.

Taking inspiration from the brain, mechanisms such as Spike-Timing Dependent Plasticity (STDP) [33], [34], lateral inhibition [35], [36], Long-Term Potentiation (LTP) [37], and Long-Term Depression (LTD) [38] provide effective training methods; by properly integrating different neural mechanisms of the brain [39], an SNN can be trained effectively. Because most of these methods are unsupervised, researchers often add an SVM [40] or another classifier for supervised learning [18], [41], or learn directly in an unsupervised manner [19], [42]. All of these are of great importance for further enhancing the interpretability of SNNs and for exploring the working mechanism of the human brain. However, this kind of optimization, which only uses local neural activities, can hardly achieve high performance or be applied to complex tasks. Some researchers instead train SNNs with approximated (surrogate) gradient algorithms [43], [44], [45], [46], in which the backpropagation algorithm becomes applicable by making the neuron's spike firing process continuous. However, this approach suffers from convergence difficulties and requires a long training procedure for deep neural networks (DNNs), because it is hard to balance the firing rates across the whole network. Both kinds of methods perform poorly in large networks and on complex tasks. We believe that the inability to obtain an SNN with effective inference ability is a key issue in the development and application of SNNs.

Recently, the conversion method has been proposed to transfer the training result of an ANN to an SNN [47]. The ANN-SNN conversion method maps the parameters of a trained ANN with ReLU activations to an SNN with the same topology, as illustrated in Fig. 1, which makes it possible for the SNN to obtain extremely high performance at a very low computational cost. But direct mapping leads to severe performance degradation [48]. Diehl et al. [49] propose the data-based normalization method, which scales the parameters with the maximum activation value of each layer in the ANN, improving the performance of the converted SNN. Rueckauer et al. [50] and Han et al. [51] use integrate-and-fire (IF) neurons with soft reset to make the SNN achieve performance comparable to the ANN. Nonetheless, it usually takes more than 1000-4000 time steps to achieve good performance on complex datasets. And when converting ResNet [52] to SNN, researchers suffer from incomplete information at the output neurons, because spikes travel faster along the shortcut path than along the residual path.

[Fig. 1: Illustration of ANN-SNN conversion. The ANN computes ReLU activations a_i^l = max(0, Σ_j w_j a_j^{l-1}) and records the per-layer maximum λ^l = max{a^l}; the weights are rescaled as w' = w λ^{l-1}/λ^l; the SNN then integrates weighted input spikes with IF neurons, V_{i,t+1}^l = V_{i,t}^l + Σ_j w'_{ij} s_{j,t}^{l-1}, whose firing rates r_i = n_i/T approximate the corresponding ANN activations.]
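The following NumPy sketch renders the pipeline of Fig. 1 in code, assuming data-based weight normalization and IF neurons with soft reset as described above; all names are ours, and the threshold value is an illustrative choice:

    import numpy as np

    def normalize_weights(layer_weights, layer_max_acts):
        """Data-based normalization (Fig. 1): w'^l = w^l * lam^{l-1} / lam^l,
        where lam^l is the maximum ReLU activation of layer l observed on
        training data, with lam^0 = 1 for the input layer."""
        scaled, prev_lam = [], 1.0
        for w, lam in zip(layer_weights, layer_max_acts):
            scaled.append(w * prev_lam / lam)
            prev_lam = lam
        return scaled

    def if_layer_rates(w, in_spikes, T, v_th=1.0):
        """Simulate one layer of IF neurons with soft reset over T steps.
        `in_spikes` is a (T, n_in) binary spike train; returns the firing
        rates r_i = n_i / T that approximate the ANN's ReLU activations."""
        v = np.zeros(w.shape[0])
        n = np.zeros(w.shape[0])
        for t in range(T):
            v += w @ in_spikes[t]     # integrate weighted input spikes
            fired = v >= v_th
            n += fired
            v[fired] -= v_th          # soft reset: subtract, keep the residue
        return n / T

With enough time steps T, the returned rates converge towards the normalized ReLU activations, which is exactly the approximation the conversion relies on.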
To address these problems, we propose a novel bistable spiking neural network (BSNN) and design synchronous neurons (SN) so that information can synchronously reach the output neurons from the input neurons through the two paths of the residual block. The experimental results demonstrate that they help achieve nearly lossless conversion and state-of-the-art results on MNIST, CIFAR-10, CIFAR-100, and ImageNet while significantly reducing the time delay. Our contributions can be summarized as follows:

• We propose a novel BSNN that combines phase coding and a bistability mechanism. It effectively solves the problem of SIN and greatly reduces the performance loss and time delay of the converted SNN.
• We propose synchronous neurons to solve the problem that information in the spiking ResNet cannot synchronously reach the output neurons through its two paths.
• We achieve state-of-the-art results on the MNIST, CIFAR-10, CIFAR-100, and ImageNet datasets, verifying the effectiveness of the proposed method.

II. RELATED WORK

Many conversion methods have been proposed in order to obtain high-performance SNNs. According to the encoding method, they can be divided into three kinds.

Temporal Coding Based Conversion. Temporal coding uses neural firing time to encode the input into spike trains and approximate the activations in the ANN [58]. However, since neurons in the hidden layers need to accumulate membrane potential before they can spike, neurons in deep layers can hardly spike immediately even when the activation value equals the maximum, which makes it difficult for this method to convert deep ANNs. Zhang et al. [59] use ticking neurons to modify the method above so that information is transferred layer by layer. Nevertheless, this method is less robust and hard to use in models with complex network structures such as the residual block.

Rate Coding Based Conversion. Unlike temporal coding, the rate coding-based conversion method uses the firing rates of spiking neurons to approximate the activation values in the ANN [47]. Diehl et al. [49] propose data-based and model-based normalization, which use the maximum activation value of the neurons in each layer to normalize the weights. When disturbed by noise, however, the normalization parameter may become quite large, which makes the weights smaller and the time to the first spike longer. Researchers therefore propose to use the p-th largest activation value for the normalization, greatly improving robustness and reducing the time delay [50].
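As a small illustration of why the p-th largest value is more robust than the maximum (a toy example of ours; the 99.9th percentile mirrors the choice reported in [50]):

    import numpy as np

    def scale_factor(activations, percentile=99.9):
        """Pick the per-layer normalization constant. The maximum is brittle:
        a single outlier activation shrinks every weight and delays every
        spike. A high percentile (the p-th largest value) ignores outliers."""
        return np.percentile(np.asarray(activations).ravel(), percentile)

    # One outlier dominates the max but barely moves the percentile.
    acts = np.concatenate([np.random.rand(100000), [50.0]])
    print(np.max(acts))          # ~50.0 -> tiny weights, long time to spike
    print(scale_factor(acts))    # ~1.0  -> robust scaling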
