YodaNN¹: An Architecture for Ultra-Low Power Binary-Weight CNN Acceleration

Renzo Andri∗, Lukas Cavigelli∗, Davide Rossi† and Luca Benini∗†
∗Integrated Systems Laboratory, ETH Zürich, Zurich, Switzerland
†Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy

arXiv:1606.05487v4 [cs.AR] 24 Feb 2017

Abstract—Convolutional Neural Networks (CNNs) have revolutionized the world of computer vision over the last few years, pushing image classification beyond human accuracy. The computational effort of today's CNNs requires power-hungry parallel processors or GP-GPUs. Recent developments in CNN accelerators for system-on-chip integration have reduced energy consumption significantly. Unfortunately, even these highly optimized devices are above the power envelope imposed by mobile and deeply embedded applications and face hard limitations caused by CNN weight I/O and storage. This prevents the adoption of CNNs in future ultra-low power Internet of Things end-nodes for near-sensor analytics. Recent algorithmic and theoretical advancements enable competitive classification accuracy even when limiting CNNs to binary (+1/−1) weights during training. These new findings bring major optimization opportunities in the arithmetic core by removing the need for expensive multiplications, as well as reducing I/O bandwidth and storage. In this work, we present an accelerator optimized for binary-weight CNNs that achieves 1.5 TOp/s at 1.2 V on a core area of only 1.33 MGE (Million Gate Equivalent) or 1.9 mm², and with a power dissipation of 895 µW in UMC 65 nm technology at 0.6 V. Our accelerator significantly outperforms the state-of-the-art in terms of energy and area efficiency, achieving 61.2 TOp/s/W at 0.6 V and 1.1 TOp/s/MGE at 1.2 V, respectively.

Index Terms—Convolutional Neural Networks, Hardware Accelerator, Binary Weights, Internet of Things, ASIC

I. INTRODUCTION

Convolutional Neural Networks (CNNs) have been achieving outstanding results in several complex tasks such as image recognition [2–4], face detection [5], speech recognition [6], text understanding [7, 8] and artificial intelligence in games [9, 10]. Although optimized software implementations have been largely deployed on mainstream systems [11], CPUs [12] and GPUs [13] to deal with several state-of-the-art CNNs, these platforms are obviously not able to fulfill the power constraints imposed by mobile and Internet of Things (IoT) end-node devices. On the other hand, sourcing out all CNN computation from IoT end-nodes to data servers is extremely challenging and power consuming, due to the large communication bandwidth required to transmit the data streams. This prompts for the need of specialized architectures to achieve higher performance at lower power within the end-nodes of the IoT.

A few research groups exploited the customization paradigm by designing highly specialized hardware to enable CNN computation in the domain of embedded applications. Several approaches leverage FPGAs to maintain post-fabrication programmability, while providing a significant boost in terms of performance and energy efficiency [14]. However, FPGAs are still two orders of magnitude less energy-efficient than ASICs [15]. Moreover, CNNs are based on a very reduced set of computational kernels (i.e. convolution, activation, pooling), but they can be used to cover several application domains (e.g., audio, video, biosignals) by simply changing weights and network topology, relaxing the issues with non-recurring engineering which are typical in ASIC design.

Among CNN ASIC implementations, the precision of arithmetic operands plays a crucial role in energy efficiency. Several reduced-precision implementations have been proposed recently, relying on 16-bit, 12-bit or 10-bit accuracy for both operands and weights [15–19], exploiting the intrinsic resiliency of CNNs to quantization and approximation [20, 21]. In this work, we take a significant step forward in energy efficiency by exploiting recent research on binary-weight CNNs [22, 23]. BinaryConnect is a method which trains a deep neural network with binary weights during the forward and backward propagation, while retaining the precision of the stored weights for gradient descent optimization. This approach has the potential to bring great benefits to CNN hardware implementation by enabling the replacement of multipliers with much simpler complement operations and multiplexers, and by drastically reducing weight storage requirements. Interestingly, binary-weight networks lead to only small accuracy losses on several well-known CNN benchmarks [24, 25].

In this paper, we introduce the first optimized hardware design implementing a flexible, energy-efficient and performance-scalable convolutional accelerator supporting binary-weight CNNs. We demonstrate that this approach improves the energy efficiency of the digital core of the accelerator by 5.1×, and the throughput by 1.3×, with respect to a baseline architecture based on 12-bit MAC units operating at a nominal supply voltage of 1.2 V. To extend the performance scalability of the device, we implement a latch-based standard cell memory (SCM) architecture for on-chip data storage. Although SCMs are more expensive than SRAMs in terms of area, they provide better voltage scalability and energy efficiency [26], extending the operating range of the device in the low-voltage region. This further improves the energy efficiency of the engine by 6× at 0.6 V, with respect to the nominal operating voltage of 1.2 V, and leads to an improvement in energy efficiency by 11.6× with respect to a fixed-point implementation with SRAMs at its best energy point of 0.8 V.

¹YodaNN is named after the Jedi master known from Star Wars – "Small in size but wise and powerful" [1].
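The gap between the two datapaths can be illustrated functionally. The following is a minimal sketch, not the authors' RTL: the function names and the Q-format choice (12-bit words with 7 fractional bits) are illustrative assumptions, but they show how a binary weight collapses the multiplier into a multiplexer selecting between an operand and its two's complement.

```python
def mac_fixed_point(acc, a, w, frac_bits=7):
    """Baseline datapath: fixed-point multiply-accumulate.
    a and w are integers in Q-format with `frac_bits` fractional bits;
    a full hardware multiplier is required for a * w."""
    return acc + ((a * w) >> frac_bits)

def mac_binary_weight(acc, a, w_sign):
    """Binary-weight datapath: the weight is only a sign in {-1, +1},
    so the multiplier reduces to a two's-complement negation
    selected by a multiplexer."""
    return acc + (a if w_sign == +1 else -a)

# Toy check: accumulate three inputs with binary weights +1, -1, +1.
acc = 0
for a, w in [(3, +1), (5, -1), (2, +1)]:
    acc = mac_binary_weight(acc, a, w)
print(acc)  # 3 - 5 + 2 = 0
```

With binary weights, the per-weight storage also drops from 12 bits to a single bit, which is where the I/O-bandwidth and weight-storage savings discussed above come from.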
To improve the flexibility of the convolutional engine, we implement support for several kernel sizes (1×1 – 7×7) and for per-channel scaling and biasing, making it suitable for implementing a large variety of CNNs. The proposed accelerator surpasses state-of-the-art CNN accelerators by 2.7× in peak performance with 1.5 TOp/s [27], by 10× in peak area efficiency with 1.1 TOp/s/MGE [28], and by 32× in peak energy efficiency with 61.2 TOp/s/W [28].

II. RELATED WORK

Convolutional neural networks are reaching record-breaking accuracy in image recognition on small data sets like MNIST, SVHN and CIFAR-10, with accuracy rates of 99.79%, 98.31% and 96.53% [29–31]. Recent CNN architectures also perform very well on large and complex data sets such as ImageNet: GoogLeNet reached 93.33% and ResNet achieved a higher recognition rate (96.43%) than humans (94.9%). As the trend goes to deeper CNNs (e.g. ResNet uses from 18 up to 1001 layers, VGG OxfordNet uses 19 layers [32]), both the memory and the computational complexity increase. Although CNN-based classification is not problematic when running on mainstream processors or large GPU clusters with kW-level power budgets, IoT edge-node applications have much tighter, mW-level power budgets. This "CNN power wall" led to the development of many approaches to improve CNN energy efficiency, both at the algorithmic and at the hardware level.

A. Algorithmic Approaches

Several approaches reduce the arithmetic complexity of CNNs by using fixed-point operations and minimizing the word widths. Software frameworks such as Ristretto focus on CNN quantization after training. For LeNet and Cifar-10, the additional error introduced by this quantization is less than 0.3% and 2%, respectively, even when the word width has been constrained to 4 bit [21]. It was shown that state-of-the-art results can be achieved by quantizing the weights and activations of each layer separately [33], while lowering precision down to 2 bit (−1, 0, +1) and increasing the network size [20].

In a follow-up work [25], the same authors propose to quantize the inputs of the layers in the backward propagation to 3 or 4 bits, and to replace the multiplications with shift-add operations. The resulting CNN outperforms even the full-precision network in terms of accuracy. This can be attributed to the regularization effect caused by restricting the number of possible values of the weights.

Following this trend, Courbariaux et al. [24] and Rastegari et al. [23] also consider the binarization of the layer inputs, such that the proposed algorithms can be implemented using only XNOR operations. In these works, two approaches are presented:
i) Binary-weight networks (BWN), which scale the output channels by the mean of the real-valued weights. With this approach they reach similar accuracy on the ImageNet data set when using AlexNet [2].
ii) XNOR-Networks, where they also binarize the input images. This approach achieves an accuracy of 69.2% in the Top-5 measure, compared to the 80.2% of setup i).

Based on this work, Wu [35] improved the accuracy up to 81% using log-loss with soft-max pooling, and was able to outperform even the accuracy results of AlexNet. However, the XNOR-based approach is not yet mature, since it has only been proven on a few networks by a small research community. Similarly to scaling by batch normalization, Merolla et al. evaluated different weight projection functions, where the accuracy could even be improved from 89% to 92% on Cifar-10 when binarizing weights and scaling every output channel by the maximum absolute value of all contained filters [36].

In this work, we focus on implementing a CNN inference accelerator for neural networks supporting per-channel scaling and biasing, and implementing binary weights and fixed-point activations. Exploiting this approach, the reduction of complexity is promising in terms of energy and speed, while near state-of-the-art classification accuracy can be achieved with appropriately trained binary networks [22, 23].

B. CNN Acceleration Hardware
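The hardware designs surveyed below build on binarization schemes like those of Sec. II-A. As a minimal functional reference point, the BWN-style binarization with per-channel scaling [23] can be sketched as follows; this is an illustrative sketch, not an implementation from the paper, and all names are assumptions:

```python
import numpy as np

def bwn_binarize(weights):
    """BWN-style binarization [23]: per output channel, keep only the
    sign of each weight plus one real scale alpha = mean(|W|)."""
    flat = weights.reshape(weights.shape[0], -1)
    alpha = np.abs(flat).mean(axis=1)            # one scale per channel
    w_bin = np.where(weights >= 0, 1.0, -1.0)    # weights in {-1, +1}
    return w_bin, alpha

# alpha * (binary dot product) approximates the real-valued dot product,
# so the inner loop needs only sign-selected adds plus one multiply
# per output channel.
rng = np.random.default_rng(1)
w = rng.normal(size=(2, 8))   # 2 output channels, 8 weights each
x = rng.normal(size=8)
w_bin, alpha = bwn_binarize(w)
approx = alpha * (w_bin @ x)
exact = w @ x
```

The single per-channel multiply is exactly the kind of scaling-and-biasing support the accelerator described above provides in hardware.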
