
A Logarithmic Floating-Point Multiplier for the Efficient Training of Neural Networks

Zijing Niu (University of Alberta, Edmonton, AB, Canada)
Honglan Jiang (Tsinghua University, Beijing, China)
Mohammad Saeed Ansari (University of Alberta, Edmonton, AB, Canada)
Bruce F. Cockburn (University of Alberta, Edmonton, AB, Canada)
Leibo Liu (Tsinghua University, Beijing, China)
Jie Han (University of Alberta, Edmonton, AB, Canada)

ABSTRACT
The development of important applications of increasingly large neural networks (NNs) is spurring research that aims to increase the power efficiency of the arithmetic circuits that perform the huge amount of computation in NNs. The floating-point (FP) representation, with its large dynamic range, is usually used for training. In this paper, it is shown that the FP representation is naturally suited to the binary logarithm of numbers and thus favors a design based on logarithmic arithmetic. Specifically, we propose an efficient hardware implementation of logarithmic FP multiplication that uses simpler operations to replace complex multipliers for the training of NNs. This design produces a double-sided error distribution that mitigates the accumulative effect of errors in iterative operations, so it is up to 45% more accurate than a recent logarithmic FP design. The proposed multiplier also consumes up to 23.5× less energy and occupies up to 10.7× smaller area compared to exact FP multipliers. Benchmark NN applications, including a 922-neuron model for the MNIST dataset, show that the classification accuracy can be slightly improved using the proposed multiplier, while achieving up to 2.4× less energy and 2.8× smaller area with better performance.

CCS CONCEPTS
• Hardware → Combinational circuits; Application specific integrated circuits; • Computer systems organization → Neural networks.

KEYWORDS
Floating-Point Multiplier; Neural Network; Approximate Computing

ACM Reference Format:
Zijing Niu, Honglan Jiang, Mohammad Saeed Ansari, Bruce F. Cockburn, Leibo Liu, and Jie Han. 2021. A Logarithmic Floating-Point Multiplier for the Efficient Training of Neural Networks. In Proceedings of the Great Lakes Symposium on VLSI 2021 (GLSVLSI '21), June 22–25, 2021, Virtual Event, USA. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3453688.3461509

1 INTRODUCTION
Artificial neural networks (NNs) are computational models that possess attractive characteristics of the biological NNs of the brain. NNs have been widely explored and used in many areas such as signal processing, image analysis and medical diagnosis [13]. An artificial NN requires massive multiply-accumulate (MAC) arithmetic computations in both the training and inference phases. With the increasing size of NNs, the amount of MAC computation becomes a limiting factor. This challenge motivates the search for more efficient computation processes and hardware implementations.
Various methods have been explored for accelerating NNs. They include reducing redundancies in the network structure, optimizing gradient-based backpropagation, and decreasing the computational intensity of convolutions to improve training and inference [18][21]. Taking advantage of the error tolerance of NNs, approximate computing is a promising technique for improving the computational and energy efficiency of deep learning applications [7].
To ensure the accuracy of NN models, a floating-point (FP) representation is usually adopted in the training phase, as the wider range of representation leads to more accurate training. Since the FP MAC circuits, especially the multipliers, dominate power consumption and circuit area, the design of efficient FP multipliers has been extensively investigated [2, 15, 20]. Low-precision computation in training and inference has also recently been pursued. Significant progress has been made in exploiting reduced-precision integers for inference, while 8-bit FP numbers [19] and logarithmic 4-bit FP numbers [17] have been shown to be effective in the training of deep NNs. Hence, finding a balance between accuracy, speed and area in the implementation of NNs is a key challenge, as apparently no hardware has been proposed for these low-precision NNs.
In this work, we show that the standard IEEE 754 FP representation is naturally suited for logarithmic arithmetic. A logarithmic FP multiplier is designed using simple operators such as adders and multiplexers. Unlike other approximate FP multipliers, a double-sided error distribution is produced by the proposed design, which minimizes the accumulation of errors. The proposed design also improves the energy efficiency of training with only a slight loss of accuracy. In some cases, it even improves the accuracy of NNs compared to those using exact multipliers.
The rest of this paper is organized as follows: Section 2 describes the research motivation and related work. Section 3 introduces the logarithmic FP representation and formulation for multiplication. Section 4 presents the proposed multiplier with an accuracy and performance evaluation. Section 5 presents NN applications and experimental results. Finally, Section 6 concludes the paper.
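The claim that the IEEE 754 representation is "naturally suited" for logarithmic arithmetic rests on a well-known property of the format, not on anything specific to this paper. The minimal sketch below (ours, for illustration; the helper name float_bits and the use of Python's struct module are assumptions) reinterprets the raw bit pattern of a positive single-precision number as an integer and compares it with a scaled and biased binary logarithm of the value.

```python
import math
import struct

def float_bits(v: float) -> int:
    """Raw IEEE 754 single-precision bit pattern of v, as an unsigned integer."""
    return struct.unpack(">I", struct.pack(">f", v))[0]

# For a positive normal float N = 2^(E-127) * (1+x), the bit pattern read as an
# integer equals 2^23 * (E + x) = 2^23 * ((E - 127) + x + 127), which is close to
# 2^23 * (log2(N) + 127) because log2(1+x) ~ x for 0 <= x < 1.  Up to a scale and
# offset, the stored bit pattern is thus already an approximate binary logarithm,
# which is the property that favors a logarithm-based multiplier design.
for v in (1.0, 1.5, 3.14159, 1024.0, 6.02e23):
    approx_log2 = float_bits(v) / 2**23 - 127.0
    print(f"{v:12g}  log2 = {math.log2(v):10.6f}  bit-pattern estimate = {approx_log2:10.6f}")
```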
2 MOTIVATION AND RELATED WORK
The training phase, which plays a key role in achieving a high NN accuracy, requires more arithmetic computation than inference due to the use of the iterative gradient descent algorithm for updating the weights and biases. Since inference is less sensitive to precision reduction and involves simpler computations, fixed-point arithmetic units are commonly applied in the inference phase. Many approximate designs have been developed to perform fixed-point multiplication, such as truncated multipliers and logarithm-based multipliers [9]. However, with a wider range of representation, the FP arithmetic unit, especially the FP multiplier, permits greater training accuracy at the cost of higher power consumption and larger area.
Hence, hardware-efficient FP multipliers have mostly been explored for the training of NNs. A short-bit-length FP format is considered in [8] for the training of convolutional NNs. An approximate computing technique called Tunable Floating-Point (TFP) adjusts the precision of different operations to lower the power consumption [5]. Both of these proposals depend on reducing the bit width of the FP representation to increase efficiency, as do most other studies focused on bit-width scaling [5]. In [15], mantissa multiplication is converted to addition of the input operands; however, exact multiplication is still required when the error rate exceeds a pre-determined value. These methods do not completely eliminate the multiplication.
As an energy-efficient alternative to the conventional FP representation, logarithmic representations of FP numbers have been considered for the acceleration of NNs. For example, LogNet shows that logarithmic computation can enable more accurate encoding of weights and activations, resulting in higher classification accuracies at low resolutions [11]. A state-of-the-art 4-bit training strategy for deep NNs is based on the logarithmic radix-4 format [17]. Hence, efficient logarithm-based FP multipliers have become promising for the training of NNs.
Recently, a logarithmic approximate multiplier (LAM) was proposed to improve the power efficiency of NN training by implementing FP multiplication with fixed-point addition [4]. However, the LAM always underestimates the product, so approximation errors accumulate in the training process.
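To make the single-sided error behavior concrete, the sketch below implements a Mitchell-style logarithmic approximation of the mantissa product, which is the idea underlying log-based multipliers such as the LAM (the exact hardware in [4] differs; the function mitchell_mantissa_product is our illustrative stand-in). Over random operands, the approximation never overestimates the exact product, so the error is always on one side.

```python
import random

def mitchell_mantissa_product(x_a: float, x_b: float) -> float:
    """Approximate (1+x_a)*(1+x_b) using log2(1+x) ~= x (Mitchell's approximation).

    The mantissas are 'converted to the log domain' by dropping the nonlinear
    term, added, and converted back the same way.
    """
    s = x_a + x_b                       # approximate log2 of the mantissa product
    if s < 1.0:
        return 1.0 + s                  # approximate antilog, no carry into the exponent
    else:
        return 2.0 * (1.0 + (s - 1.0))  # carry into the exponent: product >= 2

random.seed(0)
worst = 0.0
for _ in range(100_000):
    x_a, x_b = random.random(), random.random()   # fractional mantissas in [0, 1)
    exact = (1.0 + x_a) * (1.0 + x_b)
    approx = mitchell_mantissa_product(x_a, x_b)
    assert approx <= exact + 1e-12                 # the approximation never overestimates
    worst = max(worst, (exact - approx) / exact)

print(f"worst-case relative underestimation: {worst:.3%}")   # about 11.1% for Mitchell
```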
3 LOGARITHMIC FLOATING-POINT REPRESENTATION AND FORMULATION FOR MULTIPLICATION

3.1 FP Representation and Multiplication
The IEEE 754 standard defines the most commonly used formats for representing FP numbers. The IEEE 754 FP formats contain a 1-bit sign S, a p-bit exponent E and a q-bit mantissa M. Fig. 1 shows the IEEE 754 representation of a single-precision FP number.

Figure 1: The IEEE-754 single-precision format.

An FP number N in base-2 scientific notation is expressed as:
\[
N = (-1)^{S} \cdot 2^{E - \mathrm{bias}} \cdot (1 + x), \tag{1}
\]
where S is either 0 for a positive number or 1 for a negative number. To ensure unsigned integers in the exponent field, a bias, such as 127 for the single-precision format, is added to the actual exponent value; E - bias denotes the actual exponent value of N. With the hidden '1', x is the fractional part of the FP number, represented by the mantissa M, and hence 0 <= x < 1. Note that X is used to denote the actual mantissa, 1 + x, in the following content.
In the IEEE 754 formats, the FP multiplication can be calculated by three processes: the XOR operation for the sign bits, the addition of the exponents, and the multiplication of the mantissa bits. Consider P = A x B, which is computed as follows:
\[
S_P = S_A \oplus S_B, \tag{2}
\]
\[
X_{AB} = (1 + x_A) \times (1 + x_B), \tag{3}
\]
\[
E_P = \begin{cases} E_A + E_B - \mathrm{bias}, & X_{AB} < 2, \\ E_A + E_B - \mathrm{bias} + 1, & \text{otherwise}, \end{cases} \tag{4}
\]
\[
X_P = \begin{cases} X_{AB}, & X_{AB} < 2, \\ X_{AB}/2, & \text{otherwise}, \end{cases} \tag{5}
\]
where the sign bit, exponent and mantissa of A, B and P are respectively denoted with the corresponding subscripts. The exponent and mantissa of the product P relate to the comparison of the obtained mantissa X_AB with 2. Note that (2) is valid for the sign computation of the proposed design as well, so it will not be discussed in the following content.
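As a behavioural illustration of the formulation in (1)–(5), the sketch below decodes two single-precision numbers into their sign, exponent and mantissa fields, forms the mantissa product X_AB, and applies the normalization of (4) and (5) before reassembling the result per (1). It assumes normal (non-zero, non-subnormal) inputs and ignores rounding and special values; the helpers decode and fp_multiply are ours, with field widths following Fig. 1.

```python
import struct

BIAS, MANT_BITS = 127, 23   # single-precision parameters

def decode(v: float):
    """Split a float32 into (S, E, X) per Eq. (1), where X = 1 + x; normals only."""
    bits = struct.unpack(">I", struct.pack(">f", v))[0]
    s = bits >> 31
    e = (bits >> MANT_BITS) & 0xFF
    x = (bits & 0x7FFFFF) / 2**MANT_BITS
    return s, e, 1.0 + x

def fp_multiply(a: float, b: float) -> float:
    s_a, e_a, X_a = decode(a)
    s_b, e_b, X_b = decode(b)

    s_p = s_a ^ s_b                      # Eq. (2): XOR of the sign bits
    X_ab = X_a * X_b                     # Eq. (3): mantissa product, in [1, 4)
    if X_ab < 2.0:                       # Eqs. (4) and (5): normalization
        e_p, X_p = e_a + e_b - BIAS, X_ab
    else:
        e_p, X_p = e_a + e_b - BIAS + 1, X_ab / 2.0

    return (-1.0) ** s_p * 2.0 ** (e_p - BIAS) * X_p   # reassemble per Eq. (1)

for a, b in [(1.5, -2.25), (3.75, 0.3125), (-0.5, -6.0)]:
    print(a, "*", b, "=", fp_multiply(a, b), "(exact:", a * b, ")")
```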
3.2 Logarithmic FP Representation