Design of Fully Analogue Artificial Neural Network with Learning Based on Backpropagation

Filip PAULU, Jiri HOSPODKA

Dept. of Circuit Theory, Czech Technical University in Prague, Technická 2, 160 00 Prague, Czech Republic

{paulufil, hospodka}@fel.cvut.cz

Submitted July 7, 2020 / Accepted April 7, 2021

RADIOENGINEERING, VOL. 30, NO. 2, JUNE 2021. DOI: 10.13164/re.2021.0357

Abstract. A fully analogue implementation of training algorithms would speed up the training of artificial neural networks. A common choice for training feedforward networks is backpropagation with stochastic gradient descent. However, a circuit design that would enable its analogue implementation is still an open problem. This paper proposes a fully analogue training circuit block concept based on backpropagation for neural networks without clock control. Capacitors are used as memory elements in the presented example. The XOR problem is used as an example for concept-level system validation.

Keywords

Fully analogue, analogue circuit, neural network, neuromorphic, backpropagation

1. Introduction

Edge computation architectures minimise the movement of data by performing computations close to the sensors. The significant computing power available in this domain motivates the design of hardware for near-sensor data analysis that is not affected by the von Neumann bottleneck [1–4]. This field has also been influenced by the recent advancement of Internet-of-Things (IoT) devices [5–7].

Neuromorphic computing draws inspiration from Artificial Neural Networks (ANNs), where learning is done in the analogue or mixed-signal domain [1, 3, 8, 9]. Training these networks requires a learning algorithm such as backpropagation with gradient descent [7, 10–13] in order to implement hardware for real-time training [2, 7, 14, 15]. Despite the widespread use of digital backpropagation for the training of ANNs, its analogue implementation is still an open problem [7], [10].

While hardware-based ANNs currently exist, their training has to be performed with specialised software and FPGA-based units. An analogue implementation of backpropagation would enable the creation of fully hardware-based ANNs, which could be trained on-chip [2], [15]. Even though this can diminish structural flexibility, it also offers a significant speed-up in the training process of artificial neural networks [7, 10, 11].

This requires an electronic component that can act as an analogue memory and can be updated. State-of-the-art articles use memristors or memristive crossbar arrays for this purpose [2–4, 6, 7, 10, 16]. Unlike CMOS technology, memristors cannot yet work precisely in large-scale multi-level simulations [7], [10]. That is why capacitors are used as memory elements in this block concept. Capacitors occupy considerable area in VLSI, but fast near-sensor applications do not always require large neural network structures, and the capacitors can be replaced by other technologies during transistor-level design.

A large number of on-chip learning designs operate in discrete steps [7, 10, 11]. This allows a more flexible structure, but it slows down the learning process and prevents the use of analogue sensor inputs without sampling. That is why this design does not use any clock control. It avoids synchronization signals, which makes the concept fully analogue and fully parallel [2], [15].

For the above-mentioned reasons, this article presents and verifies a novel concept for a fully analogue training process of an artificial neural network circuit implementation, inspired by backpropagation and based on gradient descent.
2. Background

In this article, the speedup of the training process of a neural network is based on several principles. The training process is designed to be fully parallel [2, 7, 15]. Further, any clock control that would hold back the propagating signal is avoided. The training process is realised by analogue electric circuit feedback; thus, it is fully analogue [15]. The design consists of two parts: forward propagation and backward propagation.

Inputs, outputs and weights of a classical neural network [12] are represented as follows: input $x_i$ as $V_{\text{in}_i}$, output $\hat{y}_i$ as $V_{\text{out}_i}$, target $y_i$ as $V_{\text{target}_i}$ and weight $w_i$ as $V_{w_i}$. For a transistor-level implementation, current can be used instead of voltage for $x_i$, $\hat{y}_i$ and $y_i$.

The forward propagation of the designed neural network is defined at the level of a single neuron by

$$V_{\text{out}} = S\!\left(\sum_{i=0}^{N} V_{w_i} V_{\text{in}_i}\right) \tag{1}$$

where $V_{\text{out}}$ is the output signal (here, the voltage of this neuron), $S(\cdot)$ is the activation function of this neuron, $V_{\text{in}_i}$ is one of the input signals of this neuron and $V_{w_i}$ is the weight for input $i$, where for $i = 0$ the $V_{\text{in}_0}$ is a bias input.

There are two parts to the backpropagation between layers. The first part solves the calculation of the error at the output layer of a neural network (Type 1). It is possible to define the error of the NN's output layer as a Mean Square Error (MSE) [12], [17] by

$$E = \frac{1}{2}\sum_{i=0}^{N} \left(V_{\text{out}_i} - V_{\text{target}_i}\right)^2 \tag{2}$$

where $E$ [V$^2$] is the error, $N$ is the number of outputs, $V_{\text{target}_i}$ is the ideal output and $V_{\text{out}_i}$ is the obtained output signal. The second part solves the backpropagation of the error between the hidden layers of a neural network (Type 2).

The backpropagation circuit implementation is based on the well-known backpropagation algorithm with the stochastic gradient descent method, where the update of the weight [12], [13] is defined by

$$w_{k+1} = w_k - \eta\,\frac{\partial E}{\partial w_k} \tag{3}$$

where $w_k$ is the weight at step $k \in \mathbb{N}$, $E$ is the error and $\eta$ is the learning rate.

The fully Analogue Artificial Neural Network (AANN) does not operate in separate and distinct steps. The proposed design for the update of the weight uses

$$V_w(t) = V_w(t_0) - \eta \int_{t_0}^{t} \frac{\partial E(\tau)}{\partial V_w(\tau)}\,\mathrm{d}\tau \tag{4}$$

where $V_w(t)$ is the weight as a function of continuous time $t \in \mathbb{R}$.
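To make the relation between (3) and (4) concrete: integrating (4) with a forward-Euler step of size dt reproduces the discrete rule (3) with an effective learning rate η·dt, so (4) can be read as the continuous-time form of the gradient descent update. The following minimal Python sketch illustrates this for a single sigmoid neuron with one input; all numeric values (η, dt, V_in, V_target) are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Minimal numerical sketch relating the discrete update (3) to the
# continuous-time update (4) for one sigmoid neuron with a single input.
# eta, dt, V_in and V_target are illustrative placeholders.

def sigmoid(x):
    # Standard sigmoid, i.e. eq. (6) with unit V_amp and I_ref
    return 1.0 / (1.0 + np.exp(-x))

V_in, V_target = 1.0, 0.8            # one input voltage, desired output
eta, dt, steps = 20.0, 0.01, 2000    # learning rate, Euler step, iterations

def dE_dVw(V_w):
    # Gradient of E = (V_out - V_target)^2 / 2 with V_out = S(V_w * V_in),
    # cf. (1) and (2); V_out * (1 - V_out) is the sigmoid's derivative
    V_out = sigmoid(V_w * V_in)
    return (V_out - V_target) * V_out * (1.0 - V_out) * V_in

# Eq. (3): discrete gradient step w_{k+1} = w_k - eta' * dE/dw_k
w = 0.0
for _ in range(steps):
    w -= (eta * dt) * dE_dVw(w)

# Eq. (4): dV_w/dt = -eta * dE/dV_w, integrated here by forward Euler.
# With step size dt this reproduces (3) exactly, which is the sense in
# which (4) generalises the stepped update to continuous time.
V_w = 0.0
for _ in range(steps):
    V_w -= eta * dE_dVw(V_w) * dt

print(sigmoid(w * V_in), sigmoid(V_w * V_in))  # both approach V_target = 0.8
```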
3. Circuit Design

The proposed AANN is based on a structure that allows the construction of most types of neural networks, such as RNNs, CNNs etc. Each of these structures consists of the same cells, called neurons. The proposed hardware concept of a fully analogue neuron with training is illustrated in Fig. 1. The signals can be realised by voltages or currents depending on the technologies used. For example, a multiplier can be implemented in both voltage and current mode. However, the capacitors have to be charged by a current, and the capacitor voltages are used as the memory quantity, see Sec. 3.2. The neuron consists of voltage multipliers, capacitors representing the weights of the neuron, and blocks representing the activation function and its derivative. The green boundary indicates the forward propagation part of the circuit; the blue boundary indicates the backpropagation part of the circuit.

Fig. 1. The block implementation of an analogue neuron with learning and two inputs.

The multiplier block has the schematic symbol shown in Fig. 2. It is a function which multiplies two voltages [19], [21] and whose output is a current calculated by

$$I_{\text{out}} = K_{\text{m}} \, V_{\text{in}_1} V_{\text{in}_2} \tag{5}$$

where $K_{\text{m}}$ [A/V$^2$] is the ratio of the output to the product of the inputs.

Fig. 2. Symbol of the multiplier.

Figure 3 shows the schematic symbol of the block representing an activation function. A sigmoid is the most commonly used activation function, described by

$$V_{\text{out}} = \frac{V_{\text{amp}}}{1 + \mathrm{e}^{-I_{\text{in}}/I_{\text{ref}}}} \tag{6}$$

where $V_{\text{amp}}$ is a constant determining the maximum possible voltage and $I_{\text{ref}}$ is a referential current.

Fig. 3. Symbol of the activation function.

Figure 4 shows the schematic symbol of the block representing the derivative of an activation function. For the sigmoid activation function, the block is described by

$$V_{\text{out}} = K_{\text{m}} \frac{\mathrm{e}^{V_{\text{in}}/V_{\text{ref}}}}{\left(\mathrm{e}^{V_{\text{in}}/V_{\text{ref}}} + 1\right)^2} \tag{7}$$

where $K_{\text{m}}$ is a proportionality voltage constant and $V_{\text{ref}}$ is a referential voltage.

Fig. 4. Symbol of the derivative of the activation function.

3.1 Implementation of Forward Propagation

There have been many solutions to the problem of hardware implementation of the forward propagation of neural networks [1, 7–10, 15, 18, 19]. The whole behaviour is described by (1), and the analogue design is shown in Fig. 1 in the green area. The weights correspond to the voltages of capacitors [15]. Each input is connected to a multiplier, whose output is a current given by (5). The activation function is applied to the sum of these currents, $I_{\text{net}}$ [18].

Fig. 5. Analogue implementation of a weight.

The crucial element in this type of implementation is the capacitor. Each neuron has $N + 1$ capacitors as weights, where $N$ is the number of inputs. For example, the neural network in Fig. 7 has 17 capacitors. However, bigger chip area and ...

The update of each weight requires the gradient of the error with respect to that weight, which is factored by the chain rule into the three terms that appear as signals in Fig. 1:

$$\frac{\partial E}{\partial V_w} = \frac{\partial E}{\partial V_{\text{out}}}\,\frac{\partial V_{\text{out}}}{\partial \text{net}}\,\frac{\partial \text{net}}{\partial V_w}. \tag{11}$$

The first partial derivative from (11) is denoted in Fig. 1 as the voltage $V_E$:

$$\frac{\partial E}{\partial V_{\text{out}}} = V_E. \tag{12}$$

The last partial derivative from (11) is solved as

$$\frac{\partial \text{net}}{\partial V_w} = V_{\text{in}}. \tag{13}$$

The remaining partial derivative cannot be solved generally. However, it is the derivative of the activation function. If the activation function is known, it is possible to calculate and implement this partial derivative as a separate sub-circuit.
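As a concept-level sanity check of the blocks above, the following Python sketch models the multiplier (5), the sigmoid activation (6), the derivative block (7), and a single-neuron forward pass per (1). All constants (K_m, V_amp, I_ref, V_ref) and example voltages are placeholder values, not component values from the paper; the final assertion verifies that, with unit constants, (7) equals the slope of (6), which is what allows the derivative to be realised as a separate sub-circuit.

```python
import numpy as np

# Numerical sketch of the forward-path blocks of Sec. 3.
# All constants and example voltages below are placeholders.

def multiplier(V1, V2, K_m=1e-3):
    # Eq. (5): output current proportional to the product of two input voltages
    return K_m * V1 * V2

def activation(I_in, V_amp=1.0, I_ref=1e-3):
    # Eq. (6): sigmoid of the summed input current
    return V_amp / (1.0 + np.exp(-I_in / I_ref))

def activation_deriv(V_in, K_m=1.0, V_ref=1.0):
    # Eq. (7): separate sub-circuit realising the sigmoid's slope
    e = np.exp(V_in / V_ref)
    return K_m * e / (e + 1.0) ** 2

# Single-neuron forward pass per (1): V_out = S(sum_i V_wi * V_ini)
V_w  = np.array([0.2, -0.5, 0.7])    # weight (capacitor) voltages
V_in = np.array([1.0,  0.3, -0.8])   # V_in0 = 1 V plays the role of the bias input
I_net = multiplier(V_w, V_in).sum()  # summed multiplier output currents
V_out = activation(I_net)

# Consistency check: with unit constants, eq. (7) matches the derivative of (6)
x = np.linspace(-5.0, 5.0, 1001)
numeric = np.gradient(activation(x, V_amp=1.0, I_ref=1.0), x)
analytic = activation_deriv(x, K_m=1.0, V_ref=1.0)
assert np.allclose(numeric, analytic, atol=1e-4)
```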
