
Research Article
Robot Autom Eng J, Volume 1 Issue 4 - November 2017
Copyright © All rights are reserved by Denis Pereira de Lima
DOI: 10.19080/RAEJ.2017.01.555568

Neural Network Training Using Unscented and Extended Kalman Filter

Denis Pereira de Lima*, Rafael Francisco Viana Sanches, Emerson Carlos Pedrino Federal University of São Carlos (UFSCar), Brazil Submission: August 03, 2017; Published: November 10, 2017 *Corresponding author: Denis Pereira de Lima, Federal University of São Carlos (UFSCar), São Carlos, Brazil, Email:

Abstract

This work demonstrates the training of a multilayered neural network (MNN) using Kalman filter variations. Kalman filters estimate the weights of a neural network, considering the weights as a dynamic and upgradable system. The Extended Kalman Filter (EKF) is a tool that has been used by many authors for the training of Neural Networks (NN) over the years. However, this filter has some inherent drawbacks, such as instability caused by initial conditions due to linearization, and the costly calculation of the Jacobian matrices. The Unscented Kalman Filter (UKF) variant has been demonstrated by several authors to be superior in terms of convergence, speed and accuracy when compared to the EKF. Training using this algorithm tends to become more precise compared to the EKF, due to its linear transformation technique known as the Unscented Transform (UT). The results presented in this study validate the efficiency and accuracy of the Kalman filter variants reported by other authors, with superior performance in non-linear systems when compared with traditional methods. Such approaches are still little explored in the literature.

Keywords:

Genetic algorithm (GA); Evolutionary strategy (ES); Evolutionary programming (EP); Genetic programming (GP); Differential evolution (DE); Bacterial foraging optimisation (BFO); Artificial bee colony (ABC); Particle swarm optimisation (PSO); Ant colony optimization (ACO); Artificial immune system (AIS); Bat Algorithm (BA)

Abbreviations:

MNN: Multilayered Neural Network; EKF: Extended Kalman Filter; NN: Neural Networks; UT: Unscented Transform; ANNs: Artificial Neural Networks; SD: Steepest Descent; UKF: Unscented Kalman Filter; GRV: Gaussian Random Variables

Introduction

In the early 1940s, the pioneers of the field, McCulloch & Pitts [1], studied the characterization of neural activity, neural events and the relationships between them; such a neural network can be treated by a propositional logic. Others, like Hebb [2], were concerned with the adaptation laws involved in neural systems. Rosenblatt [3] coined the name Perceptron and devised an architecture which has subsequently received much attention. Minsky & Seymour [4] introduced a rigorous analysis of the Perceptron; they proved many properties and pointed out limitations of several models [5]. Multilayer perceptrons are still the most popular artificial neural net structures in use today. Hopfield [6] tried to contextualize the biological model of addressable memory of biological organisms in comparison to the integrated circuits of a computer system, thus generalizing the collective properties that can be used for both neurons and computer systems.

According to Zhan & Wan [7], neural networks are good tools for working with approximation in some nonlinear systems, in which you want to predict an event at some future point, or even in non-stationary processes in which changes occur constantly and fast. As these activities happen in real time, the weights of a neural network are adjusted adaptively. Because of their strong ability to learn, neural networks have been widely used in identifying and modeling nonlinear systems. Artificial Neural Networks (ANNs) are currently an additional tool which the engineer can use for a variety of purposes. Classification and regression are the most common tasks; however, control, modeling and forecasting are other common tasks undertaken. For more than three decades, the field of ANNs has been a focus for researchers. As a result, one of the outcomes has been the development of many different software tools used to train these kinds of networks, making the selection of an adequate tool difficult for a new user [8].

The conventional algorithm for Multilayered Neural Networks (MNN) is Backpropagation (BP), an algorithm that uses the Steepest Descent (SD) method, an extension of the Laplace method, which tries to approximate, as closely as possible, the points of recurring non-stationary processes. BP works by adjusting the weights of the neural network, minimizing the error between the inputs and the expected outputs, in order to obtain a desired degree of accuracy in prediction. Problems related to BP are that it does not converge very quickly, and its predictions for non-stationary problems tend to be inaccurate.

Another algorithm that is also used to solve non-linear problems that are not linearly separable is the Newton method. The difference between the Newton method and BP is that the Newton method converges faster than BP; however, the Newton method requires more computing power and, also, large-scale problems are not feasible [7]. Recently, several authors have shown in studies that the Extended Kalman Filter (EKF), already well discussed in the engineering literature, can also be used for the purpose of training networks to perform expected input-output mappings [9]. The publications by [10-15] are important with respect to the EKF. Several authors, such as [13,16-18], have described a variant, the Unscented Kalman Filter (UKF). Its performance is better than that of the EKF in terms of numerical stability and accuracy, and so a comparison of the two methods in the training of neural networks is needed. The basic difference between the EKF and UKF lies in the approach by which Gaussian Random Variables (GRV) are represented for propagation through the system dynamics. In the EKF, the GRVs are propagated analytically through a first-order expansion of the system dynamics. In the UKF, a set of sigma points is chosen so that their sample mean and sample covariance are the true mean and true covariance respectively [19] (Figure 1). Given the mean and the covariance, the UKF yields the same performance as a KF for linear systems; however, it elegantly generalizes to nonlinear systems without the linearization steps required by the EKF [17,20]. We were motivated by previous studies demonstrating the better performance and efficiency of the UKF relative to the EKF and, therefore, we applied the two methodologies to the training of neural networks in order to verify the efficiency of the methods.

The discussion of the subject is distributed according to the following sections: 2 - Training Neural Networks Using Kalman Filters, 3 - Numerical Testing Results, 4 - Conclusion.

Training Neural Networks Using Kalman Filters

EKF Algorithm

To understand the EKF, assume a non-linear dynamic system that can be described by the state-space model found in equations 1 and 2, where W_k and V_k are white Gaussian noise processes with zero mean and independent covariance matrices Q_k and R_k. The function f(k, X_k) is a non-linear transition function, possibly varying over time. The function h(k, X_k) is a non-linear measurement function, also varying over time [13].

X_{k+1} = f(k, X_k) + W_k   (1)

Y_k = h(k, X_k) + V_k   (2)
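To make the state-space model of equations 1 and 2 concrete, a minimal Python/NumPy sketch follows; the transition and measurement functions here are hypothetical stand-ins for f(k, X_k) and h(k, X_k), and the diagonal noise covariances are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(k, x):
    # Hypothetical non-linear transition function (stand-in for f(k, X_k)).
    return np.array([x[0] + 0.1 * np.sin(x[1]), 0.9 * x[1]])

def h(k, x):
    # Hypothetical non-linear measurement function (stand-in for h(k, X_k)).
    return np.array([x[0] ** 2])

Q = 0.01 * np.eye(2)   # process-noise covariance: W_k ~ N(0, Q)
R = 0.04 * np.eye(1)   # observation-noise covariance: V_k ~ N(0, R)

x = np.array([1.0, 0.5])
for k in range(3):
    x = f(k, x) + rng.multivariate_normal(np.zeros(2), Q)   # equation 1
    y = h(k, x) + rng.multivariate_normal(np.zeros(1), R)   # equation 2
    print(k, x, y)
```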

The basic idea of the EKF, according to [13], is linearizing the state-space model defined by equations 1 and 2 at each time instant around the most recent state estimate, assuming that the current values are close to the estimated ones for each state; which estimate, \hat{X}_k or \hat{X}_k^-, is used depends on the particular function being considered. Once the linear model is obtained, the standard Kalman filter equations are applied. The approximation made by the EKF occurs in two distinct phases [13].

The first phase is the construction of the matrices in equations 3 and 4:

F_{k+1,k} = \partial f(k, X) / \partial X, evaluated at X = \hat{X}_k   (3)

H_k = \partial h(k, X) / \partial X, evaluated at X = \hat{X}_k^-   (4)

In this phase, the ij-th entry of F_{k+1,k} is identical to the partial derivative of the i-th component of f(k, X) with respect to the j-th component of X. Similarly, the ij-th entry of H_k is identical to the partial derivative of the i-th component of h(k, X) with respect to the j-th component of X. In the first case, the derivatives are evaluated at \hat{X}_k, while in the latter case they are evaluated at \hat{X}_k^-. The entries of the matrices F_{k+1,k} and H_k are computable because \hat{X}_k and \hat{X}_k^- are available at time k [13,16].

Figure 1: Difference between EKF and UKF [9].
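When analytic Jacobians like those in equations 3 and 4 are inconvenient to derive, they can be approximated numerically. The sketch below uses central differences; it is one possible implementation, not the authors', and works with any f(k, x) or h(k, x) of the form above.

```python
import numpy as np

def jacobian(func, k, x, eps=1e-6):
    """Central-difference approximation of d func(k, x) / d x."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(func(k, x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(func(k, x + dx)) -
                   np.asarray(func(k, x - dx))) / (2 * eps)
    return J

# F_{k+1,k} is evaluated at the filtered estimate, H_k at the predicted one:
# F = jacobian(f, k, x_hat); H = jacobian(h, k, x_hat_minus)
```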


In the second phase of the EKF, the matrices F_{k+1,k} and H_k are used in a first-order Taylor series expansion of the nonlinear functions f(k, X) and h(k, X) around \hat{X}_k and \hat{X}_k^-, respectively. These functions are approximated as follows:

f(k, X) \approx f(k, \hat{X}_k) + F_{k+1,k} (X - \hat{X}_k)   (5)

h(k, X) \approx h(k, \hat{X}_k^-) + H_k (X - \hat{X}_k^-)   (6)

With the estimates of expressions 5 and 6, we can approximate the nonlinear model by equations 7 and 8, as shown below:

X_{k+1} \approx F_{k+1,k} X_k + W_k + d_k   (7)

\bar{Y}_k \approx H_k X_k + V_k   (8)

In this part, two new quantities are introduced to be calculated:

\bar{Y}_k = Y_k - \{ h(k, \hat{X}_k^-) - H_k \hat{X}_k^- \}   (9)

d_k = f(k, \hat{X}_k) - F_{k+1,k} \hat{X}_k   (10)

The entries in \bar{Y}_k are all known at time k, and therefore \bar{Y}_k can be considered as an observation vector at time k. Similarly, the entries in the term d_k are all known at time k [13].

Training a Neural Network with EKF

For the purposes of the EKF, an MLP is used as shown in Figure 2. This network has an input layer, a hidden layer and an output layer. The values enter the network, and the output values are connected by their weights and the mapping function.

Figure 2: Structure of the multilayered neural network.

To conduct the training of a neural network, it is first necessary to organize all inputs, outputs and network weights as state vectors [7]. Then the network computes the EKF, and the same weights are used to modify the estimate of the filter; state vectors and observation vectors are processed. Here, X = [x_1, x_2, ..., x_k]^T, Y = [y_1, y_2, ..., y_m]^T, W = [(W^1)^T, (W^2)^T]^T, W^1 = [(W^1_1)^T, (W^1_2)^T, ..., (W^1_L)^T]^T, W^2 = [(W^2_1)^T, (W^2_2)^T, ..., (W^2_M)^T]^T, W^1_l = [W^1_{l,1}, W^1_{l,2}, ..., W^1_{l,k}]^T, W^2_m = [W^2_{m,1}, W^2_{m,2}, ..., W^2_{m,L}]^T, with 1 \le m \le M and 1 \le l \le L [7].

To describe the network training, imagine a problem of estimating states in which the dynamics and observations are given by equations 11 and 12 [21]:

W_k = W_{k-1}   (11)

d_k = Y_k + V_k = g(W_k, X_k) + V_k   (12)

The variables d and y are the desired outputs, and v is the random observation noise, assumed to be white Gaussian with covariance R.
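As a concrete reading of this weight-as-state formulation, the sketch below flattens a one-hidden-layer MLP (log-sigmoid hidden units, as used later in the experiments) into a single state vector W and defines the observation map g(W, X) of equation 12; the layer sizes are illustrative assumptions.

```python
import numpy as np

L_IN, L_HID, L_OUT = 1, 8, 1            # illustrative layer sizes

def unpack(W):
    """Split the flat state vector W into the two weight matrices."""
    n1 = L_HID * (L_IN + 1)             # hidden weights + biases
    W1 = W[:n1].reshape(L_HID, L_IN + 1)
    W2 = W[n1:].reshape(L_OUT, L_HID + 1)
    return W1, W2

def g(W, x):
    """Observation map of equation 12: the MLP's forward pass."""
    W1, W2 = unpack(W)
    hid = 1.0 / (1.0 + np.exp(-(W1 @ np.append(x, 1.0))))  # log-sigmoid layer
    return W2 @ np.append(hid, 1.0)                        # linear output

W = 0.1 * np.random.default_rng(1).standard_normal(
    L_HID * (L_IN + 1) + L_OUT * (L_HID + 1))
print(g(W, np.array([0.3])))
```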

Because there are many articles explicitly about the EKF, the algorithm will be represented here in summarized form, as below:

S_{k+1} = H_{k+1} P_k H_{k+1}^T + R_k   (13)

K_{k+1} = P_k H_{k+1}^T S_{k+1}^{-1}   (14)

\hat{W}_{k+1} = \hat{W}_k + K_{k+1} [ d_k - g(\hat{W}_k, X_k) ]   (15)

P_{k+1} = P_k - K_{k+1} H_{k+1} P_k   (16)

In this way, K_{k+1} is the gain of the Kalman filter, P_k is the approximation error covariance matrix, and H_{k+1} is the Jacobian matrix of the function g with respect to the state W, evaluated at the current estimate \hat{W}_k.
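Read as code, the recursion in equations 13-16 might look like the following NumPy sketch; g_out stands for g(\hat{W}_k, X_k) and H for the Jacobian of g with respect to the weights (for instance via the finite-difference routine above). This is an illustrative interpretation, not the authors' Matlab implementation.

```python
import numpy as np

def ekf_weight_update(W, P, H, d, g_out, R):
    """One EKF training step (equations 13-16).
    W: weight estimate, P: error covariance, H: Jacobian of g w.r.t. W,
    d: desired output, g_out: g(W, X), R: observation-noise covariance."""
    S = H @ P @ H.T + R                 # innovation covariance (13)
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain (14)
    W = W + K @ (d - g_out)             # weight update (15)
    P = P - K @ H @ P                   # covariance update (16)
    return W, P
```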


Algorithm UKF

The vital operation performed in the Unscented Kalman Filter proposed by Julier et al. [16] is the propagation of a Gaussian random variable through the system dynamics using a deterministic sampling approach, based on the principle that a set of discretely sampled points can be used to parametrize the mean and covariance. The Unscented Transformation (UT) is a way to calculate the statistics of a random variable which undergoes a non-linear transformation [9]. Let us consider the propagation of a variable X (of dimension L) through a nonlinear function Y = f(X). Assume X has mean \bar{X} and covariance P_X. To calculate the statistics of Y, we form a matrix \chi of 2L+1 sigma vectors as follows:

\chi_0 = \bar{X},
\chi_i = \bar{X} + ( \sqrt{(L+\lambda) P_X} )_i,  i = 1, ..., L,
\chi_i = \bar{X} - ( \sqrt{(L+\lambda) P_X} )_{i-L},  i = L+1, ..., 2L   (17)

where \lambda \in \mathbb{R}, ( \sqrt{(L+\lambda) P_X} )_i is the i-th row or column of the matrix square root of (L+\lambda) P_X, and W_i is the weight associated with the i-th point. The transformation procedure is as follows [9]. In the first step, each sigma point is instantiated through the function to yield the set of transformed sigma points Y_i = f[\chi_i]. In the second step, the mean is given by the weighted average of the transformed points:

\bar{Y} = \sum_{i=0}^{2L} W_i Y_i   (18)

In the third step, the covariance is the weighted outer product of the transformed points:

P_Y = \sum_{i=0}^{2L} W_i \{ Y_i - \bar{Y} \} \{ Y_i - \bar{Y} \}^T   (19)

To predict a new state of the system X(\lambda+1/\lambda) and its associated covariance P(\lambda+1/\lambda), according to Julier & Uhlmann [9], it is necessary to understand the effects of the process noise. Predicting the expected observation Z(\lambda+1/\lambda) and the innovation covariance S(\lambda+1/\lambda), the prediction should include the effects of observation noise. Finally, the cross-correlation matrix P_{xz}(\lambda+1/\lambda) is predicted. These steps can be easily accommodated by reconstructing the process and observation models over augmented state vectors. To do so, the state vector is augmented with the process noise terms, giving a vector of dimension L_a = L + q, where:

X^a(\lambda) = [ X(\lambda) ; V(\lambda) ]   (20)

The process model is rewritten as a function X(\lambda+1) = F[ X^a(\lambda), u(\lambda), \lambda ]. The UT uses 2L_a + 1 sigma points, which are drawn from:

\hat{X}^a(\lambda/\lambda) = [ \hat{X}(\lambda/\lambda) ; 0_{q \times 1} ],  P^a(\lambda/\lambda) = [ P(\lambda/\lambda), P_{xv}(\lambda/\lambda) ; P_{xv}(\lambda/\lambda), Q(\lambda) ]   (21)

The matrices on the leading diagonal of P^a are the state covariance and the process-noise covariance, and the off-diagonal sub-blocks are the correlations between the state errors and the process noises. Although this method requires the use of additional sigma points, it means that the effects of the process noise (in terms of its impact on the mean and covariance) are introduced with the same order of accuracy as the uncertainty in the state [9].
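Returning to the basic UT of equations 17-19, a compact sketch follows; it takes the matrix square root via a Cholesky factorization, which is a common choice although the paper does not specify one, and uses the weights implied by equation 17.

```python
import numpy as np

def unscented_transform(func, x_mean, P, lam=1.0):
    """Propagate mean/covariance through func using 2L+1 sigma points (eqs 17-19)."""
    L = x_mean.size
    S = np.linalg.cholesky((L + lam) * P)       # one valid matrix square root
    sigma = np.vstack([x_mean,
                       x_mean + S.T,            # chi_i, i = 1..L       (17)
                       x_mean - S.T])           # chi_i, i = L+1..2L    (17)
    w = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    w[0] = lam / (L + lam)                      # weight of the central point
    Y = np.array([func(s) for s in sigma])      # Y_i = f[chi_i]
    y_mean = w @ Y                              # weighted mean         (18)
    dY = Y - y_mean
    y_cov = (w[:, None] * dY).T @ dY            # weighted covariance   (19)
    return y_mean, y_cov

y_mean, y_cov = unscented_transform(np.sin, np.array([0.2, -0.1]), 0.05 * np.eye(2))
```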

Training a Neural Network with UKF

The UKF algorithm for training a neural network, according to Zhan & Wan [21], is very similar to the EKF. In the same manner as in the EKF, the weights of the connections must be in state vector format. For the UKF, the states are calculated by the Unscented Transform (UT). Thus, it propagates through the non-linear system in analytical form, without the need to compute the Jacobian matrix. The UKF is presented below in four basic steps [21].

Step 1: Initialization:

\hat{W}_0 = E[W_0]   (22)

P_0 = E[ (W_0 - \hat{W}_0)(W_0 - \hat{W}_0)^T ]   (23)

Step 2: Calculation of the sigma points:

\chi_{0,k-1} = \hat{W}_{k-1},  \omega_0 = \kappa / (L + \kappa)
\chi_{i,k-1} = \hat{W}_{k-1} + ( (L+\kappa) P_{k-1} )^{0.5}_i,  \omega_i = 1 / (2(L+\kappa))
\chi_{i+L,k-1} = \hat{W}_{k-1} - ( (L+\kappa) P_{k-1} )^{0.5}_i,  \omega_{i+L} = 1 / (2(L+\kappa))   (24)

where i = 1, 2, ..., L, L is the state dimension, and the parameter \kappa is used to control the covariance matrix.

Step 3: Time update:

\chi_{i,k/k-1} = \chi_{i,k-1}   (25)

\hat{W}_k^- = \sum_{i=0}^{2L} \omega_i \chi_{i,k/k-1}   (26)

P_k^- = \sum_{i=0}^{2L} \omega_i [ \chi_{i,k/k-1} - \hat{W}_k^- ][ \chi_{i,k/k-1} - \hat{W}_k^- ]^T   (27)

\gamma_{i,k/k-1} = g( \chi_{i,k/k-1}, X_k )   (28)

\hat{Y}_k^- = \sum_{i=0}^{2L} \omega_i \gamma_{i,k/k-1}   (29)

Step 4: Measurement update:

S_k = \sum_{i=0}^{2L} \omega_i [ \gamma_{i,k/k-1} - \hat{Y}_k^- ][ \gamma_{i,k/k-1} - \hat{Y}_k^- ]^T + R_k   (30)

G_k = \sum_{i=0}^{2L} \omega_i [ \chi_{i,k/k-1} - \hat{W}_k^- ][ \gamma_{i,k/k-1} - \hat{Y}_k^- ]^T   (31)

\hat{W}_k = \hat{W}_k^- + G_k S_k^{-1} ( d_k - \hat{Y}_k^- )   (32)

P_k = P_k^- - G_k S_k^{-1} G_k^T   (33)
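Putting steps 2-4 together, one UKF weight update could be sketched as below; the forward map g(W, x) (for instance, the MLP sketch given earlier) is passed in as an argument, kappa is the scaling parameter of step 2, and the code is an interpretation of equations 24-33 rather than the REBEL implementation the authors used.

```python
import numpy as np

def ukf_train_step(W, P, x, d, R, g, kappa=1.0):
    """One UKF weight update (steps 2-4, equations 24-33).
    W: weight estimate, P: its covariance, x: input, d: desired output,
    R: observation-noise covariance (matrix), g: forward map g(W, x)."""
    L = W.size
    S_chol = np.linalg.cholesky((L + kappa) * P)
    chi = np.vstack([W, W + S_chol.T, W - S_chol.T])      # sigma points (24)
    w = np.full(2 * L + 1, 1.0 / (2 * (L + kappa)))
    w[0] = kappa / (L + kappa)

    W_minus = w @ chi                                     # predicted mean (26)
    dchi = chi - W_minus
    P_minus = (w[:, None] * dchi).T @ dchi                # predicted covariance (27)

    gamma = np.array([g(c, x) for c in chi])              # propagated outputs (28)
    Y_minus = w @ gamma                                   # predicted observation (29)
    dg = gamma - Y_minus

    S = (w[:, None] * dg).T @ dg + R                      # innovation covariance (30)
    G = (w[:, None] * dchi).T @ dg                        # cross covariance (31)
    K = G @ np.linalg.inv(S)
    W_new = W_minus + K @ (d - Y_minus)                   # weight update (32)
    P_new = P_minus - K @ S @ K.T                         # covariance update (33)
    return W_new, P_new
```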

Numerical Testing Results

The simulations were carried out in Matlab, using the Recursive Bayesian Estimation Library (REBEL) by [17], on an AMD quad-core PC with a 3.0-GHz CPU and 8 GB of memory. A multilayer network was used to approximate the following nonlinear function, which is used for benchmark purposes:

y = x^3 + 2\cos(x) + (5 - x)\sin(x)   (34)

The input and output \{x_k, y_k\} of the function in equation 34 are noiseless, while noisy counterparts \{\tilde{x}_k, \tilde{y}_k\} were used for NN training. Specifically, 100 noisy data sets were randomly generated for training, with \tilde{x}_k = x_k + \alpha v_k, x_k \sim N(0,1), and \tilde{y}_k = y_k + \beta \eta_k, k \in [1, 2, ..., 100], where k is the data set index and the noise terms are v_k, \eta_k \sim N(0,1). The parameters \alpha and \beta are used to control the strength of the noise. The log-sigmoid function is used in the hidden layer.

The test was carried out using various numbers of neurons (6, 8 and 10) in the hidden layer, evaluating the impact of the number of neurons in the hidden layer on the Mean Square Error (MSE) obtained during the training of the Neural Network.
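Under the reconstruction of equation 34 above, the noisy training data described in this section could be generated as follows; the seed and the exact form of the benchmark function are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
alpha = beta = 1e-2                     # noise strengths used for Table 1

x = rng.standard_normal(100)            # x_k ~ N(0, 1), k = 1..100
y = x**3 + 2 * np.cos(x) + (5 - x) * np.sin(x)   # benchmark function (34)

x_noisy = x + alpha * rng.standard_normal(100)   # x~_k = x_k + alpha * v_k
y_noisy = y + beta * rng.standard_normal(100)    # y~_k = y_k + beta * eta_k
```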


In Table 1, the training was carried out using the parameters \alpha and \beta = 10^{-2}, which are responsible for the magnitude of the degree of error in the input data. Note that when using 6 or 10 neurons in the hidden layer, the MSE increases during the training time. Linear Regression (R) was evaluated against the training data and the results of the neural network; it indicates the accuracy of the training and was used as a parameter for choosing the number of neurons in the hidden layer for more effective training. Using 8 neurons in the hidden layer, training was performed to compare the EKF and UKF; the results are shown in Figure 3.

Figure 3: Training NN using EKF and UKF (\alpha and \beta = 10^{-2}).

The aforementioned comparisons were performed in the same way for the data contained in Table 2, using 6, 8 and 10 neurons in the hidden layer, with \alpha and \beta = 10^{-3}. Note that the pattern of errors in Tables 1 & 2 is repeated when 6 and 10 neurons are used in the hidden layer; however, the EKF filter using 6 neurons in the hidden layer shows a significant improvement over the use of 8 and 10 neurons in the hidden layer.

Figure 4 shows the fastest learning of the UKF filter up to iteration 30. From iteration 35, the two training filters tend to have a good generalization of the function, and the MSE values remain close up to the 100th iteration. This demonstrates that lower noise in the input data helps the learning rates of the training algorithm.

Table 1: Comparison of the number of hidden neurons, using \alpha and \beta = 10^{-2}.

Hidden Neurons   MSE (UKF)   MSE (EKF)   R (UKF)   R (EKF)
6                5.48E-02    7.58E-02    0.99974   0.99948
8                2.82E-02    6.34E-02    0.99940   0.99956
10               6.70E-02    6.54E-02    0.99813   0.99924
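For reference, the two figures of merit reported in Tables 1 and 2 can be computed as below; R is taken here as the Pearson correlation between network outputs and targets, which is one common reading of the Linear Regression (R) value, since the paper does not define it explicitly.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Square Error between targets and network outputs."""
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

def regression_r(y_true, y_pred):
    """Linear-regression R: Pearson correlation of outputs vs. targets."""
    return np.corrcoef(y_true, y_pred)[0, 1]
```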

It was decided to use 8 neurons in the hidden layer for comparing the EKF and UKF training algorithms presented in Figure 4, because accuracy is improved, as demonstrated by the Linear Regression parameter (R) in Table 2.

Table 2: Comparison of the number of hidden neurons, using \alpha and \beta = 10^{-3}.

Hidden Neurons   MSE (UKF)   MSE (EKF)   R (UKF)   R (EKF)
6                8.31E-02    1.01E-02    0.99965   0.99996
8                3.97E-02    1.54E-02    0.99969   0.99998
10               9.02E-02    2.44E-03    0.99909   0.99991

Figure 4: Training NN using EKF and UKF (\alpha and \beta = 10^{-3}).

Figure 3 shows the best rate of UKF filter learning up to iteration 75. In iterations 45, 63 and 75, there is a significant decrease in the training MSE. The EKF filter tends to start better learning at iteration 70. The oscillations presented in the training rate are due to the large non-linearity of the input data, resulting from equation 34 used in the simulation. The smallest errors were obtained by the UKF filter in iterations 45, 63 and 75, showing its better performance over the EKF filter in training Neural Networks for the prediction of nonlinear systems.

Conclusion

This work demonstrates the training of a multilayered neural network (MNN) using Kalman filter variations. Kalman filters estimate the weights of a neural network, considering the weights as a dynamic and upgradable system.


The Extended Kalman Filter (EKF) is a tool that has been used by many authors for the training of Neural Networks (NN) over the years. However, this filter has some inherent drawbacks, such as instability caused by initial conditions due to linearization, and the costly calculation of the Jacobian matrices. Therefore, the Unscented Kalman Filter variant has been demonstrated by several authors to be superior in terms of convergence, speed and accuracy when compared to the EKF. Training using this algorithm tends to become more precise compared to the EKF, due to its linear transformation technique known as the Unscented Transform (UT). The results presented in this study validate the efficiency and accuracy of the Kalman filter variants reported by other authors, with superior performance in non-linear systems when compared with traditional methods. Such approaches are still little explored in the literature.

Acknowledgement

We thank FAPESP for financial support under Grant 2015/23297-4.

References

1. McCulloch WS, Pitts W (1943) A Logical Calculus of the Ideas Immanent in Nervous Activity. The Bulletin of Mathematical Biophysics 5(4): 115-133.
2. Hebb DO (1999) The Organization of Behavior: A Neuropsychological Approach. John Wiley & Sons.
3. Rosenblatt F (1958) The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychological Review 65(6): 386.
4. Minsky M, Seymour P (1969) Perceptrons. MIT Press, USA.
5. Hunt KJ, Sbarbaro D, Żbikowski R, Gawthrop PJ (1992) Neural Networks for Control Systems - A Survey. Automatica 28(6): 1083-1112.
6. Hopfield JJ (1982) Neural Networks and Physical Systems with Emergent Collective Computational Abilities. Proc Natl Acad Sci 79(8): 2554-2558.
7. Iiguni Y, Sakai H, Tokumaru H (1992) A Real-Time Learning Algorithm for a Multilayered Neural Network Based on the Extended Kalman Filter. IEEE Transactions on Signal Processing 40(4): 959-966.
8. Baptista D, Morgado-Dias F (2013) A Survey of Artificial Neural Network Training Tools. Neural Computing and Applications 23(3-4): 609-615. Springer, London.
9. Williams RJ (1992) Training Recurrent Networks Using the Extended Kalman Filter. International Joint Conference on Neural Networks 4: 241-246.
10. Singhal S, Wu L (1989) Training Multilayer Perceptrons with the Extended Kalman Algorithm. Advances in Neural Information Processing Systems 1. Morgan Kaufmann Publishers Inc, USA, pp. 133-140.
11. Puskorius GV, Feldkamp LA (1991) Decoupled Extended Kalman Filter Training of Feedforward Layered Networks. IJCNN-91-Seattle International Joint Conference 1: 771-777.
12. Puskorius GV, Feldkamp LA (1994) Neurocontrol of Nonlinear Dynamical Systems with Kalman Filter Trained Recurrent Networks. IEEE Transactions on Neural Networks 5(2): 279-297.
13. Haykin SS (2002) Kalman Filtering and Neural Networks. Wiley Online Library.
14. Stubberud SC, Kramer KA (2008) Using the Neural-Extended Kalman Filter for State-Estimation and Controller Modification. IEEE International Joint Conference pp. 1352-1357.
15. Adnan R, Ruslan FA, Samad AM, Zain AM (2013) New Artificial Neural Network and Extended Kalman Filter Hybrid Model of Flood Prediction System. 9th International Colloquium on Signal Processing and Its Applications (CSPA) pp. 252-257.
16. Julier SJ, Uhlmann JK (1997) A New Extension of the Kalman Filter to Nonlinear Systems. Aerospace/Defense Sensing, Simulation and Controls 3: 3-32.
17. Wan EA, Van der Merwe R (2000) The Unscented Kalman Filter for Nonlinear Estimation. Adaptive Systems for Signal Processing 2000: 153-158.
18. Gustafsson F, Hendeby G (2012) Some Relations Between Extended and Unscented Kalman Filters. IEEE Transactions on Signal Processing 60(2): 545-555.
19. Trebatický P, Pospíchal J (2008) Neural Network Training with Extended Kalman Filter Using Graphics Processing Unit. Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. 198-207.
20. Lima DP, Kato ERR, Tsunaki RH (2014) A New Comparison of Kalman Filtering Methods for Chaotic Series. IEEE International Conference on Systems, Man and Cybernetics (SMC) pp. 3531-3536.
21. Zhan R, Wan J (2006) Neural Network-Aided Adaptive Unscented Kalman Filter for Nonlinear State Estimation. IEEE Signal Processing Letters 13(7): 445-448.

