International Journal of Computer Networks and Communications Security

VOL. 8, NO. 2, February 2020, 17–25. Available online at: www.ijcncs.org. E-ISSN 2308-9830 (Online) / ISSN 2410-0595 (Print)

A Novel Hybrid Dragonfly Algorithm with Modified Conjugate Method

Layth Riyadh Khaleel1 and Prof. Dr. Ban Ahmed Mitras2

1 M.Sc. Student, Department of Mathematics, College of Computer Sciences & Mathematics, Mosul University
2 Prof. Dr., Department of Mathematics, College of Computer Sciences & Mathematics, Mosul University

[email protected], [email protected] ABSTRACT

The Dragonfly Algorithm (DA) is a meta-heuristic algorithm proposed by Mirjalili in 2015; it simulates the behavior of dragonflies in their search for food and in migration. In this paper, a modified conjugate gradient algorithm is proposed by deriving a new conjugacy coefficient. The sufficient descent and global convergence properties of the proposed algorithm are proved. A novel hybrid algorithm is then proposed, combining the dragonfly algorithm (DA) with the modified conjugate gradient algorithm: the characteristics of the modified conjugate gradient algorithm are used to improve the elementary society that is randomly generated as the primary society of the dragonfly optimization algorithm. The efficiency of the hybrid algorithm was measured by applying it to ten high-dimensional optimization test functions with different dimensions, and its results were very good in comparison with the original algorithm.

Keywords: Conjugate Gradient Methods, Meta-Heuristic Algorithms, Dragonfly Optimization Algorithm.

1 INTRODUCTION

Optimization can be defined as one of the branches of knowledge dealing with discovering, or arriving at, the optimal solutions to a specific issue within a set of alternatives [1]. The methods of solving optimization problems are divided into two types of algorithms: deterministic algorithms and stochastic algorithms [2]. Most classical algorithms are deterministic. For example, the Simplex method in linear programming is a deterministic algorithm, and some deterministic algorithms use gradient information; these are called gradient-based algorithms. The Newton-Raphson algorithm, for example, is an algorithm based on the slope or derivative [3]. Stochastic algorithms, in turn, come in two types, although the difference between them is small: heuristic algorithms and meta-heuristic algorithms. In 2015, a new algorithm was proposed by Mirjalili, the dragonfly algorithm, which simulates the behavior of dragonflies in their search for food and in migration [4]. In 2016, Bashishtha and Srivastava used a dragonfly algorithm to address the problem of optimal power flow in an electric power system [5]. In the same year, Pathania and others used a dragonfly algorithm to solve the issue of multi-objective dispatch of a thermal system [6]. In 2017, Abhiraj and Aravindhababu used a dragonfly algorithm to reconfigure distribution networks in order to improve the voltage profile [7].

The aim of the research is twofold. First, a modified conjugate gradient algorithm, named MCG, is proposed by deriving a new conjugacy coefficient. Second, a new hybrid algorithm is proposed, consisting of the dragonfly algorithm (DA) combined with the modified conjugate gradient method, called the DA-MCG algorithm.

2 CONJUGATE GRADIENT METHOD

In unconstrained optimization, we minimize an objective function that depends on real variables, with no restrictions on the values of these variables. The unconstrained optimization problem is:

$$\min f(x), \quad x \in \mathbb{R}^n, \tag{1}$$


where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function, bounded from below. A nonlinear conjugate gradient method generates a sequence $\{x_k\}$, with $k \ge 0$ an integer. Starting from an initial point $x_0$, the value of $x_{k+1}$ is calculated by the following equation:

$$x_{k+1} = x_k + \lambda_k d_k, \tag{2}$$

where the positive step size $\lambda_k > 0$ is obtained by a line search, and the directions $d_k$ are generated as:

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \tag{3}$$

where $d_0 = -g_0$. The value of $\beta_k$ is determined according to the particular conjugate gradient (CG) algorithm and is known as the conjugate gradient parameter; here $s_k = x_{k+1} - x_k$, $g_k = \nabla f(x_k)$, $y_k = g_{k+1} - g_k$, and $\|\cdot\|$ denotes the Euclidean norm. The termination conditions for the conjugate gradient line search are often based on some version of the Wolfe conditions. The standard Wolfe conditions are

$$f(x_k + \lambda_k d_k) \le f(x_k) + \rho \lambda_k g_k^T d_k, \tag{4}$$

$$g(x_k + \lambda_k d_k)^T d_k \ge \sigma g_k^T d_k, \tag{5}$$

where $d_k$ is a descent search direction and $0 < \rho < \sigma < 1$. The parameter $\beta_k$ is defined by one of the following formulas:

$$\beta_k^{HS} = \frac{y_k^T g_{k+1}}{y_k^T d_k}; \quad \beta_k^{FR} = \frac{g_{k+1}^T g_{k+1}}{g_k^T g_k}; \quad \beta_k^{PRP} = \frac{y_k^T g_{k+1}}{g_k^T g_k}; \tag{6}$$

$$\beta_k^{CD} = -\frac{g_{k+1}^T g_{k+1}}{g_k^T d_k}; \quad \beta_k^{LS} = -\frac{y_k^T g_{k+1}}{g_k^T d_k}; \quad \beta_k^{DY} = \frac{g_{k+1}^T g_{k+1}}{y_k^T s_k}. \tag{7}$$

Al-Bayati and Al-Assady (1986) proposed three forms for the scalar $\beta_k$, defined by:

$$\beta_k^{AB1} = \frac{\|y_k\|^2}{\|g_k\|^2}; \quad \beta_k^{AB2} = -\frac{\|y_k\|^2}{g_k^T d_k}; \quad \beta_k^{AB3} = \frac{\|y_k\|^2}{d_k^T y_k} \tag{8}$$

[8].

3 PROPOSED NEW CONJUGACY COEFFICIENT

We have the quasi-Newton condition

$$y_k = G_k s_k. \tag{9}$$

Multiplying both sides of (9) by $s_k^T$ gives $y_k^T s_k = s_k^T G_k s_k$, which suggests the approximation

$$G \approx \frac{y_k^T s_k}{\|s_k\|^2}\, I_{n \times n}. \tag{10}$$

Let

$$d_{k+1}^{N} = -G^{-1} g_{k+1}; \tag{11}$$

then

$$d_{k+1}^{N} = -\frac{\|s_k\|^2}{y_k^T s_k}\, g_{k+1}. \tag{12}$$

Multiplying both sides of equation (12) by $y_k^T$, we get

$$y_k^T d_{k+1}^{N} = -\frac{\|s_k\|^2}{y_k^T s_k}\, y_k^T g_{k+1}, \tag{13}$$

while for the conjugate gradient direction (3),

$$y_k^T d_{k+1}^{CG} = -y_k^T g_{k+1} + \beta_k d_k^T y_k. \tag{14}$$

From (13) and (14) we have

$$-y_k^T g_{k+1} + \beta_k d_k^T y_k = -\frac{\|s_k\|^2}{y_k^T s_k}\, y_k^T g_{k+1}. \tag{15}$$

We assume that $\beta_k = \beta_k^{DY} = \frac{g_{k+1}^T g_{k+1}}{y_k^T d_k}$; then we have

$$-y_k^T g_{k+1} + \beta_k^{DY} d_k^T y_k = -\frac{\|s_k\|^2}{y_k^T s_k}\, y_k^T g_{k+1}, \tag{16}$$

so that

$$g_{k+1}^T g_{k+1} = y_k^T g_{k+1} - \frac{\|s_k\|^2}{y_k^T s_k}\, y_k^T g_{k+1}. \tag{17}$$

Replacing the denominator $y_k^T d_k$ of the Dai-Yuan formula by a general scaling $\tau_k$, from (17) we get

$$\beta_k \tau_k = y_k^T g_{k+1} - \frac{\|s_k\|^2}{y_k^T s_k}\, y_k^T g_{k+1}. \tag{18}$$

By the quasi-Newton condition and Taylor's theorem, $y_k^T s_k = s_k^T G_k s_k \approx 2(f_k - f_{k+1} + g_{k+1}^T s_k)$; substituting this approximation, together with (10), into (18) yields

$$\beta_k \tau_k = y_k^T g_{k+1} - \frac{\|s_k\|^2}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \cdot \frac{y_k^T s_k}{\|s_k\|^2}\, y_k^T g_{k+1}, \tag{19}$$

i.e.

$$\beta_k \tau_k = \left(1 - \frac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)}\right) y_k^T g_{k+1}, \tag{20}$$

and hence

$$\beta_k = \frac{1}{\tau_k}\left(1 - \frac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)}\right) y_k^T g_{k+1}. \tag{21}$$

Finally, we suppose $\tau_k = \|g_k\|^2$; then

$$\beta_k^{New} = \frac{1}{\|g_k\|^2}\left(1 - \frac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)}\right) y_k^T g_{k+1}. \tag{22}$$
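For concreteness, the new coefficient (22) and the direction update (3) translate directly into code. The following is a minimal NumPy sketch; the function names and conventions are ours, not the paper's:

```python
import numpy as np

def beta_mcg(g_k, g_k1, s_k, f_k, f_k1):
    """Modified conjugacy coefficient of Eq. (22)."""
    y_k = g_k1 - g_k                          # y_k = g_{k+1} - g_k
    denom = 2.0 * (f_k - f_k1 + g_k1 @ s_k)   # Taylor estimate of s_k^T G_k s_k
    return (1.0 - (y_k @ s_k) / denom) * (y_k @ g_k1) / (g_k @ g_k)

def next_direction(g_k1, d_k, beta_k):
    """Search direction d_{k+1} = -g_{k+1} + beta_k * d_k, Eq. (3)."""
    return -g_k1 + beta_k * d_k
```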


4 OUTLINES OF THE PROPOSED ALGORITHM

Step (1): The initial step: We select a starting point $x_0 \in \mathbb{R}^n$ and an accuracy tolerance $\varepsilon > 0$, a small positive real number; we set $d_0 = -g_0$, take $\lambda_0 = \arg\min_{\lambda} f(x_0 + \lambda d_0)$, and set $k = 0$.

Step (2): The convergence test: If $\|g_k\| \le \varepsilon$ then stop, and the optimal solution is $x_k$. Else, go to Step (3).

Step (3): The line search: We compute the value of $\lambda_k$ by the cubic method such that it satisfies the Wolfe conditions in Eqs. (4), (5), and go to Step (4).

Step (4): Update the variables: $x_{k+1} = x_k + \lambda_k d_k$, and compute $f(x_{k+1})$, $g_{k+1}$, $s_k = x_{k+1} - x_k$, and $y_k = g_{k+1} - g_k$.

Step (5): Check: if $\|g_{k+1}\| \le \varepsilon$ then stop. Else continue.

Step (6): The search direction: We compute the scalar $\beta_k^{New}$ using equation (22), set $d_{k+1} = -g_{k+1} + \beta_k^{New} d_k$ and $k = k + 1$, and go to Step (3).
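Read as pseudocode, Steps (1)-(6) map onto a short driver loop. Below is a hedged Python sketch of that loop; we substitute SciPy's strong-Wolfe line search for the paper's cubic method (an assumption on our part), and `beta_mcg` refers to the sketch in Section 3:

```python
import numpy as np
from scipy.optimize import line_search

def mcg(f, grad, x0, eps=1e-8, max_iter=1000):
    """Sketch of Steps (1)-(6) of the proposed MCG algorithm."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # Step (1): d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:          # Steps (2)/(5): convergence test
            break
        lam = line_search(f, grad, x, d)[0]   # Step (3): Wolfe-condition step size
        if lam is None:                       # fallback if the search fails
            d, lam = -g, 1e-4
        x_new = x + lam * d                   # Step (4): update the variables
        g_new = grad(x_new)
        beta = beta_mcg(g, g_new, x_new - x, f(x), f(x_new))  # Step (6): Eq. (22)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```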

5 THE CONVERGENCE ANALYSIS

This section establishes theoretical properties of the new CG method; we focus on the convergence behavior of the $\beta_k^{New}$ method. We make the following basic assumptions on the objective function.

Assumption (1):
(i) $f$ is bounded below in the level set $L_{x_0} = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$;
(ii) in some neighborhood $U$ of the level set $L_{x_0}$, $f$ is continuously differentiable and its gradient $\nabla f$ is Lipschitz continuous in $L_{x_0}$; namely, there exists a constant $L > 0$ such that

$$\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\| \quad \text{for all } x, y \in L_{x_0}. \tag{23}$$

5.1 Sufficient Descent Property

We will show in this section that the proposed algorithm, defined by equations (22) and (3), satisfies the sufficient descent property, which in turn supports the convergence property.

Theorem (1): The search directions $d_k$ generated by the proposed modified CG algorithm satisfy the descent property for all $k$ when the step size $\lambda_k$ satisfies the Wolfe conditions (4), (5).

Proof: We use induction to prove the descent property. For $k = 0$,

$$d_0 = -g_0 \;\Rightarrow\; d_0^T g_0 = -\|g_0\|^2 \le 0,$$

so the theorem is true for $k = 0$. We assume that the theorem is true for some $k$, i.e. $d_k^T g_k \le 0$, or equivalently $s_k^T g_k \le 0$ since $s_k = \lambda_k d_k$; we now prove that the theorem is true for $k + 1$. We have

$$d_{k+1} = -g_{k+1} + \beta_k^{New} d_k, \tag{24}$$

i.e.

$$d_{k+1} = -g_{k+1} + \left(1 - \frac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)}\right)\frac{y_k^T g_{k+1}}{\|g_k\|^2}\, d_k. \tag{25}$$

Multiplying both sides of equation (25) by $g_{k+1}^T$, we get

$$g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2 + \left(1 - \frac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)}\right)\frac{y_k^T g_{k+1}}{\|g_k\|^2}\, g_{k+1}^T d_k. \tag{26}$$

By the Wolfe condition (5) and the induction hypothesis,

$$d_k^T y_k \ge -(1 - \sigma)\, g_k^T d_k \ge 0. \tag{27}$$

Dividing both sides of (26) by $\|g_{k+1}\|^2$:

$$\frac{g_{k+1}^T d_{k+1}}{\|g_{k+1}\|^2} = -1 + \left(1 - \frac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)}\right)\frac{y_k^T g_{k+1}}{\|g_k\|^2}\cdot\frac{g_{k+1}^T d_k}{\|g_{k+1}\|^2}. \tag{28}$$


Since $y_k^T g_{k+1}\; g_{k+1}^T d_k \le y_k^T d_k\, \|g_{k+1}\|^2$ under the strong Wolfe line search, (28) gives

$$\frac{g_{k+1}^T d_{k+1}}{\|g_{k+1}\|^2} \le -1 + \left(1 - \frac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)}\right)\frac{y_k^T d_k}{\|g_k\|^2}. \tag{30}$$

Since

$$0 \le 1 - \frac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \le 1, \tag{31}$$

it follows that

$$\frac{g_{k+1}^T d_{k+1}}{\|g_{k+1}\|^2} \le -1 + \frac{y_k^T d_k}{\|g_k\|^2}. \tag{32}$$

By (27) and the Wolfe conditions,

$$\frac{y_k^T d_k}{\|g_k\|^2} \le \sigma < 1, \tag{33}$$

so that

$$\frac{g_{k+1}^T d_{k+1}}{\|g_{k+1}\|^2} \le -1 + \sigma, \tag{34}$$

i.e.

$$g_{k+1}^T d_{k+1} \le -(1 - \sigma)\,\|g_{k+1}\|^2. \tag{35}$$

Let

$$c = (1 - \sigma); \tag{36}$$

then

$$g_{k+1}^T d_{k+1} \le -c\,\|g_{k+1}\|^2 \tag{37}$$

for some positive constant $c > 0$. This condition has often been used to analyze the global convergence of conjugate gradient methods with inexact line search. ∎

5.2 Global Convergence Property

The conclusion of the following lemma is used to prove the global convergence of nonlinear conjugate gradient methods under the general Wolfe line search.

Lemma 1: Suppose that Assumption (1), parts (i) and (ii), holds, and consider any conjugate gradient method of the form (22) and (3), where $d_k$ is a descent direction and $\lambda_k$ is obtained by the strong Wolfe line search. If

$$\sum_{k \ge 1} \frac{1}{\|d_k\|^2} = \infty, \tag{38}$$

then

$$\liminf_{k \to \infty} \|g_k\| = 0. \tag{39}$$

For uniformly convex functions which satisfy the above assumptions, we can prove that the norm of $d_{k+1}$ given by (25) is bounded above. Assume that the function $f$ is uniformly convex, i.e. there exists a constant $\mu > 0$ such that, for all $x, y \in S$,

$$(g(x) - g(y))^T (x - y) \ge \mu\, \|x - y\|^2. \tag{40}$$

Using Lemma 1, the following result can be proved.

Theorem 2: Suppose that the assumptions (i) and (ii) hold. Consider the algorithm (3), (22). If $\|s_k\|$ tends to zero and there exist nonnegative constants $\eta_1$ and $\eta_2$ such that

$$\|g_k\|^2 \ge \eta_1\, \|s_k\|^2, \qquad \|g_{k+1}\|^2 \le \eta_2\, \|s_k\|^2, \tag{41}$$

and $f$ is a uniformly convex function, then

$$\liminf_{k \to \infty} \|g_k\| = 0. \tag{42}$$


Proof: From eq. (22) we have

$$\beta_k^{New} = \frac{1}{\|g_k\|^2}\left(1 - \frac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)}\right) y_k^T g_{k+1}.$$

From the Cauchy-Schwarz inequality we get

$$\beta_k^{New} \le \frac{1}{\|g_k\|^2}\left(1 + \frac{\|y_k\|\,\|s_k\|}{2(f_k - f_{k+1} + g_{k+1}^T s_k)}\right)\|y_k\|\,\|g_{k+1}\|. \tag{43}$$

But $\|y_k\| \le L\,\|s_k\|$ by the Lipschitz condition (23). Then

$$\beta_k^{New} \le \frac{1}{\|g_k\|^2}\left(1 + \frac{L\|s_k\|^2}{2(f_k - f_{k+1} + g_{k+1}^T s_k)}\right) L\,\|s_k\|\,\|g_{k+1}\|, \tag{44}$$

i.e.

$$\beta_k^{New} \le \left(1 + \frac{L\|s_k\|^2}{2(f_k - f_{k+1} + g_{k+1}^T s_k)}\right)\frac{L\,\|s_k\|\,\|g_{k+1}\|}{\|g_k\|^2}. \tag{45}$$

From (41),

$$\beta_k^{New} \le \left(1 + \frac{L\|s_k\|^2}{2(f_k - f_{k+1})}\right)\frac{L\,\|s_k\|\,\|g_{k+1}\|}{\eta_1\,\|s_k\|^2}. \tag{46}$$

Let $A = f_k - f_{k+1}$; by Theorem (1), $A \ge 0$. Then

$$\beta_k^{New} \le \left(1 + \frac{L\|s_k\|^2}{2A}\right)\frac{L\,\|g_{k+1}\|}{\eta_1\,\|s_k\|}, \tag{47}$$

and, since $\|g_{k+1}\|$ is bounded, for $\|s_k\|$ small enough

$$\beta_k^{New} \le \frac{L}{\eta_1\,\|s_k\|}. \tag{48}$$

Hence, from

$$d_{k+1} = -g_{k+1} + \beta_k^{New} s_k, \tag{49}$$

we get

$$\|d_{k+1}\| \le \|g_{k+1}\| + \frac{L}{\eta_1\,\|s_k\|}\,\|s_k\| \le \|g_{k+1}\| + \frac{L}{\eta_1}, \tag{50}$$

so that $\|d_{k+1}\|$ is bounded above. Therefore

$$\sum_{k \ge 1} \frac{1}{\|d_{k+1}\|^2} = \infty, \tag{51}$$

since, for $k$ large enough ($\|g_{k+1}\| \le \sqrt{\eta_2}\,\|s_k\| \to 0$),

$$\sum_{k \ge 1} \frac{1}{\|d_{k+1}\|^2} \ge \sum_{k \ge 1} \frac{1}{(L/\eta_1)^2} = \infty. \tag{52}$$

Condition (38) of Lemma 1 is therefore satisfied, and we conclude that $\liminf_{k \to \infty} \|g_k\| = 0$, which completes the proof. ∎

6 DRAGONFLY ALGORITHM

Dragonflies are one of the types of flying insects, of which there may be about 3000 species; dragonflies are predators, and some of them are accordingly called the devil's needle or the devil's arrow [4].

Fig. 1. a) The shape of a true dragonfly insect; b) the life cycle of dragonflies.

6.1 Dragonfly Algorithm

Swarm behavior follows three basic principles of exploration and exploitation:
• Separation: the constant avoidance of collision of individuals with other individuals in the neighborhood.
• Alignment: matching the speed of individuals with that of other individuals in the neighborhood.
• Cohesion: the tendency of individuals towards the center of the neighborhood block, as shown in Fig. (2).

Fig. 2. Primitive correction patterns among swarm individuals.

Two further behaviors are added: attraction towards a food source, and leaving away from the enemies. The above behaviors are modeled mathematically as follows. The principle of separation is calculated as:

$$S_i = -\sum_{j=1}^{N} (X - X_j).$$


The principle of alignment is calculated as follows:

$$A_i = \frac{\sum_{j=1}^{N} V_j}{N}, \tag{53}$$

where $V_j$ represents the velocity of the $j$-th adjacent individual.

The principle of cohesion is calculated mathematically as follows:

$$C_i = \frac{\sum_{j=1}^{N} X_j}{N} - X, \tag{54}$$

where $X$ represents the current individual's location, $X_j$ represents the location of the $j$-th adjacent individual, and $N$ is the number of adjacent individuals.

The principle of attraction to a food source is calculated as follows:

$$F_i = X^{+} - X, \tag{55}$$

where $X$ represents the individual's current location and $X^{+}$ represents the location of the food source.

Finally, the principle of leaving away from enemies is calculated as follows:

$$E_i = X^{-} + X, \tag{56}$$

where $X$ represents the individual's current location and $X^{-}$ represents the enemy's location.

It is assumed that the dragonflies' behavior is a combination of these five corrective patterns. To update the position of the artificial dragonflies in the search area and to simulate their movements, two vectors are taken into consideration, namely the step vector ($\Delta X$) and the location vector ($X$). The DA algorithm has been developed based on the PSO algorithm. The step vector shows the direction of motion of the dragonfly (note that the site-update model of the artificial dragonfly is defined in one dimension, but the proposed method extends to higher dimensions) and is defined as follows:

$$\Delta X_{t+1} = (s S_i + a A_i + c C_i + f F_i + e E_i) + w\, \Delta X_t, \tag{57}$$

where $s$ represents the separation weight and $S_i$ the separation of individual $i$; $a$ is the alignment weight and $A_i$ the alignment of individual $i$; $c$ is the cohesion weight and $C_i$ the cohesion of individual $i$; $f$ is the food factor and $F_i$ the food source of individual $i$; $e$ is the enemy factor and $E_i$ the enemy location for individual $i$; $w$ is the inertia weight; and $t$ is the iteration counter.

After calculating the step vector, the location vector is calculated as follows:

$$X_{t+1} = X_t + \Delta X_{t+1}, \tag{58}$$

where $t$ is the current iteration.

To improve the randomness, random behavior, and exploration of the artificial dragonflies, the swarm is required to fly around the search space using the Levy flight method when there are no adjacent solutions. In this case, the dragonfly's location is updated with the following formula:

$$X_{t+1} = X_t + \mathrm{Levy}(d) \times X_t, \tag{59}$$

where $t$ is the current iteration and $d$ is the dimension of the location vector. The Levy flight is calculated as follows:

$$\mathrm{Levy}(x) = 0.01 \times \frac{r_1 \times \sigma}{|r_2|^{1/\beta}}, \tag{60}$$

where $r_1, r_2$ are random numbers in $[0, 1]$, $\beta$ is a constant (equal to 1.5), and $\sigma$ is calculated as follows:

$$\sigma = \left(\frac{\Gamma(1+\beta) \times \sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{(\beta-1)/2}}\right)^{1/\beta}, \tag{61}$$

where $\Gamma(x) = (x - 1)!$.

6.2 The Steps of the Dragonfly Algorithm

The steps of the dragonfly algorithm (DA) can be summarized as below:

Step (1): Initialize the dragonfly community $X_i$ $(i = 1, 2, \ldots, n)$.
Step (2): Initialize the step vectors $\Delta X_i$ $(i = 1, 2, \ldots, n)$.
Step (3): While the stopping condition is not met (reaching max-iter), repeat Steps (4)-(11).
Step (4): Calculate the objective function value for all dragonflies.
Step (5): Update the food source and the enemy according to Eqs. (55), (56).
Step (6): Update the values of $w, s, a, c, f, e$.
Step (7): Calculate the values of $S, A, C, F, E$ using the separation formula and Eqs. (53)-(56).
Step (8): Update the radius of the neighborhood.
Step (9): If the dragonfly has at least one neighboring dragonfly, update the step (velocity) vector using equation (57) and the location vector using equation (58); otherwise go to Step (10).
Step (10): Otherwise, update the location vector using equation (59).
Step (11): Verify and correct the new locations based on the variable limits, and finish.
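As a concrete illustration, one dragonfly's update per Eqs. (53)-(61) can be sketched as below. This is a minimal NumPy sketch; the function names, the neighbour bookkeeping, and the use of step vectors as the neighbours' velocities are our assumptions, not the paper's:

```python
import numpy as np
from math import gamma, sin, pi

def levy(dim, beta=1.5):
    """Levy flight of Eqs. (60)-(61)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    r1, r2 = np.random.rand(dim), np.random.rand(dim)
    return 0.01 * (r1 * sigma) / np.abs(r2) ** (1 / beta)

def da_step(X, dX, nbr_pos, nbr_vel, food, enemy, w, s, a, c, f, e):
    """One dragonfly update following Eqs. (53)-(59)."""
    if len(nbr_pos) > 0:
        S = -np.sum(X - nbr_pos, axis=0)      # separation
        A = np.mean(nbr_vel, axis=0)          # alignment, Eq. (53)
        C = np.mean(nbr_pos, axis=0) - X      # cohesion, Eq. (54)
        F = food - X                          # food attraction, Eq. (55)
        E = enemy + X                         # enemy distraction, Eq. (56)
        dX = s * S + a * A + c * C + f * F + e * E + w * dX   # Eq. (57)
        X = X + dX                            # Eq. (58)
    else:
        X = X + levy(len(X)) * X              # Levy flight, Eq. (59)
        dX = np.zeros_like(X)                 # reset the step vector (our choice)
    return X, dX
```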
7 PROPOSED HYBRID ALGORITHM

In this section, a new hybrid method is proposed to solve the optimization problem, as shown in the following flow chart:


Fig. 3. Flowchart of the proposed algorithm (DA-MCG): Start → prepare the initial values for the vectors w, s, a, c, f, e → MCG algorithm → calculate the goal function and the values S, A, C, F, E → update the velocity vector and the location vector → choose the new location → iter = iter + 1 → if iter = max-iter, End.

The resulting hybrid algorithm is called DA-MCG; its steps are summarized in the flow chart of Fig. 3.
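Since the paper specifies the hybrid only through this flow chart, the following Python sketch is our reading of it: an MCG refinement of the randomly generated initial society, followed by the standard DA loop. The helpers `mcg` and `da_step` refer to the sketches given earlier; the population size, bounds, and weight schedule are illustrative assumptions:

```python
import numpy as np

def da_mcg(obj, grad, dim, n_agents=15, max_iter=500, lb=-100.0, ub=100.0):
    """Sketch of the DA-MCG hybrid, under our reading of Fig. 3."""
    pop = np.random.uniform(lb, ub, (n_agents, dim))              # random initial society
    pop = np.array([mcg(obj, grad, x, max_iter=5) for x in pop])  # MCG refinement
    steps = np.zeros_like(pop)
    for t in range(max_iter):
        fit = np.array([obj(x) for x in pop])
        food = pop[fit.argmin()].copy()        # best solution so far
        enemy = pop[fit.argmax()].copy()       # worst solution so far
        # linearly decaying inertia and fixed behaviour weights (illustrative values)
        w = 0.9 - 0.5 * t / max_iter
        s, a, c, f, e = 0.1, 0.1, 0.7, 1.0, 1.0
        for i in range(n_agents):
            others = np.delete(np.arange(n_agents), i)   # all others as neighbours
            pop[i], steps[i] = da_step(pop[i], steps[i], pop[others], steps[others],
                                       food, enemy, w, s, a, c, f, e)
            pop[i] = np.clip(pop[i], lb, ub)   # Step (11): respect the variable limits
    return food
```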

8 NUMERICAL RESULTS

For the purpose of evaluating the performance of the proposed algorithm in solving optimization problems, the proposed DA-MCG algorithm was tested using ten standard functions and compared with the dragonfly algorithm itself. Table (1) shows the details of the test functions. The stopping condition is reaching the minimum value of the function, and the maximum number of iterations in all runs is 500.

Table 1: Details of the test functions.

Function | Dim | Range | $F_{min}$
$F_1(x) = \sum_{i=1}^{n} x_i^2$ | 30 | $[-100, 100]$ | 0
$F_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | $[-10, 10]$ | 0
$F_3(x) = \sum_{i=1}^{n} \left(\sum_{j=1}^{i} x_j\right)^2$ | 30 | $[-100, 100]$ | 0
$F_4(x) = \max_i \{|x_i|,\ 1 \le i \le n\}$ | 30 | $[-100, 100]$ | 0
$F_5(x) = \sum_{i=1}^{n-1} \left[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$ | 30 | $[-30, 30]$ | 0
$F_6(x) = \sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | 30 | $[-5.12, 5.12]$ | 0
$F_7(x) = -20\exp\!\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\!\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$ | 30 | $[-32, 32]$ | 0
$F_8(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 30 | $[-600, 600]$ | 0
$F_9(x) = \sum_{i=1}^{11}\left[a_i - \frac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4}\right]^2$ | 4 | $[-5, 5]$ | 0.00030
$F_{10}(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | 2 | $[-5, 5]$ | $-1.0316$
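For concreteness, three of the functions in Table 1 are transcribed into NumPy below (our transcription; gradients, where needed by the MCG step, can be obtained analytically or by finite differences):

```python
import numpy as np

def f1(x):       # F1: sphere function
    return np.sum(x ** 2)

def f6(x):       # F6: Rastrigin function
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def f7(x):       # F7: Ackley function
    n = len(x)
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)
```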


Tables (2), (3), and (4) show the results of the DA-MCG algorithm compared with the results of the DA algorithm. They show the success of the proposed DA-MCG algorithm in improving the results on most of the standard high-dimensional test functions, which confirms the success of the hybridization process.

Table 2: Comparison of the results of DA and DA-MCG using a population of 15 elements and 500 iterations.

Func. | DA | DA-MCG
F1 | 0.0422 | 2.88401300000e-198
F2 | 0.0093 | 4.0600000000e-100
F3 | 0.0851 | 3.357760000e-198
F4 | 0.0260 | 1.71242000000e-99
F5 | 4.2346 | 3
F6 | 2.2909 | 0
F7 | 0.3821 | 8.8818e-16
F8 | 0.2228 | 0
F9 | 0.0043 | 0.14841
F10 | -1.0316 | -1.6494000278e-52

Table 3: Comparison of the results of DA and DA-MCG using a population of 20 elements and 500 iterations.

Func. | DA | DA-MCG
F1 | 3.6151e-05 | 6.868900000e-198
F2 | 2.0277e-08 | 3.2190710000e-100
F3 | 0.18831746555 | 3.336236666e-198
F4 | 1.8951e-05 | 1.98945400000e-99
F5 | 3.55604000000 | 3
F6 | 4.08238090909 | 0
F7 | 3.680815e-08 | 8.8818e-16
F8 | 0.3106550 | 0
F9 | 0.00594456875 | 0.14841
F10 | -1.0316 | -1.5265e-103

Table 4: Comparison of the results of DA and DA-MCG using a population of 30 elements and 500 iterations.

Func. | DA | DA-MCG
F1 | 7.9415e-06 | 2.583248000000000e-198
F2 | 3.5461e-06 | 2.647690000000000e-100
F3 | 5.4455e-07 | 2.566200000000000e-198
F4 | 4.67e-06 | 1.164059000000000e-99
F5 | 59.463940909 | 3
F6 | 1.9899 | 0
F7 | 2.552e-08 | 8.8818e-16
F8 | 0 | 0
F9 | 0.0016 | 0.14841
F10 | -1.0316 | -2.122295805409e-43

The tests were run on a laptop with the following characteristics: a 2.70 GHz processor, 8 GB of memory, and MATLAB R2014a running on Windows 8.

9 CONCLUSIONS

The hybridization of heuristic algorithms with one of the modified classical algorithms contributed to improving their performance by increasing the speed of convergence, and also led to an improvement in the quality of the resulting solutions by increasing their exploratory and exploitative capabilities. The numerical results showed the ability of the hybrid algorithm to solve various optimization problems. The results of the DA-MCG algorithm were compared with those of the dragonfly algorithm itself, the DA, and the comparison yielded encouraging results.

10 REFERENCES

[1] J. Nocedal and S. J. Wright, "Numerical Optimization", 2nd ed., Springer, 2006.
[2] P. Moallem, S. A. Monadjemi, B. Mirzaeian and M. Ashourian, "A novel fast backpropagation learning algorithm using parallel tangent and heuristic line search", Proceedings of the 10th WSEAS International Conference on Computers,
[3] World Scientific and Engineering Academy and Society (WSEAS), 2006, pp. 634-639.
[4] Meng, X., Liu, Y., Gao, X., & Zhang, H. (2014). A new bio-inspired algorithm: chicken swarm optimization. International Conference in Swarm Intelligence, pp. 86-94.


[5] Reddy, P. D. P., Reddy, V. V., & Manohar, T. G. (2017). Whale optimization algorithm for optimal sizing of renewable resources for loss reduction in distribution systems. Renewables: Wind, Water, and Solar, Vol. 4(1), pp. 1-13.
[6] Bashishtha, T. K., & Srivastava, L. (2016). Nature inspired meta-heuristic dragonfly algorithms for solving optimal power flow problem. International Journal of Electronics, Electrical and Computational System, Vol. 5(5), pp. 111-120.
[7] Pathania, A. K., Mehta, S., & Rza, C. (2016). Multi-objective dispatch of thermal system using dragonfly algorithm. International Journal of Engineering Research, Vol. 5(11), pp. 861-866.
[8] Al-Bayati, A. Y. and Al-Assady, N. H. (1986). "Conjugate gradient method", Technical Report, School of Computer Studies, Leeds University.