A Weighted Mean Variant of the Grey Wolf Optimizer with an Exponential Decay Function

Alok Kumar1, Avjeet Singh2, Lekhraj3, Anoj Kumar4

1,2,3,4 Motilal Nehru National Institute of Technology, Allahabad, Computer Science and Engineering Department, {alokkumar, 2016rcs01, lekhraj, anojk}@mnnit.ac.in1,2,3,4

Abstract. Nature-inspired meta-heuristic algorithms are optimization algorithms that have grown popular among researchers over the last two decades thanks to key features such as diversity, simplicity, a proper balance between exploration and exploitation, a high convergence rate, avoidance of stagnation, and flexibility. Many such algorithms are employed across research areas to solve complex problems, whether single-objective or multi-objective in nature. The Grey Wolf Optimizer (GWO) is a powerful and widely used recent meta-heuristic that mimics the leadership hierarchy, a property that distinguishes it from other algorithms, as well as the hunting behavior of grey wolves found in Eurasia and North America. In the simulation, alpha, beta, delta, and omega form the four levels of the hierarchy, with alpha the most powerful wolf and leader of the group, and so forth. No algorithm is perfect or fully appropriate for every problem, so replacement, addition, and elimination of components are required to improve the performance of each algorithm. This work therefore proposes a new variant of GWO, the Weighted Mean GWO (WMGWO), with an exponential decay function, to improve the performance of standard GWO and of its many variants. The performance of the proposed variant is evaluated on standard benchmark functions. In addition, the proposed variant has been applied to classification datasets and function approximation datasets, and the obtained results are best in most of the cases.

Keywords: GA, GP, ES, ACO, PSO, GWO, Exploitation, Exploration, Meta-heuristics.

1 Introduction

Heuristic algorithms face several limitations: they may get stuck in local optima, produce a limited number of solutions, and are problem dependent. To overcome such issues, meta-heuristic algorithms come into the picture and play an important role in improving performance while remaining simple for researchers. Nature-inspired meta-heuristic algorithms draw their inspiration from nature and follow a teaching-learning process among the elements of a group. They can be classified into four categories, as shown in Figure 1: evolutionary algorithms, swarm-based algorithms, physics-based algorithms, and biologically inspired algorithms. Algorithms such as the Genetic Algorithm (GA), Genetic Programming (GP), and Evolution Strategy (ES) fall under evolutionary algorithms, while Ant Colony Optimization (ACO), the Bat algorithm, Particle Swarm Optimization (PSO), and the Grey Wolf Optimizer (GWO) fall under swarm-based algorithms. This paper focuses on improving the performance of the Grey Wolf Optimizer, a swarm-based algorithm. A homogeneous and large group of birds or animals is known as a swarm, and an algorithm built on the intelligence of a swarm is considered a swarm intelligence algorithm. GWO is a swarm intelligence algorithm; grey wolves, whose scientific name is Canis lupus, were the inspiration for it.

Fig. 1. Classification of nature-inspired meta-heuristic algorithms

The Genetic Algorithm (GA) [1-2] is an optimization algorithm introduced by John Holland in the 1960s. It follows Darwin's theory of natural selection and evolution regarding survival of the fittest, i.e., it eliminates from the environment those elements or species that do not survive or fit. Holland's student David E. Goldberg further extended the GA in 1989. It starts with a random population of solutions and applies bio-inspired operators such as selection, crossover, and mutation recursively until the desired output is obtained; the next generation thus has a greater chance of containing the best solution than the present one. The crossover and mutation operators of GA realize the exploration and exploitation properties of an optimization technique. GA belongs to the evolutionary class of nature-inspired meta-heuristic optimization and has been employed in many research areas to solve complex problems; the problems of image segmentation and image classification, which are research domains of image processing, have been solved with GA.
Genetic Programming (GP) [3-4] is a subclass of evolutionary algorithms based on evolution theory. It was introduced by John Koza in 1992 and applies reproduction, crossover, and mutation operators initially, followed by architecture-altering operations at the end. GP is an extension of GA and a domain-independent method, and it has likewise been employed to solve image segmentation and image classification problems in image processing. GP can exploit complex, variable-length representations that use various kinds of operators to combine the inputs in linear or non-linear form, which is suitable for constructing new features.
Evolution Strategy (ES) [5-8] also belongs to the evolutionary class of nature-inspired meta-heuristic optimization. It was introduced in the early 1960s by Ingo Rechenberg, Hans-Paul Schwefel, and Bienert, was further developed in the 1970s, and is based on evolution theory. Mutation and recombination operators drive the evolutionary process to obtain better results in each generation. The (1+1) [6], (1+λ), and (1, λ) [7] schemes are categories of ES used to select the parents. ES has been employed in many domains, including the image segmentation and medical imaging problems of image processing.
Ant Colony Optimization (ACO) [9-11] is a subclass of swarm-based algorithms built on the concept of swarm intelligence; the way ants search for food is the inspiration for this algorithm. It was initially proposed by Marco Dorigo in 1992 in his Ph.D. thesis. Ants have the ability to find the shortest possible path from their nest to a food source. To find the optimal path, ants deposit pheromone (a special type of chemical) for indirect communication between them. This meta-heuristic is commonly employed to solve graph-type problems; the image classification problem, which comes under image processing, has also been solved by ACO.
The Bat algorithm [12] is another swarm-based meta-heuristic, inspired by the echolocation behavior of microbats, and was developed by Xin-She Yang in 2010. Some bats have developed a highly acute sense of hearing: they emit sound pulses along their path and listen to the echoes that return. Simplicity and flexibility are the main advantages of this algorithm, and it is very easy to design. It has been employed in many research areas to solve complex problems; the image compression problem of image processing has been solved with the Bat algorithm.
Particle Swarm Optimization (PSO) [13] was proposed and designed by Kennedy and Eberhart in 1995. It simulates a swarm (group of particles) based on the social behavior of species such as fish schooling (in biological vocabulary, a group of fish that stays together for social reasons is shoaling; if the group swims in the same direction in a unified manner, for example while hunting, that is schooling) and bird flocking (an assembly of similar animals traveling, foraging, or moving together). PSO is guided by only two solutions, PBEST (particle best, or personal best) and GBEST (global best): the best solution an individual particle has found over the course of the generations is its personal best, and the best among all personal bests is the global best. Velocity and position vectors form the mathematical model that generates optimal results. PSO has been employed in many research areas to solve complex problems; the image segmentation and medical imaging problems of image processing have been solved with it.
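The PBEST/GBEST scheme described above can be sketched in a few lines. This is a minimal illustration, not an implementation from [13]; the coefficients w, c1, and c2 are common illustrative defaults, and all function and parameter names are assumptions.

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0)):
    # Minimal PSO sketch: each particle tracks its personal best (PBEST),
    # and the swarm tracks the global best (GBEST).
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed)
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]                      # personal best positions
    pbest_val = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive (PBEST) + social (GBEST)
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
            val = f(x[i])
            if val < pbest_val[i]:                 # improve personal best
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:                # improve global best
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

For instance, minimizing the Sphere function in two dimensions with this sketch quickly drives the global best value toward zero.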


2 Literature Review

This section gives a brief literature review of the variants of the Grey Wolf Optimizer and their applications in different research domains. Al-Aboody et al. [14] devised a three-level clustered routing protocol using GWO for wireless sensor networks to increase performance and stability. The procedure is completed in three phases: in the first level, centralized selection helps in finding the cluster heads from the base level; in the second level, routing for data transfer is done, where the nodes select the best route to the base station to consume less energy; and in the third and last level, distributed clustering is introduced. The algorithm was evaluated in terms of the network's lifetime, stability, and energy efficiency, and showed improved lifetime stability and more residual energy; it performs better than LEACH in terms of network lifetime.
Partial discharge (PD) leading to insulation degradation in the insulation systems of transformers is the major cause of their deterioration. Dudani and Chudasama [15] adopted a sensor-based acoustic emission technique for PD detection along with an Adaptive GWO (AGWO) for localization of the PD source. A new randomization adaptive technique gives the AGWO algorithm faster convergence and less parameter dependency in reaching the global optimal solution. The approach was applied to unconstrained benchmark test functions to check its performance and to locate the optimum PD location in the transformer; the outcomes show AGWO to be superior to other optimization algorithms and to electrical and chemical detection methods.
Jayabarathi et al. [16] presented notable research on the application of a grey wolf optimizer to solve economic dispatch problems that are non-linear, non-convex, and discontinuous in nature, with various constraints. The algorithm includes crossover and mutation, hybridizing GWO to increase its performance by giving a lower final cost and good convergence; it is named Hybrid GWO. Four dispatch problems with prohibited operating zones, valve-point effects, and ramp-rate limits were solved using this algorithm under the assumption of no transmission loss, and it was then compared with several algorithms to check its competitive performance. The results reveal that this algorithm works better and has a lower cost.
Jitkongchuen [17] proposed an alternative approach for improving the performance of standard differential evolution (DE). It uses new mutation schemes in which the controlling parameters are self-adapted based on feedback from the evolutionary search, and GWO is applied in the crossover to increase the quality of the solution. The experimental results show that this method is competitive with the PSO, jDE, and DE algorithms. The proposed algorithm was also tested on nine standard benchmarks and found to be more productive in finding solutions to complex problems.
To optimize image histograms and perform multilevel image segmentation, Li et al. [18] proposed an algorithm known as the Modified Discrete GWO (MDGWO). MDGWO is adopted for multilevel thresholding, as it improves the location selection mechanism of α, β, and δ during hunting; it also uses a weight coefficient to optimize the final position (best threshold) of the prey, with Kapur's entropy as the objective function. The algorithm was tested on standard images such as Lena, Cameraman, Baboon, Butterfly, and Starfish. The experimental results demonstrate that MDGWO quickly finds optimal thresholds that are very near to those obtained by exhaustive search, and that it is superior to Electromagnetism Optimization (EMO), DE, ABC, and the classical GWO, yielding better image segmentation quality, objective function values, and stability.
In further work, Li et al. [19] proposed an algorithm to handle the multilevel image thresholding problem leading to image segmentation. MDGWO is adopted to optimize fuzzy Kapur's entropy, chosen as the objective function, to obtain a set of thresholds; it initializes pseudo-trapezoid-shaped fuzzy membership functions, and segmentation is finally achieved through local information aggregation. The algorithm, named FMDGWO when applied with fuzzy entropy, was verified on a set of benchmark images picked from the Berkeley Segmentation Data Set and Benchmarks 500. FMDGWO yields improved PSNR and objective function values, outperforms EMO, MDGWO, and FDE (fuzzy entropy based DE), and produces a higher level of segmentation with more stability.
Meta-heuristic algorithms keep gaining popularity by solving complex and NP-hard problems that cannot be solved exactly in reasonable time. GWO is an illustrious, renowned, and recent swarm intelligence algorithm. The No Free Lunch (NFL) theorem [20] has logically proved that no meta-heuristic algorithm is best suited for all optimization problems. Hence, new variants are continually proposed to overcome related issues and to solve various kinds of real-life problems. This article proposes such a variant of GWO, the Weighted Mean GWO (WMGWO), with an exponential decay function.

3 Grey Wolf Optimizer and its variants

The Grey Wolf Optimizer (GWO) is a renowned recent meta-heuristic optimization algorithm developed by Mirjalili et al. [21]. GWO is a swarm intelligence algorithm; grey wolves, whose scientific name is Canis lupus, were the inspiration for it. To perform the hunting operation, grey wolves follow a special type of social hierarchy and hunting behavior. The social hierarchy has four levels, from level 1 to level 4, each holding a category of the population. The social hierarchy of the wolves is depicted in Figure 2.

Fig. 2. Social hierarchy of grey wolves [21]


The leader, the apex wolf of the hierarchy, is called the alpha (α), holds level 1, and may be male or female. Decisions such as hunting and the selection of a sleeping place are taken by the leader of the group. Interestingly, all wolves of the group acknowledge the leader by holding down their tails. The betas (β) are the advisors of the alpha and occupy the second level of the hierarchy; they are subordinate wolves and discipliners for the pack. They help the alpha in decision making, ensure that the orders given by the leader are followed by all subordinates, and give feedback to the leader. The deltas (δ) are subordinates at level three of the hierarchy. They perform many duties for the pack and are categorized into classes according to their duty: scouts (responsible for observing the boundary), elders (old wolves retired from the post of alpha or beta), caretakers (caring for ill, weak, and wounded wolves), hunters (helping the alpha and beta in the hunting process), and sentinels (protecting the pack). The omegas stay at the last, fourth level of the hierarchy and are the lowest-ranking wolves; they act as scapegoats and are the last allowed to eat. Leadership and decision-making power flow down from top to bottom.
To create a proper balance between exploration and exploitation, Mittal et al. [22] proposed a modified GWO (mGWO) algorithm. The modification involves an exponential decay function (equation 2) to balance exploration and exploitation in the search space over the course of the iterations. The clustering problem in WSNs is also illustrated, in which mGWO is adopted for cluster head (CH) selection. For simulation, many benchmark functions are selected, such as Rastrigin's, Weierstrass's, Griewank's, Ackley's, and the Sphere function. According to the obtained outcomes, the proposed method, thanks to its rapid convergence and fewer possibilities of getting stuck at local minima, is advantageous for real-world applications. When compared with other existing meta-heuristic algorithms (GA, PSO, BA, and CS) and traditional GWO, mGWO yielded better results and has the potential to solve real-world optimization problems.
One more modified variant of GWO (MVGWO) was proposed by Singh [23], which enhances the leadership hierarchy of grey wolves by adding one more level of wolves, gamma (γ) (hierarchical levels: alpha, beta, gamma, delta, and omega), simulated in the hunting behavior; a mean operator variable (μ) obliges the wolves to encircle and attack the prey, assisting their position updates by modifying the corresponding equations. Twenty-three well-known classical benchmark functions were employed to check the performance of this variant, and it was applied to a sine dataset and the cantilever beam design function. The variant was compared against related algorithms such as the Convex Linearization method (CONLIN), the Method of Moving Asymptotes (MMA), Symbiotic Organisms Search (SOS), CS, and the Grid-based Clustering Algorithms I and II (GCA-I and GCA-II) in finding the optimal solution.
In classical GWO, the value of a decreases linearly from 2 to 0 according to

a = 2 * (1 - t/T)    (1)

where t indicates the current iteration and T is the maximum number of iterations in the implementation of standard GWO [22]. To build the proposed variant, named Weighted Mean GWO (WMGWO), mGWO [22] is employed together with a weighted mean function, as described in equation 3. In mGWO, an exponential function is used to calculate the value of a instead of the linear function above:

a = 2 * (1 - t^2/T^2)    (2)

This exponential function spends roughly 70% of the iterations on exploration and 30% on exploitation to balance the two.
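The exponential decay of a (equation 2) combined with the weighted mean of the three leaders (equation 3) can be sketched as follows. This is a minimal Python illustration of the update loop, not the authors' MATLAB implementation; the encircling equations follow standard GWO [21], and all function and parameter names are assumptions.

```python
import random

def wmgwo(f, dim, n_wolves=30, iters=300, bounds=(-100.0, 100.0)):
    # Sketch of WMGWO: exponential decay of a (equation 2) plus the
    # weighted mean of the three leaders (C1=0.54, C2=0.30, C3=0.16).
    lo, hi = bounds
    wolves = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)                        # rank the pack by fitness
        alpha, beta, delta = (w[:] for w in wolves[:3])
        a = 2.0 * (1.0 - t**2 / iters**2)         # equation 2: exponential decay
        for i in range(n_wolves):
            for d in range(dim):
                x = []
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A, C = 2.0 * a * r1 - a, 2.0 * r2
                    D = abs(C * leader[d] - wolves[i][d])
                    x.append(leader[d] - A * D)   # standard GWO encircling step
                # equation 3: weighted mean replaces the simple average of X1..X3
                xp = 0.54 * x[0] + 0.30 * x[1] + 0.16 * x[2]
                wolves[i][d] = min(max(xp, lo), hi)
    wolves.sort(key=f)
    return wolves[0], f(wolves[0])
```

On a simple unimodal test such as the Sphere function, the decaying a shifts the pack from wide exploration early on to tight exploitation around the alpha near the end of the run.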
In the fourth step of the proposed algorithm, the next position of the prey, i.e., the candidate optimal solution, is evaluated with the following equation:

Xp = C1*X1 + C2*X2 + C3*X3    (3)

Here, alpha is the most powerful search agent in the population or hierarchy and therefore carries the maximum weight (C1 = 0.54) in finding the optimal value; beta is the second most powerful search agent and carries a medium weight (C2 = 0.3); and delta, held at the last level of the hierarchy considered, carries the lowest weight (C3 = 0.16).

4 Simulation Environment

The GWO, mGWO, MVGWO, and WMGWO meta-heuristic algorithms are coded and run on MATLAB R2017a with 12 GB RAM and an Intel(R) Core(TM) i7-4770 CPU @ 3.40 GHz.

5 Results and Discussion

This section uses a test bed of 23 standard benchmark functions (F1-F23) to check the performance of the proposed variant. All considered functions are taken from CEC 2005 [24]. Tables of the unimodal, multimodal, and fixed-dimension multimodal benchmark functions are listed in [24]; all are minimization functions, where "Function" indicates the function's number in the list, Dim indicates its dimensionality, Range indicates the boundary of its search space, and fmin indicates its optimum value. The unimodal functions (F1-F7) contain a single optimum and serve to analyze exploitation. In contrast, the multimodal functions (F8-F13) contain many local optima and serve to analyze exploration. To simulate the classical algorithm (GWO), its variants (mGWO and MVGWO), and the proposed work (WMGWO), 30 search agents and a maximum of 300 iterations are used. To improve accuracy and check performance in the presence of randomness, every algorithm is repeated 30 times, and the obtained results are shown in Tables 1, 2, and 3. Average (Avg.) and standard deviation (Std.) are the evaluated parameters, and bold values show the best result for a particular function.
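For concreteness, a representative unimodal function (the Sphere function, F1) and a representative multimodal function (Rastrigin's function) can be written as below. These follow the standard textbook definitions, not formulas copied from [24]; both have their global minimum fmin = 0 at the origin.

```python
import math

def sphere(x):
    # Unimodal: a single optimum at the origin, used to test exploitation.
    return sum(xi * xi for xi in x)

def rastrigin(x):
    # Multimodal: many regularly spaced local optima, used to test exploration.
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)
```

An optimizer that scores well on sphere but poorly on rastrigin is exploiting effectively but exploring too little, which is exactly the trade-off the decay schedule of a is meant to manage.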

6 Real-World Dataset Problems

To apply the proposed variant to real-world applications, the XOR, Balloon, Cancer, and Heart datasets are considered from the classification datasets. In addition, the Sigmoid, Cosine, and Sine datasets are considered from the function approximation datasets. To simulate the application datasets, 200 search agents and a maximum of 200 iterations are used. The dimensionality of the XOR dataset (abbreviated DXOR) is 36; of the Balloon dataset (DBAL), 55; of the Cancer dataset (DCAN), 209; of the Heart dataset (DH), 1081; and of the Sigmoid (DSIG), Cosine (DCOS), and Sine (DSIN) datasets, 46 each. The rest of the details about the datasets and other information can be taken from [7].

Table 4 presents the results on the different datasets in terms of Best_score and Classification Rate (%), which are considered the evaluation parameters for the classification datasets; bold values depict the best result.

Table 4: Best_score and Classification Rate of Classification Dataset

Table 5 similarly presents the results on the different datasets in terms of Best_score and Test_error, which are considered the evaluation parameters for the function approximation datasets; bold values again depict the best result.

Table 5: Best_score and Test_error of Function Approximation Dataset

7 Conclusion

Meta-heuristic algorithms have become popular over the last two decades thanks to their strength and ability to solve complex and NP-hard problems. GWO is an illustrious, renowned, and recent swarm intelligence algorithm. The No Free Lunch theorem states that no single algorithm exists that solves all kinds of problems and satisfies all related conditions; hence, new variants are continually proposed to overcome related issues and to solve various kinds of real-life problems. This article has proposed a variant of GWO, the Weighted Mean GWO (WMGWO), with an exponential decay function. To check the performance of the proposed variant, a test bed of 23 benchmark functions was employed, and the obtained results were compared with those of standard GWO and its other variants, such as mGWO and MVGWO. The results on the unimodal and multimodal benchmark functions show that the proposed variant works properly, i.e., it competes with the other algorithms, and it also provides competitive results on the fixed-dimension multimodal benchmark functions. The proposed variant likewise gives better and comparable results on the classification and function approximation datasets than the standard algorithm and the other variants. As future work, the proposed model will be simulated on higher dimensions, compared with other swarm and meta-heuristic algorithms, and evaluated on the CEC 2017 benchmark functions.

References

1. Holland, J. H. (1992). Genetic algorithms. Scientific American, 267(1), 66-73.
2. Davis, L. (1991). Handbook of Genetic Algorithms.
3. Koza, J. R. (2010). Human-competitive results produced by genetic programming. Genetic Programming and Evolvable Machines, 11(3-4), 251-284.
4. Kinnear, K. E., Langdon, W. B., Spector, L., Angeline, P. J., & O'Reilly, U. M. (Eds.). (1999). Advances in Genetic Programming (Vol. 3). MIT Press.
5. Hansen, N., & Kern, S. (2004, September). Evaluating the CMA evolution strategy on multimodal test functions. In International Conference on Parallel Problem Solving from Nature (pp. 282-291). Springer, Berlin, Heidelberg.
6. Jagerskupper, J. (2006). How the (1+1) ES using isotropic mutations minimizes positive definite quadratic forms. Theoretical Computer Science, 361(1), 38-56.
7. Auger, A. (2005). Convergence results for the (1, λ)-SA-ES using the theory of ϕ-irreducible Markov chains. Theoretical Computer Science, 334(1-3), 35-69.
8. Back, T., Hoffmeister, F., & Schwefel, H.-P. (1991). A survey of evolution strategies. In Proceedings of the Fourth International Conference on Genetic Algorithms.
9. Dorigo, M., Maniezzo, V., & Colorni, A. (1996). Ant system: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 26(1), 29-41.
10. Parsons, S. (2005). Ant Colony Optimization by Marco Dorigo and Thomas Stutzle, MIT Press. The Knowledge Engineering Review, 20(1), 92-93.
11. Colorni, A., Dorigo, M., & Maniezzo, V. (1992, December). Distributed optimization by ant colonies. In Proceedings of the First European Conference on Artificial Life (Vol. 142, pp. 134-142).


12. Yang, X. S. (2010). A new bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010) (pp. 65-74). Springer, Berlin, Heidelberg.
13. Eberhart, R., & Kennedy, J. (1995, November). Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks (Vol. 4, pp. 1942-1948).
14. Al-Aboody, N. A., & Al-Raweshidy, H. S. (2016, September). Grey wolf optimization-based energy-efficient routing protocol for heterogeneous wireless sensor networks. In 2016 4th International Symposium on Computational and Business Intelligence (ISCBI) (pp. 101-107). IEEE.
15. Dudani, K., & Chudasama, A. R. (2016). Partial discharge detection in transformer using adaptive grey wolf optimizer based acoustic emission technique. Cogent Engineering, 3(1), 1256083.
16. Jayabarathi, T., Raghunathan, T., Adarsh, B. R., & Suganthan, P. N. (2016). Economic dispatch using hybrid grey wolf optimizer. Energy, 111, 630-641.
17. Jitkongchuen, D. (2015, October). A hybrid differential evolution with grey wolf optimizer for continuous global optimization. In 2015 7th International Conference on Information Technology and Electrical Engineering (ICITEE) (pp. 51-54). IEEE.
18. Li, L., Sun, L., Guo, J., Qi, J., Xu, B., & Li, S. (2017). Modified discrete grey wolf optimizer algorithm for multilevel image thresholding. Computational Intelligence and Neuroscience, 2017.
19. Li, L., Sun, L., Kang, W., Guo, J., Han, C., & Li, S. (2016). Fuzzy multilevel image thresholding based on modified discrete grey wolf optimizer and local information aggregation. IEEE Access, 4, 6438-6450.
20. Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67-82.
21. Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46-61.
22. Mittal, N., Singh, U., & Sohi, B. S. (2016). Modified grey wolf optimizer for global engineering optimization. Applied Computational Intelligence and Soft Computing, 2016, 8.
23. Singh, N. (2018). A modified variant of grey wolf optimizer. Scientia Iranica: International Journal of Science & Technology. http://scientiairanica.sharif.edu.
24. Liang, J., Suganthan, P., & Deb, K. (2005). Novel composition test functions for numerical global optimization. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium (SIS 2005) (pp. 68-75).