A Novel Immune Optimization with Shuffled Frog Leaping - A Parallel Approach (P-AISFLA)

Suresh Chittineni, ANS Pradeep, Dinesh Godavarthi, Suresh Chandra Satapathy
Anil Neerukonda Institute of Technology and Sciences, Sangivalasa, Vishakapatnam, Andhra Pradesh, India
E-mail: {sureshchittineni, pradeep.6174, dinesh.coolguy222, sureshsatapathy}@gmail.com

Abstract: The Shuffled Frog Leaping Algorithm (SFLA) is a new memetic, local-search, population-based, parameter-free, meta-heuristic algorithm that has emerged as a fast and robust algorithm with efficient global search capability. On the other side, the Clonal Selection Algorithm (CSA) is an Artificial Immune System (AIS) technique that is inspired by the functioning of the clonal selection theory of acquired immunity. Clonal Selection Algorithms are computational paradigms that belong to the computational intelligence family and are inspired by the biological immune system of the human body. The cloning and mutation mechanisms in CSA help in attaining the global optimum. CSA has the advantage of innate and adaptive immunity mechanisms to antigenic stimulus, which help the cells to grow their population by the process of cloning whenever required. A hybrid algorithm is developed by utilizing the benefits of both social and immune mechanisms. This hybrid algorithm performs the parallel computation of the social behavior based SFLA and the immune behavior based CSA to improve the ability to reach the global optimal solution with a faster convergence rate. This paper compares the conventional CLONALG and SFLA approaches with the proposed hybrid algorithm on several standard benchmark functions. Experimental results show that the proposed hybrid approach significantly outperforms the existing CLONALG and SFLA approaches in terms of mean optimal solution, success rate, convergence speed and solution stability.

Index Terms: Shuffled Frog Leaping Algorithm (SFLA), CLONALG and P-AISFLA.

I. INTRODUCTION

Swarm intelligence is a research hotspot in the artificial intelligence field.
It mainly simulates the population behaviors of complicated systems in nature or society. The Shuffled Frog Leaping Algorithm (SFLA), inspired by the food hunting behavior of frogs, and the Clonal Selection Algorithm (CSA), inspired by the immune system of the human body, are examples from this field.

SFLA is a population based memetic algorithm that carries out the process of memetic evolution in the form of infection of ideas from one individual to another in a local search. A shuffling strategy allows for the exchange of information between local searches to move toward a global optimum [1]. This algorithm is used to find the global optimum of complex optimization problems and has proven to be robust and efficient [2]. SFLA has the advantage of social behavior through the process of shuffling and leaping, which helps the infection of ideas.

CSA provides two mechanisms for searching for the desired final pool of memory antibodies. The first is a local search provided via affinity maturation (hypermutation) of cloned antibodies [3]. The second search mechanism provides a global scope and involves the insertion of randomly generated antibodies into the population to further increase the diversity and provide a means for potentially escaping from local optima [4].

Both these algorithms are powerful and efficient in finding the global optimal solution. SFLA achieves this with the help of social behavior and local search exploitation. On the other hand, CLONALG achieves this by performing the processes of cloning and hypermutation. Both algorithms ensure powerful search capability in different ways. So, in order to improve the global search capability, to speed up the rate of convergence and to achieve a stable solution, these two algorithms are combined to form a hybrid. The hybrid algorithm is developed by making use of the benefits of both social and immune mechanisms and performs the computation of both SFLA and CLONALG in a parallel fashion. This hybrid evolutionary algorithm preserves the features of both SFLA and CLONALG and helps to reach the global optimum faster with a better converged mean solution. The hybrid algorithm is tested on various standard and well known benchmark optimization functions, and it is found that the proposed hybrid algorithm P-AISFLA outperforms both SFLA and CLONALG.

The remainder of the paper is organized as follows: section II describes the Shuffled Frog Leaping Algorithm (SFLA), followed by a brief description of the Clonal Selection Algorithm (CSA) in section III. Section IV briefs the important features of both SFLA and CLONALG in terms of social and immune behavior perspectives and also describes our proposed hybrid algorithm. Section V gives further explanations, experimental analysis, simulation results on several benchmark problems and comparisons of our proposed algorithm with the conventional CLONALG and SFLA. Section VI concludes our paper with some remarks and conclusions.

II. SHUFFLED FROG LEAPING ALGORITHM (SFLA)

SFLA is a memetic algorithm inspired by research on the food hunting behavior of frogs. It is based on the evolution of memes carried by the interacting frogs and on the global exchange of information among them. SFLA is a combination of both deterministic and random approaches [2]. The SFL algorithm progresses by transforming "frogs" in a memetic evolution. In this algorithm, frogs are seen as hosts for memes and are described as a memetic vector. Each frog consists of a number of memotypes. The memotypes represent an idea in a manner similar to a gene representing a trait in a chromosome in a genetic algorithm. The frogs can communicate with each other and can improve their memes by infecting (passing information to) each other [5]. Improvement of memes results in changing an individual frog's position by adjusting its leaping step size.

A. Steps in the SFLA Algorithm:

The following are the steps involved [6] in the Shuffled Frog Leaping Algorithm (SFLA):

Step 1. Random generation of frogs: Initially generate the population of k frogs randomly within the feasible region.

Step 2. Evaluate fitness: The fitness value of each frog is calculated, and then all the k frogs are sorted in ascending order according to the fitness value. Identify the global best frog X_g from the entire population.

Step 3. Partition into memeplexes: Partition all the k frogs into p memeplexes, each containing q frogs such that k = p×q. The partition is done in such a way that the first frog goes to the first memeplex, the second one to the second memeplex, the p-th frog to the p-th memeplex, and the (p+1)-th frog back to the first memeplex.

Step 4. Memetic evolution: In each memeplex, identify the best frog X_b and the worst frog X_w to perform the process of memetic evolution. In this process, the worst frog identified in each memeplex is improved as follows:

    D_i = rand(.) × (X_b − X_w)                              (1)
    X_w(new) = X_w + D_i,   with −B_max ≤ D_i ≤ B_max        (2)

where rand(.) is a random number between 0 and 1, and B_max is the maximum allowed change in a frog's position. If this process produces a better frog, it replaces the older frog. Otherwise, X_b is replaced by X_g in Equation (1) and the process is repeated once again. If no improvement is possible, a random frog is generated to replace the old frog.

Step 5. Local exploration: Repeat the memetic evolution of step 4 for a specific number of iterations to improve the local search capability.

Step 6. Shuffling: Shuffling is the process of combining all the frogs in all the memeplexes into a single group.

Step 7. Convergence check: Return to Step 2 if the termination criterion is not met; else stop.
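To make the leap rule of Equations (1)-(2) concrete, the following short Python fragment sketches one worst-frog update as described above. It is only an illustration (the paper's own implementation was written in Matlab); the function name leap_worst_frog, the clamping style and the fallback search range are assumptions of this sketch.

```python
import random

def leap_worst_frog(x_w, x_b, x_g, b_max, fitness):
    """One memetic-evolution step for the worst frog of a memeplex,
    following Eqs. (1)-(2): leap toward the memeplex best X_b, then
    toward the global best X_g, otherwise generate a random frog."""
    def leap(target):
        # D_i = rand() * (target - X_w), clamped to +/- B_max (Eq. 1)
        step = [max(-b_max, min(b_max, random.random() * (t - w)))
                for w, t in zip(x_w, target)]
        # X_w(new) = X_w + D_i (Eq. 2)
        return [w + s for w, s in zip(x_w, step)]

    for target in (x_b, x_g):                  # try X_b first, then X_g
        candidate = leap(target)
        if fitness(candidate) < fitness(x_w):  # minimization: better frog found
            return candidate
    # no improvement possible: replace the worst frog with a random one
    return [random.uniform(-10.0, 10.0) for _ in x_w]  # assumed search range [-10, 10]
```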
III. IMMUNE OPTIMIZATION ALGORITHM (CLONALG)

CLONALG is a population based meta-heuristic algorithm whose main search power relies on the cloning operator and the mutation operator. The Clonal Selection Algorithm (CSA) reproduces only those individuals with high affinities and selects their improved and maturated progenies. This strategy suggests that the algorithm performs a greedy search, where single members are locally optimized and the newcomers yield a broader exploration of the search space [4]. This characteristic makes the CSA very suitable for solving optimization tasks. The basic steps and working of the Immune Optimization algorithm (CLONALG) are described as follows:

Step 1. Antibody Pool (AB) Initialization: Initially, an Antibody Pool (AB) is created with N antibodies chosen randomly in the search space. Antibodies are represented by the variables of the problem (ab_1, ab_2, ..., ab_N), which are potential solutions to the problem.

Step 2. Antibody Selection: For each antibody ab_i, its corresponding affinity is determined. The antibodies are then sorted according to the calculated affinity, and the n highest affinity antibodies are selected.

Step 3. Cloning: Cloning is one of the key aspects of AIS. It is the process of producing similar populations of genetically identical individuals. The selected best n antibodies are replicated in proportion to their antigenic affinity. The replicated antibodies, i.e., the clones, are maintained as a separate clone population C. The number of clones for each antibody can be calculated by the following equation:

    N_c = Σ_{i=1}^{n} round(β·N / i)                          (3)

where N_c is the total number of clones generated, β is a multiplying factor, N is the size of the Antibody Pool (AB), and round(.) is the operator that rounds its argument towards the closest integer. Each term of this sum gives the clone size of one selected antibody: the higher the affinity, the higher the number of clones generated for the selected antibody [4].

Step 4. Affinity Maturation: The clone population C is now subjected to a mutation process, inversely proportional to its antigenic affinity measurement. This maturation helps low affinity antibodies to mutate more in order to improve their affinity, so the mutations tend to result in better affinity antibodies. For Gray coding, uniform mutation, Gaussian mutation or Cauchy mutation can be used. In this paper, self-adaptive mutation using a Gaussian distribution is used, since it searches the area surrounding the cell with high probability and has an outstanding ability of both local and global searching. The Gaussian mutation operator [7] can be described as follows:

    θ'_i = θ_i × exp(τ' × N(0,1) + τ × N_j(0,1))              (4)
    ab'_ij = ab_ij + θ'_i × N_j(0,1)
    τ' = 1/√(2D),   τ = 1/√(2√D)

where θ_i = {θ_i1, θ_i2, ..., θ_iD}, i = 1, 2, 3, ..., N_c, j = 1, 2, ..., D. The parameter θ_i is the mutation step of antibody ab_i, and τ' and τ are the whole step and the individual step, respectively.

Then the affinity of the mutated clones is calculated. The better affinity mutations are stimulated while the worse ones are restrained when an antibody undergoes affinity mutation: the higher affinity values are taken to the next generation, while the lower affinity antibodies are deleted.

Step 5. Antibody Restraint: Inspired by the vertebrate immune system mechanism called antibody restraint, this step includes the processes of suppression and supplementation. It keeps diversity and helps to find new solutions that correspond to new search regions by eliminating some percentage of the worst antibodies in the population and replacing them with randomly generated new antibodies. This helps the algorithm not to be trapped at a local optimum. In antibody restraint [8] [9], in each iteration the similar antibodies are removed and randomly generated antibodies are introduced in the places of the removed antibodies in the Antibody Pool (AB).

The pseudo code of the antibody restraint's redundancy removal:

    Step 1: For every antibody, compute the affinity and sort the antibody pool (AB) by affinity in descending order.
            Get an antibody vector Pop(ab_1, ab_2, ..., ab_N), where N is the total number of antibodies in the generation.
    Step 2: Set i = 1, j = N
            Repeat:
                If AB(i) is similar to AB(N)
                    Delete AB(N) from the population
                Else
                    j = j - 1
                End if
                i = i + 1
            Until i = N - 1
    Step 3: If the number of antibodies < N
                Add new antibodies to AB.

Step 6. Convergence Check: Repeat steps 2 to 5 until one of the following conditions is met:

 The algorithm has undergone a specified number of iterations.
 The optimal solutions in the memory cells are not improved over a given number of generations.
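To make the cloning and maturation steps concrete, the following Python sketch implements Equation (3) and one reading of the self-adaptive Gaussian mutation of Equation (4) under the step sizes stated above. It is illustrative only, not the authors' Matlab code; the names clone_counts and mutate_clone and the per-dimension step interpretation are assumptions of this sketch.

```python
import math
import random

def clone_counts(n_selected, pool_size, beta):
    """Eq. (3): the i-th ranked antibody (i = 1 is the highest affinity)
    receives round(beta * N / i) clones."""
    return [round(beta * pool_size / i) for i in range(1, n_selected + 1)]

def mutate_clone(ab, theta, dim):
    """Eq. (4): self-adaptive Gaussian mutation of one clone.
    theta holds the per-dimension mutation steps of the antibody."""
    tau_whole = 1.0 / math.sqrt(2.0 * dim)                # tau', assumed 1/sqrt(2D)
    tau_indiv = 1.0 / math.sqrt(2.0 * math.sqrt(dim))     # tau, assumed 1/sqrt(2*sqrt(D))
    shared_draw = random.gauss(0.0, 1.0)                  # one N(0,1) shared by all dimensions
    new_theta, new_ab = [], []
    for j in range(dim):
        step = theta[j] * math.exp(tau_whole * shared_draw +
                                   tau_indiv * random.gauss(0.0, 1.0))
        new_theta.append(step)
        new_ab.append(ab[j] + step * random.gauss(0.0, 1.0))
    return new_ab, new_theta
```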

IV. HYBRID ALGORITHM: COMBINATION OF SFLA AND CLONALG

Both SFLA and CLONALG are evolutionary algorithms with their own advantages. They have different kinds of features and different implementation procedures. Let us examine the advantages of SFLA and AIS.

a. Importance of SFLA
 A memetic algorithm which has the property of local search.
 The shuffling operator allows infection and exchange of ideas to take place.
 Exhibits social behavior.
 Few parameters are involved, i.e. the number of memeplexes, the number of frogs, and the number of iterations in each memeplex.

b. Importance of CLONALG
 The cloning operator allows replication of the best fit antibodies from the antibody pool.
 The mutation operator brings diversification to the antibody population.
 Exhibits immune behavior.
 Antibody diversity maintenance further increases the diversity and prevents the algorithm from getting stuck at a local optimum.

c. Hybrid Evolution of SFLA and CLONALG (P-AISFLA):

The advantages of both the social and immune behaviors are incorporated in our proposed hybrid approach. The social behavior based SFLA and the immune behavior based CLONALG are combined together into a hybrid algorithm that has the advantages of both conventional algorithms.

According to this algorithm, initially a population of 2N individuals is randomly generated within the given range. Their corresponding fitness or affinity values are then determined, and the entire population is sorted in ascending order based on these values. The 2N population is then divided into two groups of N individuals each: the best N sorted individuals go to one group and the remaining N sorted individuals go to the other group. Since CSA has immunity mechanisms, the best group is processed by CSA, while the worst group is further processed by SFLA, which contributes the social behavior component.

After each group is processed by its algorithm, the results of the two algorithms are merged again into a 2N population, and the obtained 2N population undergoes the same process again. This whole process is repeated for a specific number of iterations or until the convergence value is attained as the stopping criterion. The entire working of this hybrid evolutionary algorithm is shown in Fig. 1; a short illustrative sketch of the main loop follows.
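As a reading aid only, the following Python sketch shows the top-level P-AISFLA loop described above: sort a 2N population, send the best half to a CLONALG step and the worst half to an SFLA step, then merge. The helpers clonalg_step and sfla_step stand for the procedures of Sections II and III and are assumptions of this sketch, not code from the paper (whose implementation was in Matlab).

```python
def p_aisfla(init_population, fitness, clonalg_step, sfla_step, max_iters):
    """Parallel hybrid of CLONALG and SFLA on a population of size 2N.
    clonalg_step and sfla_step each take (sub_population, fitness) and
    return an improved sub_population of the same size N; the two calls
    are independent and could be executed in parallel."""
    population = list(init_population)   # size 2N
    half = len(population) // 2          # N
    for _ in range(max_iters):
        # sort ascending by objective value (all benchmarks are minimized)
        population.sort(key=fitness)
        best_half, worst_half = population[:half], population[half:]
        # immune behavior on the best half, social behavior on the worst half
        best_half = clonalg_step(best_half, fitness)
        worst_half = sfla_step(worst_half, fitness)
        # merge back to a 2N population for the next iteration
        population = best_half + worst_half
    return min(population, key=fitness)
```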

[Figure 1 flow chart: Begin → initial generation of 2N random population → evaluate fitness for each individual → sort all the values in ascending order → divide into two groups, each consisting of N individuals. The best N individuals go to CLONALG (selection → cloning → affinity maturation → diversity maintenance); the worst N individuals go to SFLA (partition into m memeplexes → memetic evolution → shuffle all the m memeplexes). Merge the population (N + N) = 2N → convergence criteria? → determine optimal solution → End.]

Fig. 1. Basic flow chart of P-AISFLA

V. EXPERIMENTS AND RESULTS OVER BENCHMARK FUNCTIONS

A. Benchmark Functions for Simulation

A suite of standard benchmark functions [10] [11] [12] is taken into consideration to test the effectiveness and efficiency of the proposed approach P-AISFLA against the conventional CLONALG and SFLA. All these benchmark functions are meant to be minimized. The first function is unimodal with only one global minimum; the others are multimodal with a considerable number of local minima. Table 1 summarizes the expressions, initializations and search ranges of the benchmark functions.

B. Experimental Setup:

All three algorithms were written in the Matlab scripting language. The Matlab code was run on a laptop with a 1.8 GHz Intel processor. Each algorithm optimized each function 30 times independently.

The following simulation conditions are used for CLONALG:
 Initial antibody pool size, AB = 20
 Clonal multiplying factor β = [0.5-1]
 Mutation type used: Gaussian
 Gaussian mutation probability Pmg = 0.01
 Percentage of suppression Psup = 0.2
 Number of iterations = 1000
 Number of dimensions taken for each benchmark function = 15

The following simulation conditions are used for the SFL algorithm:
 Initial population, Npop = 20
 Number of memeplexes, M = 2
 Maximum local search iterations = 10
 Number of main iterations = 1000
 Number of dimensions taken for each benchmark function = 15

Table 2. Θ values for the benchmark functions
Function                    F1     F2     F3     F4    F5       F6
Success criterion value Θ   0.01   0.01   0.12   0.1   0.0001   0.001

C. Benchmark Results and Analysis:

In this section, the results for the 6 benchmark test functions are given to show the merits of the proposed P-AISFLA. The experimental results in terms of the mean optimal value, the standard deviation (which measures the stability of the optimal value), the success rate and the number of function evaluations are summarized in Tables 3-8 and Figs. 2-7. In order to further demonstrate the performance of P-AISFLA over CLONALG and SFLA, the evolutionary curves of optimizing each function by the three algorithms are given, and the number of objective function evaluations is used to evaluate the convergence speed. Since the three algorithms are all randomized stochastic algorithms, it is not reasonable to compare their performances by optimizing these functions only once. To reduce the stochastic or random influences, each algorithm optimized each function 30 times, and the average optimal function value was calculated by:

    MOV = (1/30) Σ_{i=1}^{30} F(i)                            (5)

where MOV is the Mean Optimal Value and F(i) is the optimal value achieved in the i-th run. The evolutionary curves of average optimal function values against the number of iterations for each function are illustrated in Figs. 2-7.

A successful optimal solution means that the run converges to the global optimum; otherwise the algorithm is regarded as getting stuck or trapped at a local optimum. The success rate, SR, is the percentage of the number of times successful optimal solutions are obtained out of the number of runs for a specific benchmark function. The number of runs used for CLONALG, SFLA and the proposed hybrid approach P-AISFLA was 30. In each of the 30 computations, CLONALG, SFLA and P-AISFLA employed different initial populations randomly generated in the problem's search space. If an algorithm attained a solution with an optimal functional value reaching the criterion value Θ, the solution is regarded as a success (successful optimal solution); otherwise it fails. The success criterion value Θ chosen for each individual benchmark function is tabulated in Table 2.

Tables 3-8 illustrate that accurate and high precision values cannot be obtained by CLONALG. When the function to be optimized has many local optimal solutions with multiple modes, the algorithm is sometimes trapped or stuck at a local optimum. But, due to the local search operator in SFLA, it produces more accurate and global optimal solutions than CLONALG, and the "SR" value for the majority of the functions is higher for SFLA than for CLONALG.

Due to the cloning, mutation and local search operators available in P-AISFLA, its global search capability is dominant in comparison with CLONALG and SFLA, and its convergence speed is predominantly faster than CLONALG and SFLA in terms of function evaluations for the majority of the functions. It can be observed that the proposed approach possesses a relatively higher success rate SR than both CLONALG and SFLA, implying that P-AISFLA has good global search performance to avoid local optima, and the optimal solution gained by it is the most accurate.
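As a brief aid on how Equation (5) and the success rate are computed from the 30 independent runs, here is a minimal Python sketch; the run_algorithm callable, its return value (best objective value of one run) and the use of "value ≤ Θ" as the success test are assumptions of this illustration.

```python
def evaluate_over_runs(run_algorithm, theta, runs=30):
    """Compute the Mean Optimal Value (Eq. 5) and the success rate SR
    over independent runs; a run is counted as successful when its best
    objective value reaches the criterion theta for that benchmark."""
    best_values = [run_algorithm() for _ in range(runs)]      # F(i), i = 1..30
    mov = sum(best_values) / runs                             # Eq. (5)
    sr = 100.0 * sum(v <= theta for v in best_values) / runs  # percentage of successful runs
    return mov, sr
```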

The experimental results are given in Tables 3-8.

Table 3. Optimization results for the Sphere function (F1)
Algorithm   MOV          STD          FE      SR
CLONALG     1.147e-003   2.811e-003   34446   70
SFLA        7.157e-009   3.6606e-009  11312   100
P-AISFLA    7.495e-017   1.4838e-016  1288    100

Table 4. Optimization results for Schwefel's function (F2)
Algorithm   MOV          STD          FE      SR
CLONALG     1.341e-001   2.087e-002   35000   30
SFLA        1.4542e-004  5.2831e-005  11312   80
P-AISFLA    2.4206e-011  3.0946e-011  2419    95

Table 5. Optimization results for the Rastrigin function (F3)
Algorithm   MOV          STD          FE      SR
CLONALG     4.0898       7.026e-001   34550   25
SFLA        1.2097       2.290e-001   30316   55
P-AISFLA    7.709e-002   1.123e-002   21000   90

Table 6. Optimization results for the Ackley function (F4)
Algorithm   MOV          STD          FE      SR
CLONALG     3.0761e-001  1.371e-001   40000   80
SFLA        6.124e-002   1.087e-002   32500   100
P-AISFLA    2.0263e-004  3.201e-004   15782   100

Table 7. Optimization results for the Griewank function (F5)
Algorithm   MOV          STD          FE      SR
CLONALG     1.71e-004    2.101e-004   17442   90
SFLA        3.459e-003   1.473e-003   19473   75
P-AISFLA    1.2107e-007  2.8321e-006  2688    100

Table 8. Optimization results for the Zakharov function (F6)
Algorithm   MOV          STD          FE      SR
CLONALG     5.49e-002    1.5e-002     32850   45
SFLA        4.9234e-008  3.768e-008   8616    100
P-AISFLA    1.5127e-012  2.4749e-011  1583    100

In these tables, "MOV" is the mean optimal value obtained by the algorithms; "STD" is the standard deviation of the mean optimal solution; "SR" is the percentage of success in converging to the global optimum over 30 independent runs, and it reflects the global search performance of each individual algorithm; "FE" is the average number of objective function evaluations required by the algorithms to attain the criterion values for the functions, and it reflects the convergence speed of an algorithm.

[Figs. 2-7 show the evolutionary curves for each benchmark function. Fig. 2: Sphere function; Fig. 3: Schwefel's function; Fig. 4: Rastrigin function; Fig. 5: Ackley function; Fig. 6: Griewank function; Fig. 7: Zakharov function.]

VI. CONCLUSIONS

In this paper, we have proposed a novel hybrid approach, P-AISFLA, that utilizes the benefits of both the cloning operator and the local search operator in a parallel fashion. The local search and mutation operators are used for improving the local and global search capabilities, and the parallel search mechanism is utilized to improve the search efficiency of the algorithm. On a suite of benchmark functions, P-AISFLA performs better than both CLONALG and SFLA in terms of mean optimal solution value, stability of the optimal solution, number of function evaluations and success rate. Simulation results on standard benchmark problems demonstrate that the proposed method is a useful technique to solve complex and higher dimensional optimization problems.

REFERENCES

[1] E. Elbeltagi, T. Hegazy, and D. Grierson, "Comparison among five evolutionary-based optimization algorithms", Advanced Engineering Informatics, vol. 19, 2005, pp. 43-53.
[2] M. M. Eusuff, K. E. Lansey, and F. Pasha, "Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization", Engineering Optimization, vol. 38, no. 2, 2006, pp. 129-154.
[3] L. N. de Castro and F. J. Von Zuben, "Learning and Optimization Using the Clonal Selection Principle", IEEE Trans. on Evolutionary Computation, vol. 6, no. 3, June 2002, pp. 239-251.
[4] L. N. de Castro and J. Timmis, An Introduction to Artificial Immune Systems: A New Computational Intelligence Paradigm, Springer Verlag, 2002.
[5] S.-Y. Liong and Md. Atiquzzaman, "Optimal design of water distribution network using shuffled complex evolution", Journal of the Institution of Engineers, Singapore, vol. 44, no. 1, 2004, pp. 93-107.
[6] X. Zhang, X. Hu, G. Cui, Y. Wang, and Y. Niu, "An Improved Shuffled Frog Leaping Algorithm with Cognitive Behavior", Proceedings of the 7th World Congress on Intelligent Control and Automation, June 25-27, 2008, Chongqing, China.
[7] Cortes and Coello Coello, "Handling Constraints using an Artificial Immune System".
[8] J. Timmis, C. Edmonds, and J. Kelsey, "Assessing the Performance of Two Immune Inspired Algorithms and a Hybrid Genetic Algorithm for Function Optimisation", Proceedings of the Congress on Evolutionary Computation, 2004, pp. 1044-1051.
[9] L. Pan and Z. Fu, "A Clonal Selection Algorithm for Open Vehicle Routing Problem", Proceedings of the Third International Conference on Genetic and Evolutionary Computing, 2009.
[10] X. Q. Zuo and Y. S. Fan, "A chaos search immune algorithm with its application to neuro-fuzzy controller design", Chaos, Solitons and Fractals, vol. 30, 2006, pp. 94-109.
[11] P. N. Suganthan and S. Baskar, "Comprehensive Learning Particle Swarm Optimizer for Global Optimization of Multimodal Functions", IEEE Trans. on Evolutionary Computation, vol. 10, no. 3, June 2006.
[12] S. H. Ling and C. Iu, "Hybrid Particle Swarm Optimization With Wavelet Mutation and Its Industrial Applications", IEEE Trans. on Systems, Man and Cybernetics-Part B: Cybernetics, vol. 38, no. 3, June 2008.

TABLE 1. BENCHMARK TEST FUNCTIONS OF DIMENSION D. EACH OF THE TEST FUNCTIONS HAS A GLOBAL MINIMUM VALUE OF 0 [11] [12]

Function  Name                     Test Function                                                                                          Domain Range
F1        Sphere                   f(x) = Σ_{i=1}^{D} x_i^2                                                                               [-10, 10]
F2        Schwefel's Problem 2.22  f(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i|                                                           [-10, 10]
F3        Rastrigin                f(x) = Σ_{i=1}^{D} [x_i^2 − 10 cos(2πx_i) + 10]                                                        [-5.12, 5.12]
F4        Ackley                   f(x) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i^2)) − exp((1/D) Σ_{i=1}^{D} cos(2πx_i)) + 20 + e           [-32, 32]
F5        Griewank                 f(x) = (1/4000) Σ_{i=1}^{D} x_i^2 − Π_{i=1}^{D} cos(x_i/√i) + 1                                        [-600, 600]
F6        Zakharov                 f(x) = Σ_{i=1}^{D} x_i^2 + (Σ_{i=1}^{D} 0.5·i·x_i)^2 + (Σ_{i=1}^{D} 0.5·i·x_i)^4                       [-10, 10]