Hindawi Publishing Corporation Computational Intelligence and Neuroscience Volume 2016, Article ID 2565809, 10 pages http://dx.doi.org/10.1155/2016/2565809
Research Article A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems
Leilei Cao,1,2 Lihong Xu,1 and Erik D. Goodman2
1 Department of Control Science and Engineering, Tongji University, Shanghai 201804, China
2 BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, MI 48824, USA
Correspondence should be addressed to Lihong Xu; [email protected]
Received 23 January 2016; Revised 15 April 2016; Accepted 3 May 2016
Academic Editor: Jens Christian Claussen
Copyright Β© 2016 Leilei Cao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A Guiding Evolutionary Algorithm (GEA) with a greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA is designed to retain some advantages of each method while avoiding their disadvantages. In contrast to the usual Genetic Algorithm, each individual in the GEA is crossed with the current global best individual instead of a randomly selected one; the current best individual thus serves as a guide that attracts offspring to its region of genotype space. Mutation is applied to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism is applied to new individuals according to a dynamic probability of local search. Experimental results show that the GEA outperformed the three typical global optimization algorithms with which it was compared.
1. Introduction

Most real-world optimization problems in engineering, management, and other applications are high-dimensional, multimodal, and/or multiobjective. Methods for solving these problems range from classical mathematical methods to heuristic algorithms. For more complex problems, heuristic intelligent algorithms have been used successfully and frequently [1]. These algorithms include Simulated Annealing [2], the Genetic Algorithm [3], Particle Swarm Optimization [4], Ant Colony Optimization [5], Artificial Bee Colony [6], the Firefly Algorithm [7, 8], and the Bat Algorithm [9]. Support for parallel computation and stochastic search are common characteristics of these algorithms. Inevitably, most of them are slow to converge or converge prematurely, trapped in local optima, when addressing high-dimensional or multimodal optimization problems [10]. High-dimensional variables expand the search space, influencing the required number of evaluations and the speed of convergence. Multimodality means that there exist one or more global optima and some number of locally optimal values [10, 11]. The "No Free Lunch" theorem clarifies that the performance of any search algorithm depends on the complexity of the problem domain [12].

All of the above-mentioned heuristic algorithms are population-based except Simulated Annealing, and some of them are classified as swarm intelligence. These population-based algorithms, when applied to complex problems, frequently outperform classical methods such as linear programming [13], gradient methods [14], and greedy algorithms [15]. Although heuristic algorithms can solve some problems, they sometimes converge on local optima or find solutions near but not at the global optimum.

This paper introduces a Guiding Evolutionary Algorithm (GEA) to address global optimization problems. It is inspired by Particle Swarm Optimization and the Bat Algorithm and borrows some methods from the Genetic Algorithm. It is easily understood and implemented, and for the problem classes tested it has outperformed canonical implementations of those individual heuristic methods. This paper is organized as follows: Section 2 gives a brief overview of PSO and BA; Section 3 proposes the new global optimization algorithm; a set of test functions and the experimental results are presented in Section 4; and conclusions are presented in Section 5.

2. Particle Swarm Optimization and Bat Algorithm

2.1. Overview of PSO. A variety of heuristic algorithms are called swarm intelligence algorithms, simulating the collaborative behavior of a swarm of insects, birds, fish, or other animals searching for food [4, 16]. Among them, Particle Swarm Optimization (PSO) is the most widely used. PSO was first proposed by Eberhart and Kennedy [4] in 1995. Each solution in PSO is a "particle" that searches the independent-variable space by learning from its own experience and other particles' past experiences [17, 18]. There are two different ways of representing others' past experiences: the global best and the local best. In global optimization problems, each particle adjusts its velocity and position according to its own historical best position and the best position among all particles. But this method tends to make each particle converge to a local optimum on problems containing many locally optimal values. For such problems, the local best method is more effective: each particle adjusts its velocity and position according to its historical best position and the best solution found within its neighborhood. In fact, if the neighborhood is all particles, the local best version becomes the global best version. PSO is simple and easy to implement, but it lacks a mutation mechanism, which could increase the diversity of the population and help to avoid convergence at local optima.

2.2. Bat Algorithm [9, 19-21]. The Bat Algorithm (BA) was proposed by Yang in 2010. It is based on the echolocation behavior of bats, which can find their prey and discriminate different types of insects even in complete darkness. The algorithm combines global search with local search in each iteration, which balances exploration and exploitation during the search. It defines rules for how the bats' positions and velocities in a d-dimensional search space are updated. The new positions and velocities at time step t are given by

    f_i = f_min + (f_max - f_min) * beta,
    v_i^t = v_i^(t-1) + (x_i^(t-1) - x_*^(t-1)) * f_i,        (1)
    x_i^t = x_i^(t-1) + v_i^t,

where beta in [0, 1] is a random vector, x_*^(t-1) is the global best location found to date, and f_i is the frequency of the wave. For the local search part, a new solution for each bat is generated locally using a random walk around the current best solution:

    x_new = x_old + epsilon * A^t,        (2)

where epsilon in [-1, 1] is a uniformly distributed random number and A^t is the average loudness of all bats at this time step. The update of the velocities and positions of the bats has some similarity to the procedure in standard Particle Swarm Optimization, as f_i essentially controls the pace and range of movement of the swarming particles. To a degree, BA can be considered a balanced combination of standard Particle Swarm Optimization and an intensive local search controlled by the loudness and pulse rate. Furthermore, the loudness A_i and the rate r_i of pulse emission must be updated as the iterations proceed: the loudness usually decreases once a bat has found its prey, while the pulse rate increases.

3. A Guiding Evolutionary Algorithm

Although the Bat Algorithm can solve some difficult problems and converge quickly, it often cannot avoid converging to a local optimum, especially on high-dimensional multimodal problems. Examining PSO and BA reveals some differences: BA rejects the historical experience of each individual's own position but accepts a better personal solution with some probability. We modify some of the updating mechanisms of BA and add a mutation method in order to address global optimization problems more accurately.

3.1. Crossover. There is a leading individual in both PSO and BA, which leads all individuals to update their own positions. But the current velocities of individuals are weighted accumulations of their own historical velocities across iterations. In the later parts of a search, this tends to decrease the speed of convergence or favor convergence on local optima, even though PSO bounds the velocities. To avoid these situations and employ the leading individual effectively, we use the following expression:

    x_i^t = x_i^(t-1) + (x_*^(t-1) - x_i^(t-1)) * beta,        (3)

where x_*^(t-1) is the current global best individual and beta is the step length of the position increment.

In fact, this expression is a form of crossover between each individual and the best individual to generate their offspring: x_*^(t-1) is a guide that leads all other individuals to walk forward toward the current best one. To explain formula (3) clearly, we use a simple example to describe the process of locating the global best individual. Figure 1 shows a function with two peaks, whose ordinate is the value of the objective function and whose abscissa is the design variable. Point P is the global best we expect to find in this maximization problem. Points A, B, C, and D indicate four individuals, and x1, x2, x3, and x4 are their solutions in variable space, respectively; x_min and x_max are the bounds of the variable space. Among the four individuals, point C is the best; therefore, the other individuals will move toward this current best individual. Let us assume beta in [0, 2] is a uniform random variable. For point A, its motion ranges between x1 and x_max, because 2 * (x3 - x1) exceeds the upper bound of the variable range. Theoretically, point A can move to each point whose variable value is between x1 and x_max, depending on the random variable beta. Not only point A but also points B and D will be attracted by point C. The best individual is replaced once a better solution occurs. After many iterations, all of these individuals can reach the global best point P. While searching, even though the best individual may reach the lower peak, other individuals can still move to point P because of the random walk and the large population.
Figure 2: Probability of mutation.
Figure 1: A function with two peaks.

3.2. Mutation. Although these global optimization algorithms work quite well on some engineering problems, they sometimes cannot avoid converging on a local optimum. For such heuristic algorithms, it is difficult to prove that they will always converge to a global optimum. In order to raise the probability of converging to a global optimum, we add a mutation mechanism inspired by the Genetic Algorithm. The purpose of mutation is to increase the diversity of the population and prevent individuals from being trapped in a local optimum, especially in the later iterations. Therefore, the probability of mutation is made low at the beginning and higher later. We set the mutation probability as follows:

    p = m * ln(t_max / (t_max - t)),        (4)

where m is a limiting parameter which can be a constant or a variable, t_max is the maximum number of generations, and t is the current generation. If we assume t_max = 50 and m = 0.2, then the probability p increases with iterations as shown in Figure 2: in the early iterations p is low, while it increases rapidly in the later iterations.

The mutation formula is as follows:

    x_i^t = x_i^t + epsilon * eta,        (5)

where x_i^t is the solution of an individual after crossover, epsilon in [-1, 1] is a uniform random number, and eta is a vector which determines the scope of mutation. In general, we set eta such that mutation can span the entire range of the independent variables. If we assume that the range of the d-th dimension of the variables is [a, b], then

    eta_d = max(x_id^t - a, b - x_id^t).        (6)

Each dimension of the vector eta is set as in formula (6). Sometimes the solution after mutation will exceed the range of the variables, in which case we limit it to the bounds.

3.3. Local Search. Most global optimization algorithms have an excellent capability of exploration but are weak at exploitation. To enhance this ability, especially in the later iterations, we want the algorithm to locate the global best quickly with local search once it has found the right neighborhood. The probability of local search is therefore kept low in early iterations and raised later in the search process; it follows the same distribution as the mutation probability (4). Each individual has this probability of being replaced by the result of a local search performed around the current best individual. Similarly to the Bat Algorithm's random walk, we use the following formula to express local search:

    x_i^t = x_*^(t-1) + epsilon * L,        (7)

where x_*^(t-1) is the best individual of the current population, epsilon in [-1, 1] is a uniform random number, and L is a vector which determines the search scope of the random walk, formulated in the variable space. Again assuming that the range of the d-th dimension of the variables is [a, b], we set

    L_d = 10% * (b - a),        (8)

where 10% is only an example of the scope of the random walk; the other dimensions of the vector L are set as in formula (8). Local search acts much like mutation; the difference is the point to which the change is added: the global best in one case and any individual in the other.

3.4. Summary of GEA. Unlike the classical (greedy) simplex algorithm, a population-based algorithm can "jump out" of a local optimum using parallel search, without losing all prior information.
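The growth of the shared mutation/local-search probability in formula (4) is easy to verify numerically. A small sketch under the example setting t_max = 50, m = 0.2 (the function name is ours; note that the formula is undefined at t = t_max, so the loop must stop before the final generation):

```python
import math

def event_probability(t, t_max=50, m=0.2):
    """Formula (4): p = m * ln(t_max / (t_max - t)), valid for t < t_max."""
    return m * math.log(t_max / (t_max - t))

# Low early, rising steeply late, matching the curve in Figure 2
p_early = event_probability(5)    # about 0.02
p_late = event_probability(45)    # about 0.46
```

The same value p gates both mutation and local search in a given generation, per Sections 3.2 and 3.3.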
Initialize the population x_i (i = 1, 2, ..., n), and define the limit parameter m, the scope of mutation eta, and the range of local search L
Evaluate the initialized population
Select the best individual x_*^(t-1)
While (t < t_max) {
    For each individual {
        Make crossover to generate a new individual x_i^t
        If (rand < p) { Make mutation for x_i^t }
        If (rand < p) { Make local search for x_i^t }
        If (f(x_i^t) is better than f(x_i^(t-1))) { Accept this new individual }
        If (f(x_i^t) is better than f(x_*^(t-1))) { Replace x_*^(t-1) with x_i^t }
    }
}
Output results and visualization
Algorithm 1

Therefore, we can afford to accept a new global best-to-date individual whenever it is found, instead of accepting it according to some probability, as Simulated Annealing must do in order to reduce getting trapped at local optima. In the GEA, each individual is updated only when its offspring has better fitness. The updates of the global best individual and of each individual are greedy strategies, but, because of the large population size, convergence to local optima can be avoided in some multimodal global optimization problems.

The proposed Guiding Evolutionary Algorithm with greedy strategy resembles the framework of the Genetic Algorithm, which can be described as follows: initialization, evaluation, selection, crossover, and mutation. A difference from the Genetic Algorithm is that there is no special selection mechanism in the GEA, because each individual generates its offspring by recombination with the global best individual, so no operator is required to select an individual to evolve. In addition, local search is employed to increase the algorithm's exploitation capability. The whole framework is easy to understand and implement; the full pseudocode of the algorithm is shown in Algorithm 1.

4. Experimental Results and Discussion

4.1. Test Functions [11, 22, 23]. In order to test the proposed algorithm, 6 commonly used functions were chosen. F1-F6 are well-known test functions for global optimization algorithms including the Genetic Algorithm, Particle Swarm Optimization, the Bat Algorithm, the Firework Algorithm [24], and the Flower Pollination Algorithm [25]. F1 is a simple unimodal and convex function; F2 is unimodal but nonconvex; F3 is multimodal, since its number of local optima increases with the dimensionality; F4 has a global minimum at 0 inside a parabolic, narrow, flat valley (the variables are strongly dependent on each other, so it is difficult to converge on the global optimum); F5 is multimodal and has many local minima but only one global minimum; F6 also has numerous local minima and is multimodal. The definitions of these benchmark functions are listed in Table 1. All of their global minima are at 0.

4.2. Parameter Settings for the Tested Algorithms. In this section, the Guiding Evolutionary Algorithm (GEA), the Fast Evolutionary Algorithm (FEA) [23], Particle Swarm Optimization (PSO), and the Bat Algorithm (BA) are tested separately on the six benchmark functions. For F1 to F6, to illustrate how the dimensionality of a problem influences the performance of the algorithms, three different dimensions of the benchmark functions are used: D = 10, D = 20, and D = 30. The parameters of the Fast Evolutionary Algorithm are set as in [23]. Through some trials, we set the number of generations in FEA, PSO, and BA to 500, while the number of generations in GEA is set to 50, in order to get the best results. Population sizes were adjusted according to the different benchmark functions and different dimensions. Although these algorithms run for different numbers of generations, the numbers of evaluations are the same for any given benchmark function. For PSO, according to some research [17, 18], the following settings make it perform well: the inertia weight is set to 0.7, the two learning parameters are set to 1.5, and the velocity is limited to [-1, 1]. The parameters of BA are set as follows: minimum frequency f_min = 0, maximum frequency f_max = 2.0, loudness A_0 = 0.8, pulse rate r_0 = 0.8, and control factors alpha = gamma = 0.9. These parameter values are taken from published articles [11, 19]. For our GEA, beta in [0, 2] and m = 0.2.
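Putting the pieces together, Algorithm 1 with the settings above can be sketched as a short Python program (our own minimal reading of the pseudocode, shown minimizing a sphere function; the population size, bounds clipping, and per-dimension random draws are illustrative assumptions):

```python
import numpy as np

def gea(f, dim=5, bounds=(-100.0, 100.0), pop=30, t_max=50, m=0.2, seed=0):
    """A compact reading of Algorithm 1 (GEA) for minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, dim))       # initialize the population
    fit = np.array([f(x) for x in X])              # evaluate
    best = X[fit.argmin()].copy()                  # current global best x_*
    L = 0.10 * (hi - lo)                           # local-search scope, formula (8)

    for t in range(1, t_max):
        p = m * np.log(t_max / (t_max - t))        # shared probability, formula (4)
        for i in range(pop):
            beta = rng.uniform(0.0, 2.0, dim)
            cand = X[i] + (best - X[i]) * beta     # crossover with the best, formula (3)
            if rng.random() < p:                   # mutation, formulas (5)-(6)
                eta = np.maximum(cand - lo, hi - cand)
                cand = cand + rng.uniform(-1.0, 1.0, dim) * eta
            if rng.random() < p:                   # local search, formula (7)
                cand = best + rng.uniform(-1.0, 1.0, dim) * L
            cand = np.clip(cand, lo, hi)           # keep within bounds
            fc = f(cand)
            if fc < fit[i]:                        # greedy acceptance
                X[i], fit[i] = cand, fc
                if fc < f(best):
                    best = cand.copy()
    return best, float(f(best))

sphere = lambda x: float(np.sum(np.asarray(x) ** 2))  # F1 as the demo objective
best, val = gea(sphere)
```

Note the greedy rule: a candidate replaces its parent only if it is better, and the global best is replaced only by a still better candidate, mirroring the two acceptance tests in Algorithm 1.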
Table 1: Six test functions utilized in this experiment.

F1: De Jong's sphere function
    f(x) = sum_{i=1..D} x_i^2,    domain [-100, 100]
F2: Schwefel 2.22 function
    f(x) = sum_{i=1..D} |x_i| + prod_{i=1..D} |x_i|,    domain [-15, 15]
F3: Griewangk's function
    f(x) = -prod_{i=1..D} cos(x_i / sqrt(i)) + sum_{i=1..D} x_i^2 / 4000 + 1,    domain [-15, 15]
F4: Rosenbrock's function
    f(x) = sum_{i=1..D-1} [100 * (x_{i+1} - x_i^2)^2 + (x_i - 1)^2],    domain [-15, 15]
F5: Rastrigin's function
    f(x) = 10 * D + sum_{i=1..D} (x_i^2 - 10 * cos(2 * pi * x_i)),    domain [-5, 5]
F6: Ackley's function
    f(x) = 20 + e - 20 * exp(-0.2 * sqrt((1/D) * sum_{i=1..D} x_i^2)) - exp((1/D) * sum_{i=1..D} cos(2 * pi * x_i)),    domain [-15, 15]
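The six benchmarks in Table 1 translate directly into Python (our transcription, vectorized with NumPy; all six attain their global minimum value 0, at the origin for all but Rosenbrock, whose minimum is at the all-ones point):

```python
import numpy as np

def sphere(x):         # F1, De Jong's sphere function
    return float(np.sum(x ** 2))

def schwefel_2_22(x):  # F2, Schwefel 2.22: sum plus product of |x_i|
    a = np.abs(x)
    return float(np.sum(a) + np.prod(a))

def griewangk(x):      # F3
    i = np.arange(1, x.size + 1)
    return float(-np.prod(np.cos(x / np.sqrt(i))) + np.sum(x ** 2) / 4000.0 + 1.0)

def rosenbrock(x):     # F4, narrow parabolic valley, minimum at x = (1, ..., 1)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2))

def rastrigin(x):      # F5
    return float(10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x)))

def ackley(x):         # F6
    d = x.size
    return float(20.0 + np.e
                 - 20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / d))

z = np.zeros(10)  # the origin, where F1, F2, F3, F5, F6 reach their minimum 0
```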
Generally speaking, for unimodal functions the convergence rate is of primary interest, as optimizing such functions to a satisfactory accuracy is not a major issue. For multimodal functions, the quality of the final solutions is more significant, since it illustrates the ability of the tested algorithm to escape from local optima and locate the true global optimum. Based on these requirements, we present the computed optimal solutions of these benchmarks using our GEA and compare them with the other algorithms. In addition, curves of the convergence rate are plotted in order to indicate the number of evaluations needed to reach the approximate global optimum or to converge on a local optimum.

4.3. Experimental Results and Discussion

4.3.1. Results for the 10D Benchmark Functions. Table 2 depicts the computational results of the four algorithms on the six benchmark functions with 10 dimensions; each algorithm was run 20 times independently to obtain the best, worst, mean, median, and standard deviation values. Among these five values, the median indicates the algorithm's optimizing performance most reasonably, while the standard deviation denotes the stability of that performance. Figure 3 presents the convergence characteristics in terms of the median value of 20 runs of each algorithm on each benchmark function.

From the computational results, we find that BA performs best among these four algorithms on F1, although three of them have excellent performance. As stated earlier, the convergence rate is more crucial than convergence accuracy for unimodal functions. As indicated in Figure 3(a), GEA, PSO, and BA have fast convergence rates and high convergence accuracy, but FEA obtains the worst accuracy. F1 is, after all, only a simple unimodal and convex function. There is no mutation operation in the BA procedure, only local search; compared with GEA and PSO, the local search and absence of mutation become BA's advantage in addressing unimodal and convex functions. But this advantage is lost on the unimodal, nonconvex function F2, where GEA clearly outperforms the other algorithms in convergence rate and convergence accuracy; its median and mean values are close to the true global optimum. From Figure 3(b), it is easy to see that GEA and PSO have similar search processes in the early evaluations, but GEA can jump out of a local optimum because of its mutation operation and continue searching with local search, while PSO stops at the local optimum. GEA improves by several orders of magnitude on the results of the other algorithms, and in addition it has better stability of optimization performance.

F3 is a multimodal function; both GEA and PSO perform well, with similar convergence rates and accuracies, as indicated in Figure 3(c); both display the ability to avoid falling into local optima. As described above, function F4 poses special difficulties for an optimizer. Although PSO locates the minimum exactly in median value on F4, its stability is worse than that of GEA. Sometimes locating the exact optimum is not the purpose; rather, the stable computational performance of the algorithm is more significant. Each of GEA's values on F4 is close to the real global optimum, and the stability of its optimizing performance is excellent. This performance benefits from the changeable mutation parameters and local search areas. As seen from Figure 3(d), however, the convergence accuracies of FEA and BA are worse than those of GEA and PSO. F5 and F6 are both multimodal functions, but F5 makes it extremely difficult to locate the global optimum, while F6 is much easier. GEA and PSO have similar performance on F5 and F6, with GEA a little better than PSO, especially in standard deviation. Although neither of them can locate the real global minimum on F5, they still work better than FEA and BA. As indicated in Figure 3(e), although FEA has the fastest convergence rate, its convergence accuracy is poor, similar to that of BA. In addition, BA also performs well on F6, but it is still not better than GEA and PSO; their median values are very close to the real global minimum. The convergence rates of GEA, PSO, and BA are also similar, but FEA is trapped in a local optimum prematurely, as seen in Figure 3(f).
Table 2: Results of benchmark functions in 10 dimensions using four algorithms.

Function  Evaluations  Value    GEA        FEA        PSO        BA
F1        50000        Best     3.45E-37   2.31E-15   5.42E-38   1.20E-44
                       Worst    6.62E-35   1.95E-03   1.05E-34   5.57E-44
                       Mean     5.86E-36   4.83E-04   1.34E-35   2.41E-44
                       Median   1.44E-36   2.89E-04   1.48E-36   2.37E-44
                       StDev    1.46E-35   5.98E-04   2.88E-35   1.05E-44
F2        50000        Best     5.25E-11   1.23E+00   3.79E-04   3.74E-22
                       Worst    3.44E-04   7.27E+00   2.11E-01   5.11E+01
                       Mean     1.28E-04   3.89E+00   6.86E-02   7.84E+00
                       Median   5.03E-05   3.28E+00   2.97E-02   1.23E+00
                       StDev    1.20E-04   1.96E+00   8.52E-02   1.42E+01
F3        50000        Best     2.46E-02   2.34E-02   7.40E-03   2.46E-02
                       Worst    1.06E-01   3.37E-01   1.62E-01   2.63E-01
                       Mean     6.37E-02   1.36E-01   4.88E-02   1.24E-01
                       Median   6.52E-02   1.31E-01   3.69E-02   1.08E-01
                       StDev    2.39E-02   7.12E-02   3.82E-02   6.86E-02
F4        200000       Best     4.23E-29   4.60E-03   0E+00      1.78E+00
                       Worst    6.91E-10   6.16E+00   3.99E+00   6.98E+00
                       Mean     3.52E-11   2.99E+00   5.98E-01   4.44E+00
                       Median   3.88E-21   2.04E+00   0E+00      4.43E+00
                       StDev    1.54E-10   2.55E+00   1.46E+00   1.31E+00
F5        200000       Best     4.97E+00   6.96E+00   3.98E+00   6.96E+00
                       Worst    1.19E+01   2.69E+01   1.39E+01   2.69E+01
                       Mean     7.56E+00   1.71E+01   7.71E+00   1.77E+01
                       Median   7.46E+00   1.59E+01   7.46E+00   1.69E+01
                       StDev    1.69E+00   5.82E+00   2.77E+00   4.82E+00
F6        100000       Best     7.99E-15   1.16E+00   7.99E-15   7.99E-15
                       Worst    1.65E+00   3.40E+00   1.65E+00   2.32E+00
                       Mean     1.40E-01   2.11E+00   3.96E-01   7.13E-01
                       Median   7.99E-15   2.17E+00   7.99E-15   7.99E-15
                       StDev    4.38E-01   7.93E-01   6.34E-01   9.39E-01
4.3.2. Results for the 20D and 30D Benchmark Functions. Table 3 shows the four algorithms' computational results on the six 20D benchmark functions, and Table 4 shows the computational results on the six 30D benchmark functions. As the convergence graphs are similar to those of the 10D functions except for the convergence accuracy, they are not presented here.

It is easy to see that increasing the dimension of F1 did not change the nature of the performance of these algorithms; it only increased the number of evaluations needed to reach similar accuracy. The increased dimension of F2 reduces the convergence accuracy of GEA, PSO, and BA, but FEA is not affected, because FEA never gets good accuracy on it. The results of GEA and PSO on F2 in 20D and 30D may sometimes be acceptable, but those of FEA and BA are not. The results on F3 in 20D and 30D are better than those in 10D for all four algorithms; this problem is known to become easier as the dimension increases [26]. GEA performs best among the methods on F3, and increasing the problem's dimension brings the computed optimum closer to the real global optimum for all tested methods. For F4, F5, and F6, with increasing dimension none of these algorithms can locate the real global optimum or approach it closely, but GEA achieves the best optimization results of the algorithms compared. High dimensionality is a calamity for these problems and makes it difficult to locate their real global optimum.

4.3.3. Discussion. Analysis of the results of the four algorithms on the six functions with different dimensions indicates that GEA has better optimization performance than FEA, PSO, and BA. BA is suitable for simple unimodal problems, but it has an undistinguished performance on other problems, especially multimodal ones. FEA has a fast convergence rate, but its convergence accuracy is too poor on some problems. GEA and PSO sometimes have similar computational results, convergence rates, and accuracy on multimodal problems, but GEA performs more stably because of its changeable mutation operation and local search, which increase its capability of exploration and exploitation.
Figure 3: The median-value convergence characteristics of the 10D benchmark functions. (a) F1; (b) F2; (c) F3; (d) F4; (e) F5; (f) F6.
Table 3: Results of benchmark functions in 20 dimensions using four algorithms.

Function  Evaluations  Value    GEA        FEA        PSO        BA
F1        200000       Best     1.56E-36   2.26E-09   4.59E-30   4.07E-44
                       Worst    4.72E-35   6.20E-04   5.30E-15   1.18E-43
                       Mean     1.30E-35   1.84E-04   3.56E-16   7.51E-44
                       Median   7.39E-36   9.57E-05   4.45E-26   7.40E-44
                       StDev    1.37E-35   2.02E-04   1.22E-15   2.06E-44
F2        200000       Best     4.23E-03   1.24E+00   3.57E-03   8.71E-22
                       Worst    3.07E-01   4.32E+00   1.36E+00   1.31E+02
                       Mean     6.83E-02   3.11E+00   2.29E-01   6.54E+01
                       Median   5.30E-02   3.13E+00   1.10E-01   7.71E+01
                       StDev    6.86E-02   9.24E-01   3.32E-01   4.39E+01
F3        200000       Best     0E+00      1.95E-03   0E+00      1.11E-16
                       Worst    7.40E-03   8.13E-02   1.23E-02   7.13E-02
                       Mean     1.85E-03   2.74E-02   2.22E-03   2.93E-02
                       Median   1.11E-16   2.30E-02   1.67E-16   2.22E-02
                       StDev    3.29E-03   1.92E-02   4.07E-03   2.19E-02
F4        400000       Best     3.20E+00   2.21E-02   5.33E+00   1.39E+01
                       Worst    1.26E+01   4.93E+01   1.69E+01   1.51E+02
                       Mean     8.90E+00   1.80E+01   1.12E+01   3.92E+01
                       Median   9.34E+00   1.90E+01   1.16E+01   1.59E+01
                       StDev    2.00E+00   9.30E+00   2.56E+00   4.58E+01
F5        400000       Best     7.96E+00   1.29E+01   7.96E+00   1.68E+02
                       Worst    2.39E+01   8.56E+01   2.89E+01   2.35E+02
                       Mean     1.79E+01   4.55E+01   1.66E+01   1.93E+02
                       Median   1.89E+01   4.58E+01   1.64E+01   1.92E+02
                       StDev    3.73E+00   2.04E+01   6.29E+00   1.67E+01
F6        200000       Best     1.51E-14   1.18E+00   1.51E-14   1.33E+01
                       Worst    2.17E+00   3.95E+00   2.45E+00   1.54E+01
                       Mean     8.92E-01   2.95E+00   9.60E-01   1.44E+01
                       Median   1.16E+00   3.04E+00   1.16E+00   1.45E+01
                       StDev    7.86E-01   6.62E-01   7.89E-01   6.77E-01
The changeable mutation operation reduces the convergence rate, but the greedy strategy offsets this reduction. Therefore, GEA has a convergence rate as quick as those of PSO and BA, while its convergence accuracy is better than or similar to that of the others.

5. Conclusion

In this paper, a Guiding Evolutionary Algorithm (GEA) has been proposed to solve global optimization problems. The algorithm mixes advantages of Particle Swarm Optimization, the Bat Algorithm, and the Genetic Algorithm. The largest difference is that the GEA accepts a newly generated individual only when its fitness is better than that of the parent, whereas PSO and many versions of the GA always accept the new individual, and BA accepts the better solution according to some given probability. This mechanism guarantees that each individual stays in its historically best position while moving toward the global best position found to date. Compared with PSO and BA, the velocity was removed from GEA and replaced by a random walk step toward the current global best individual. After crossover with the global best individual, mutation is applied to the generated individual according to a mutation probability. To increase the algorithm's exploitation power, a local search mechanism is applied to a new individual according to a given probability of local search; this mechanism actually produces a random walk around the current global best individual.

Experimental results show that GEA can approximate the globally optimal solution very accurately in most of the problems tested. Compared with three other typical global optimization algorithms, GEA performs more accurately and stably. In future work, niching methods will be considered for addition to the GEA to solve multimodal problems yet more effectively.
Table 4: Results of benchmark functions in 30 dimensions using four algorithms.

Function  Evaluations  Value    GEA        FEA        PSO        BA
F1        400000       Best     9.32E-35   1.22E-08   6.14E-12   1.06E-43
                       Worst    6.01E-33   3.86E-04   2.15E-07   2.42E-43
                       Mean     1.28E-33   5.09E-05   1.99E-08   1.62E-43
                       Median   6.57E-34   1.86E-05   8.45E-10   1.62E-43
                       StDev    1.467E-33  8.90E-05   5.01E-08   3.41E-44
F2        400000       Best     2.18E-02   1.73E+00   7.29E-01   9.46E-22
                       Worst    6.98E-01   5.65E+00   9.51E-01   1.93E+02
                       Mean     2.66E-01   3.37E+00   3.90E-01   7.45E+01
                       Median   2.35E-01   3.30E+00   3.21E-01   6.32E+01
                       StDev    1.89E-01   1.11E+00   2.45E-01   7.04E+01
F3        400000       Best     8.88E-15   2.76E-02   4.68E-12   1.11E-16
                       Worst    9.86E-03   1.43E-01   9.88E-03   2.71E-02
                       Mean     1.36E-03   7.59E-02   2.10E-03   8.99E-03
                       Median   2.98E-13   6.67E-02   6.39E-06   8.63E-03
                       StDev    3.34E-03   3.53E-02   3.77E-03   7.79E-03
F4        600000       Best     1.37E+01   2.90E+00   2.01E+01   1.92E+01
                       Worst    2.39E+01   1.59E+02   2.92E+01   9.35E+01
                       Mean     2.05E+01   3.35E+01   2.44E+01   3.05E+01
                       Median   2.08E+01   2.88E+01   2.48E+01   2.44E+01
                       StDev    2.21E+00   3.42E+01   2.50E+00   1.70E+01
F5        600000       Best     1.39E+01   4.08E+01   1.09E+01   2.51E+02
                       Worst    3.28E+01   1.78E+02   3.38E+01   3.72E+02
                       Mean     2.30E+01   7.87E+01   2.24E+01   3.25E+02
                       Median   2.09E+01   6.87E+01   2.19E+01   3.26E+02
                       StDev    5.33E+00   3.53E+01   6.87E+00   2.99E+01
F6        300000       Best     9.31E-01   2.15E+00   3.48E-06   1.39E+01
                       Worst    2.41E+00   3.92E+00   2.01E+00   1.61E+01
                       Mean     1.72E+00   3.06E+00   1.10E+00   1.52E+01
                       Median   1.78E+00   3.01E+00   1.50E+00   1.54E+01
                       StDev    4.01E-01   6.64E-01   7.73E-01   5.90E-01
Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported in part by the US National Science Foundation's BEACON Center for the Study of Evolution in Action, funded under Cooperative Agreement no. DBI-0939454.

References

[1] I. Fister Jr., D. Fister, and X. S. Yang, "A hybrid bat algorithm," Elektrotehniski Vestnik, vol. 80, no. 1-2, pp. 1-7, 2013.
[2] W. L. Goffe, G. D. Ferrier, and J. Rogers, "Global optimization of statistical functions with simulated annealing," Journal of Econometrics, vol. 60, no. 1-2, pp. 65-99, 1994.
[3] D. Whitley, "A genetic algorithm tutorial," Statistics and Computing, vol. 4, no. 2, pp. 65-85, 1994.
[4] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the 6th International Symposium on Micro Machine and Human Science, vol. 1, pp. 39-43, Nagoya, Japan, October 1995.
[5] M. Dorigo, M. Birattari, and T. Stützle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.
[6] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing Journal, vol. 8, no. 1, pp. 687-697, 2008.
[7] X. S. Yang, "Firefly algorithm," in Engineering Optimization, pp. 221-230, John Wiley & Sons, New York, NY, USA, 2010.
[8] X.-S. Yang, "Firefly algorithm, stochastic test functions and design optimization," International Journal of Bio-Inspired Computation, vol. 2, no. 2, pp. 78-84, 2010.
[9] X. S. Yang, "A new metaheuristic bat-inspired algorithm," in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), vol. 284 of Studies in Computational Intelligence, pp. 65-74, Springer, Berlin, Germany, 2010.
[10] B. Y. Qu, P. N. Suganthan, and J. J. Liang, "Differential evolution with neighborhood mutation for multimodal optimization,"
IEEE Transactions on Evolutionary Computation, vol. 16, no. 5, pp. 601-614, 2012.
[11] G. Wang and L. Guo, "A novel hybrid bat algorithm with harmony search for global numerical optimization," Journal of Applied Mathematics, vol. 2013, Article ID 696491, 21 pages, 2013.
[12] H. Xu, C. Caramanis, and S. Mannor, "Sparse algorithms are not stable: a no-free-lunch theorem," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 1, pp. 187-193, 2012.
[13] S. Sanner and C. Boutilier, "Approximate linear programming for first-order MDPs," in Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI '05), pp. 509-517, Edinburgh, UK, July 2005.
[14] S. Yazdani, H. Nezamabadi-Pour, and S. Kamyab, "A gravitational search algorithm for multimodal optimization," Swarm and Evolutionary Computation, vol. 14, pp. 1-14, 2014.
[15] G. Obregon-Henao, B. Babadi, C. Lamus et al., "A fast iterative greedy algorithm for MEG source localization," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 6748-6751, San Diego, Calif, USA, September 2012.
[16] R. S. Parpinelli and H. S. Lopes, "New inspirations in swarm intelligence: a survey," International Journal of Bio-Inspired Computation, vol. 3, no. 1, pp. 1-16, 2011.
[17] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer, 2010.
[18] R. C. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the Congress on Evolutionary Computation, pp. 81-86, IEEE, Seoul, South Korea, May 2001.
[19] X.-S. Yang and A. Hossein Gandomi, "Bat algorithm: a novel approach for global engineering optimization," Engineering Computations, vol. 29, no. 5, pp. 464-483, 2012.
[20] P.-W. Tsai, J.-S. Pan, B.-Y. Liao, M.-J. Tsai, and V. Istanda, "Bat algorithm inspired algorithm for solving numerical optimization problems," Applied Mechanics and Materials, vol. 148-149, pp. 134-137, 2012.
[21] X.-S. Yang and X. He, "Bat algorithm: literature review and applications," International Journal of Bio-Inspired Computation, vol. 5, no. 3, pp. 141-149, 2013.
[22] I. Fister, S. Fong, J. Brest, and I. Fister, "A novel hybrid self-adaptive bat algorithm," The Scientific World Journal, vol. 2014, Article ID 709738, 12 pages, 2014.
[23] H. Liu, F. Gu, and X. Li, "A fast evolutionary algorithm with search preference," International Journal of Computational Science and Engineering, vol. 3/4, no. 3-4, pp. 197-212, 2012.
[24] Y. Tan and Y. Zhu, "Fireworks algorithm for optimization," in Advances in Swarm Intelligence, pp. 355-364, Springer, Berlin, Germany, 2010.
[25] X. S. Yang, "Flower pollination algorithm for global optimization," in Unconventional Computation and Natural Computation, pp. 240-249, Springer, Berlin, Germany, 2012.
[26] D. Whitley, S. Rana, J. Dzubera, and K. E. Mathias, "Evaluating evolutionary algorithms," Artificial Intelligence, vol. 85, no. 1-2, pp. 245-276, 1996.