Primal-Dual Asynchronous Particle Swarm Optimisation (pdAPSO) Hybrid Metaheuristic Algorithm for Solving Global Optimisation Problems
American Journal of Engineering Research (AJER), 2017. e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-6, Issue-4, pp-60-71, www.ajer.org. Research Paper, Open Access.

Dada Emmanuel Gbenga1, Joseph Stephen Bassi1 and Okunade Oluwasogo Adekunle2
1(Department of Computer Engineering, Faculty of Engineering, University of Maiduguri, Nigeria).
2(Department of Computer Science, Faculty of Science, National Open University of Nigeria (NOUN), Abuja, Nigeria).

ABSTRACT: Particle swarm optimisation (PSO) is a metaheuristic optimisation algorithm that has been used to solve complex optimisation problems. Interior Point Methods (IPMs) are now believed to be among the most robust numerical optimisation algorithms for solving large-scale nonlinear optimisation problems. To overcome the shortcomings of PSO, we propose the Primal-Dual Asynchronous Particle Swarm Optimisation (pdAPSO) algorithm. The Primal-Dual component provides a better balance between exploration and exploitation, preventing the particles from converging prematurely or being easily trapped in local minima, and so producing better results. We compared the performance of pdAPSO with nine state-of-the-art PSO algorithms on 13 benchmark functions. Our proposed algorithm has very high mean dependability, and it also converges faster than the other nine algorithms. For instance, on the Rosenbrock function, mean FE counts of 8938, 6786, 10,080, 9607, 11,680, 9287, 23,940, 6269 and 6198 are required by PSO-LDIW, CLPSO, pPSA, PSOrank, OLPSO-G, ELPSO, APSO-VI, DNSPSO and MSLPSO respectively to reach the global optimum, whereas pdAPSO requires only 2124, demonstrating the fastest convergence speed.
In summary, pdPSO and pdAPSO use the lowest number of FEs to arrive at acceptable solutions for all 13 benchmark functions.

Keywords: Asynchronous Particle Swarm Optimisation (A-PSO), Swarm Robots, Interior Point Method, Primal-Dual, gbest, and lbest.

I. INTRODUCTION

Venayagamoorthy et al. [1] described PSO as a stochastic population-based algorithm that operates on a set of candidate solutions (or particles) to optimise a directed performance measure [2]. The first PSO algorithm, proposed by Kennedy and Eberhart [3], was based on the social behaviour exemplified by a flock of birds, a school of fish, or a herd of animals. The algorithm uses a set of candidates called particles that undergo gradual changes through collaboration and competition among the particles from one generation to the next. PSO has been used to solve non-differentiable [4], non-linear [5], and non-convex engineering problems [6]. Abraham, Konar and Das [7] opined that PSO is theoretically straightforward and does not require any sophisticated computation. PSO uses only a few parameters, which have minimal influence on the results, unlike many other optimisation algorithms. This property also applies to the initial generation of the algorithm: the randomness of the initial generation will not affect the output produced. Despite these advantages, PSO faces shortcomings similar to those of other optimisation algorithms. Specifically, the PSO algorithm suffers from premature convergence, inability to solve dynamic optimisation problems, the tendency of particles to be trapped in local minima, and partial optimism (which degrades the regulation of its speed and direction). Hypothetically, particle swarm optimisation is an appropriate tool for addressing many optimisation problems in different fields of human endeavour.
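The collaborative update rule described above can be sketched in a few lines of Python. This is an illustrative sketch only: the sphere benchmark, swarm size, and coefficient values (w, c1, c2) are common textbook defaults, not the settings used for pdAPSO in this paper.

```python
import random

def pso_sphere(dim=2, n_particles=10, iters=300, seed=1):
    """Minimal PSO minimising the sphere function f(x) = sum(x_i^2).

    Illustrative only: w, c1, c2 are widely used default values,
    not the parameters of the pdAPSO algorithm.
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.72, 1.49, 1.49
    f = lambda x: sum(v * v for v in x)

    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]        # personal best position of each particle
    gbest = min(pbest, key=f)[:]       # best position found by the whole swarm

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
            # asynchronous flavour: gbest is refreshed after every particle
            # evaluation, not once per generation as in synchronous PSO
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]
    return gbest, f(gbest)

best_pos, best_val = pso_sphere()
```

The inner comment marks the one place where asynchronous PSO differs from the synchronous variant: the global best is updated immediately after each particle is evaluated, so later particles in the same generation already see the improved information.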
Notwithstanding the good convergence rate of PSO, the algorithm cannot efficiently handle dynamic optimisation tasks, which are crucial in fields such as stock market prediction, crude oil price prediction, swarm robotics, and space projects. These inadequacies explain the recent drive to develop new variants of the PSO algorithm for solving complex tasks. It has become customary to develop hybrid PSO heuristic algorithms to tackle the weaknesses of existing PSO variants. The idea behind the development of pdAPSO is to improve the performance of PSO in solving global optimisation problems through the hybridisation of APSO and Primal-Dual algorithms. In this study, we propose a fusion of asynchronous PSO with the Primal-Dual Interior-Point method to resolve the common issues afflicting the PSO algorithm. Two key components of this implementation are the explorative capacity of PSO and the exploitative capability of the Primal-Dual Interior-Point algorithm. On the one hand, exploration is key in searching (i.e., traversing the search landscape) to provide reliable approximations of the global optimum [8]. On the other, exploitation is critical to focusing the search on the promising solutions resulting from exploration to produce more refined results [9]. State-of-the-art Interior-Point algorithms have gained popularity as the preferred approach for solving large-scale linear programming problems [10]. They are, however, limited by their inability to solve problems that are unstable in nature, because contemporary Interior-Point algorithms cannot cope with the demands of very large numbers of constraints.
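The exploitative behaviour of an interior-point method can be illustrated with a one-dimensional log-barrier sketch. This is a deliberate simplification of the primal-dual machinery (no dual variables are carried explicitly) and not the authors' implementation: for the toy problem of minimising x subject to x >= 1, the barrier objective phi(x) = x - mu*log(x - 1) has minimiser x = 1 + mu, so shrinking mu traces the central path toward the constrained optimum x* = 1.

```python
def barrier_minimize(mu, x0=2.0, steps=50):
    """Damped Newton iteration on phi(x) = x - mu*log(x - 1),
    the log-barrier reformulation of: minimise x subject to x >= 1.
    The analytic minimiser is x = 1 + mu, so driving mu -> 0 moves the
    iterate along the central path toward the constrained optimum x* = 1.
    (Toy illustration of the barrier idea, not a full primal-dual IPM.)
    """
    x = x0
    for _ in range(steps):
        grad = 1.0 - mu / (x - 1.0)        # phi'(x)
        hess = mu / (x - 1.0) ** 2         # phi''(x) > 0 on the interior
        step = grad / hess                 # full Newton step
        t = 1.0
        while x - t * step <= 1.0:         # backtrack to stay strictly feasible
            t *= 0.5
        x -= t * step
    return x

# Central path: the barrier minimiser approaches x* = 1 as mu shrinks.
path = [barrier_minimize(mu) for mu in (1.0, 0.1, 0.01)]
```

Each Newton refinement stays strictly inside the feasible region and homes in on the current barrier minimiser; this rapid local refinement is the exploitation the hybrid draws on, complementing the global exploration of the swarm.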
Efforts to increase the efficiency of the Interior-Point algorithm have led to the development of variants that can handle unstable linear programming problems. These algorithms lower the amount of work per iteration by operating on a small number of constraints, thereby reducing computational processing time drastically [11].

II. REVIEW OF RELEVANT WORK

New variants of the PSO algorithm can be developed by fusing it with already-tested approaches that have been successfully applied to solve complex optimisation problems. Academics and researchers have improved the performance of PSO by integrating into it the fundamentals of other well-known methods. Some researchers have also made efforts to increase the performance of popular evolutionary algorithms such as the Genetic Algorithm, Ant Colony Optimisation, and Differential Evolution by infusing the position and velocity update equations of PSO. The purpose of the integration is to help PSO overcome some of its setbacks, such as premature convergence, particles being trapped in local minima, and partial optimisation. Robinson et al. [12] developed the GA-PSO and PSO-GA hybrids and used them to solve a specific electromagnetic antenna design problem. The results of their experiments revealed that the PSO-GA hybrid performs better than GA-PSO, standard PSO alone, and GA alone. The same year, a hybridisation of GA and the Hill Climbing algorithm was proposed and used to solve unconstrained global optimisation problems [13]. Conradie, Miikkulainen, and Aldrich [14] developed the symbiotic memetic neuro-evolution (SMNE) algorithm by hybridising PSO with a symbiotic genetic algorithm, and used it for neural network controllers in a reinforcement learning context. Grimaldi et al. [15] developed genetical swarm optimisation (GSO) by hybridising PSO and GA, and later used the algorithm to solve combinatorial optimisation problems.
They presented different hybridisation approaches in [16], validated GSO on different multimodal benchmark problems, and applied it in different domains, as demonstrated in Gandelli et al. [17], Grimaccia et al. [18] and Gandelli et al. [19]. Hendtlass [20] proposed the Swarm Differential Evolution Algorithm (SDEA), in which the PSO swarm acts as the population for the Differential Evolution (DE) algorithm and DE is run for some generations; after DE has performed its part of the optimisation, the resulting population is then optimised by PSO. Talbi and Batouche [21] developed the DEPSO algorithm and used it to solve problems in the field of medical image processing. Hao et al. [22] introduced another variant of DEPSO in which probability distribution rules determine whether PSO or DE is used to produce the best solution. Omran et al. [23] developed the Bare Bones Differential Evolution (BBDE) algorithm, which combines the ideas of bare-bones PSO and self-adaptive DE, and applied it to an image classification problem. Jose et al. [24] developed another variant of DEPSO that uses the differential mutation scheme of DE to update the velocities of the particles in the swarm. Zhang et al. [25] proposed the DE-PSO algorithm, which uses three unconventional updating approaches. Liu et al. [26] developed the PSO-DE algorithm, which combines DE with PSO and uses DE to update the former best positions of PSO particles so that they escape local attractors, thereby avoiding stagnation in the population. Caponio et al. [27] proposed the Superfit Memetic Differential Evolution (SFMDE) algorithm, a hybrid of DE, PSO, the Nelder-Mead algorithm, and the Rosenbrock algorithm, and used it to solve standard benchmark and engineering problems. Xu and Gu [28] developed particle swarm optimisation with prior crossover differential evolution (PSOPDE). Pant et al.
[29] reported a DE-PSO algorithm that uses DE for the initial optimisation process and then moves on to the PSO segment