PARTICLE SWARM OPTIMIZATION STABILITY ANALYSIS

A Thesis Submitted to The School of Engineering of the University of Dayton, in Partial Fulfillment of the Requirements for the Degree of Master of Science in Electrical Engineering

by Ouboti Seydou Eyanaa Djaneye-Boundjou

UNIVERSITY OF DAYTON
Dayton, Ohio
December 2013

Name: Djaneye-Boundjou, Ouboti Seydou Eyanaa

APPROVED BY:

Raúl Ordóñez, Ph.D.
Advisor Committee Chairman
Professor, Electrical and Computer Engineering

Russell Hardie, Ph.D.
Committee Member
Professor, Electrical and Computer Engineering

Malcolm Daniels, Ph.D.
Committee Member
Associate Professor, Electrical and Computer Engineering

John G. Weber, Ph.D.
Associate Dean, School of Engineering

Tony E. Saliba, Ph.D., P.E.
Dean, School of Engineering and Wilke Distinguished Professor

© Copyright by Ouboti Seydou Eyanaa Djaneye-Boundjou. All rights reserved. 2013.

ABSTRACT

Name: Djaneye-Boundjou, Ouboti Seydou Eyanaa
University of Dayton
Advisor: Dr. Raúl Ordóñez

Optimizing a multidimensional function, whether uni-modal or multi-modal, is a problem that regularly arises in engineering and science. Evolutionary Computation techniques, including Evolutionary Algorithms and Swarm Intelligence (SI), are biologically inspired search methods often used to solve optimization problems. In this thesis, the SI technique Particle Swarm Optimization (PSO) is studied. The convergence and stability of swarm optimizers have been subjects of PSO research. Here, using discrete-time adaptive control tools found in the literature, an adaptive particle swarm optimizer is developed. An error system is devised, and a controller is designed to adaptively drive the error to zero. The controller features a function approximator, used here as a predictor to estimate future signals. Through Lyapunov's direct method, it is shown that the devised error system is uniformly ultimately bounded and that the adaptive optimizer is stable. Moreover, through the LaSalle-Yoshizawa theorem, it is also shown that the error system goes to zero as time evolves. Experiments are performed on a variety of benchmark functions, and results are provided comparing the adaptive optimizer with other algorithms found in the literature.

To my family: Gbandi Djaneye-Boundjou, Akoua Maguiloubè Tatcho, Bilaal Djaneye-Boundjou and Jamila Djaneye-Boundjou.

ACKNOWLEDGMENTS

First, I want to thank God for letting me live to see this thesis through. I am thankful to the members of my committee, Dr. Russell Hardie, Dr. Asari Vijayan and Dr. Malcolm Daniels, for their time and for agreeing to serve on my committee. Special thanks to my advisor, Dr. Raúl Ordóñez, for his exquisite attention to detail and for guiding me throughout my graduate studies and this research. I am also grateful to Dr. Veysel Gazi, who, along with Dr. Ordóñez, helped me a great deal while I was working on my thesis. It is only fitting that I thank Brother Maximin Magnan, SM, and Dr. Amy Anderson, without whom I would in all probability not be attending this university in the first place. Last but not least, I want to thank family and friends for their love, support and encouragement.

TABLE OF CONTENTS

Abstract
Dedication
Acknowledgments
List of Figures
List of Tables

CHAPTERS:

I. INTRODUCTION
  1.1 Problem Statement and PSO Algorithm
  1.2 Literature Study on Stability and Convergence
    1.2.1 PSO Models
    1.2.2 Velocity Clamping
    1.2.3 Inertia Factor
    1.2.4 Theoretical Approach
II. MOTIVATION
III. STABILITY ANALYSIS
IV. TEST FUNCTIONS
V. THE ADAPTIVE PSO
  5.1 Designing the Adaptive PSO
  5.2 Pseudocode
  5.3 Understanding the Adaptive PSO
    5.3.1 Origin-Bias
    5.3.2 Predictor
VI. PERFORMANCE EVALUATION
VII. CONCLUSION
  7.1 APSO Characteristics
  7.2 On Being Biased
  7.3 Testing the Predictor
  7.4 APSO Performance
  7.5 Future Work
Bibliography

Appendices:

A. MATLAB CODE FOR ADAPTIVE PSO
  1.1 Main program
  1.2 Functions used
    1.2.1 Search space set up
    1.2.2 Computing the fitness
    1.2.3 Dead-zone modification
B. MATLAB CODE FOR EXPOSING ORIGIN-BIAS
  2.1 Main program
  2.2 Functions used
C. MATLAB CODE FOR PERFORMANCE COMPARISON TABLE
  3.1 Main program
  3.2 Functions used
D. MATLAB CODE FOR FIGURES OF MERIT: CONVERGENCE AND TRANSIENT COMPARISON
  4.1 Main program
  4.2 Functions used

LIST OF FIGURES

4.1 Contour plots of the test functions
5.1 Optimizing f1, n = 2 with the search space origin at xo5
5.2 Optimizing f2, n = 2 with the search space origin at xo5
5.3 Optimizing f3, n = 2 with the search space origin at xo5
5.4 Optimizing f4, n = 2 with the search space origin at xo5
5.5 Optimizing f5, n = 2 with the search space origin at xo5
5.6 Optimizing f6, n = 2 with the search space origin at xo5
6.1 Optimizing f1, n = 30 with the search space origin at xo1
6.2 Optimizing f1, n = 30 with the search space origin at xo2
6.3 Optimizing f1, n = 30 with the search space origin at xo3
6.4 Optimizing f1, n = 30 with the search space origin at xo4
6.5 Optimizing f2, n = 30 with the search space origin at xo1
6.6 Optimizing f2, n = 30 with the search space origin at xo2
6.7 Optimizing f2, n = 30 with the search space origin at xo3
6.8 Optimizing f2, n = 30 with the search space origin at xo4
6.9 Optimizing f3, n = 30 with the search space origin at xo1
6.10 Optimizing f3, n = 30 with the search space origin at xo2
6.11 Optimizing f3, n = 30 with the search space origin at xo3
6.12 Optimizing f3, n = 30 with the search space origin at xo4
6.13 Optimizing f4, n = 30 with the search space origin at xo1
6.14 Optimizing f4, n = 30 with the search space origin at xo2
6.15 Optimizing f4, n = 30 with the search space origin at xo3
6.16 Optimizing f4, n = 30 with the search space origin at xo4
6.17 Optimizing f5, n = 30 with the search space origin at xo1
6.18 Optimizing f5, n = 30 with the search space origin at xo2
6.19 Optimizing f5, n = 30 with the search space origin at xo3
6.20 Optimizing f5, n = 30 with the search space origin at xo4
6.21 Optimizing f6, n = 30 with the search space origin at xo1
6.22 Optimizing f6, n = 30 with the search space origin at xo2
6.23 Optimizing f6, n = 30 with the search space origin at xo3
6.24 Optimizing f6, n = 30 with the search space origin at xo4
LIST OF TABLES

4.1 Test functions
5.1 Impact of moving $x_a^*$ inside $S_x$
6.1 Performance comparison

CHAPTER I

INTRODUCTION

Optimizing a multidimensional function, whether uni-modal or multi-modal, is a problem that regularly arises in engineering and science. Evolutionary Computation (EC) techniques, including Evolutionary Algorithms and Swarm Intelligence (SI) [1, 2, 3, 4, 5, 6], are biologically inspired search methods often used to solve complex optimization problems. Developed in 1995 by Eberhart and Kennedy [7, 8, 9, 10], Particle Swarm Optimization (PSO) is a proven, powerful, efficient and effective SI technique [11, 12]. PSO is a population-based stochastic optimization method inspired by the social interaction observed in bird flocking and fish schooling. The particles in a swarm optimizer are potential solutions to the optimization problem. Their positions and velocities are randomly initialized at the start of the search. Each particle then dynamically adjusts its velocity based on its previous behavior and moves about the problem space seeking an optimum solution. At every generation, each particle computes the best solution it has individually achieved so far, referred to as its personal best, and the best solution achieved up to that point by the particles in its neighborhood, referred to as the local best when a local neighborhood topology is employed (not all particles coexist in the same neighborhood) or the global best when a global neighborhood topology is employed (all particles are neighbors of one another). Particles are attracted towards weighted averages of their personal best and their local or global best.

1.1 Problem Statement and PSO Algorithm

Let $f : \mathbb{R}^n \to \mathbb{R}$ be a cost function. It is our desire to optimize the cost $f$ using the PSO technique. To do so, we can choose from the various PSO algorithms studied in the literature. Here, we use the PSO variant with an inertia factor, which was absent from the original algorithm [7, 8]. For $i = 1, 2, \ldots, N$, where $N$ is the number of particles in the swarm, the dynamics of particle $i$, as described in [13], are given by the equations

$$\begin{aligned} v^i(t+1) &= \omega\, v^i(t) + \varphi_1(t)\big(p^i(t) - x^i(t)\big) + \varphi_2(t)\big(g^i(t) - x^i(t)\big), \\ x^i(t+1) &= x^i(t) + v^i(t+1), \end{aligned} \tag{1.1}$$

where all product operations are performed element-wise. Here $x^i(t)$ and $v^i(t)$ are the position and velocity of particle $i$, $p^i(t)$ is its personal best, $g^i(t)$ is its local (or global) best, $\omega$ is the inertia factor, and $\varphi_1(t)$ and $\varphi_2(t)$ are time-varying weighting vectors. The system in (1.1) is a discrete-time system: the variable $t$ denotes the generation (iteration) index, not continuous time.
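For concreteness, the sketch below shows one way to implement the inertia-weight, global-best PSO iteration in (1.1) in MATLAB. It is a minimal illustration, not the adaptive optimizer developed in this thesis nor the code in the appendices; the Sphere cost function, the parameter values ($N = 20$ particles, $\omega = 0.7$, $c_1 = c_2 = 1.5$) and the realization of $\varphi_1(t)$ and $\varphi_2(t)$ as $c_1 r_1(t)$ and $c_2 r_2(t)$, with $r_1, r_2$ uniformly random vectors, are assumptions chosen for the example, following common practice in the PSO literature.

```matlab
% Minimal global-best PSO sketch implementing the dynamics in (1.1).
% Illustrative only: cost function and parameter values are assumptions.
f  = @(x) sum(x.^2, 1);      % example cost: Sphere function (assumed)
n  = 2;                      % problem dimension
N  = 20;                     % number of particles
T  = 200;                    % number of generations
w  = 0.7;                    % inertia factor (omega in (1.1))
c1 = 1.5;  c2 = 1.5;         % acceleration coefficients (assumed)

x  = 10*rand(n, N) - 5;      % random initial positions in [-5, 5]^n
v  = 2*rand(n, N) - 1;       % random initial velocities
p  = x;                      % personal bests
fp = f(p);                   % personal-best fitnesses (1 x N)
[fg, idx] = min(fp);         % global best fitness and index
g  = p(:, idx);              % global best position

for t = 1:T
    % phi1, phi2 realized as c*rand; products element-wise, as in (1.1)
    phi1 = c1*rand(n, N);
    phi2 = c2*rand(n, N);
    v = w*v + phi1.*(p - x) + phi2.*(g - x);   % velocity update; the n x 1
    x = x + v;                                 % vector g expands across columns
    fx = f(x);                                 % evaluate new positions
    improved = fx < fp;                        % improved particles (minimization)
    p(:, improved) = x(:, improved);           % update personal bests
    fp(improved) = fx(improved);
    [fmin, idx] = min(fp);                     % refresh the global best
    if fmin < fg
        fg = fmin;
        g  = p(:, idx);
    end
end
fprintf('best fitness found: %g\n', fg);
```

Note that with a global neighborhood topology every particle is attracted to the same best position, so $g^i(t) = g(t)$ for all $i$; under a local topology, $g^i(t)$ in (1.1) would instead be the best position found within particle $i$'s own neighborhood. Whether the resulting trajectories remain bounded depends on the choice of $\omega$, $\varphi_1$ and $\varphi_2$, which is precisely the stability question examined in the chapters that follow.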