PARTICLE SWARM OPTIMIZATION

Thesis

Submitted to

The School of Engineering of the

UNIVERSITY OF DAYTON

In Partial Fulfillment of the Requirements for

The Degree of

Master of Science in Electrical Engineering

By

SaiPrasanth Devarakonda

UNIVERSITY OF DAYTON

Dayton, Ohio

May, 2012

PARTICLE SWARM OPTIMIZATION

Name: Devarakonda, SaiPrasanth

APPROVED BY:

Raul Ordonez, Ph.D.
Advisor Committee Chairman
Associate Professor
Electrical & Computer Engineering

John Loomis, Ph.D.
Committee Member
Associate Professor
Electrical & Computer Engineering

Robert Penno, Ph.D.
Committee Member
Associate Professor
Electrical & Computer Engineering

John G. Weber, Ph.D.
Associate Dean
School of Engineering

Tony E. Saliba, Ph.D.
Dean, School of Engineering
& Wilke Distinguished Professor

ABSTRACT

PARTICLE SWARM OPTIMIZATION

Name: Devarakonda, SaiPrasanth
University of Dayton

Advisor: Dr. Raul Ordonez

The particle swarm algorithm is a computational method that optimizes a problem iteratively. Since the neighborhood determines the sufficiency and frequency of information flow, both static and dynamic neighborhoods are discussed. The characteristics of the different methods for selecting the algorithm for a particular problem are summarized. The performance of particle swarm optimization with a dynamic neighborhood is investigated using three different methods. In the present work two more benchmark functions are tested using the algorithm. Conclusions reflecting the performance of PSO with a dynamic neighborhood are drawn by testing the different benchmark functions, and all the benchmark functions are analyzed with both synchronous and asynchronous PSO algorithms.

This thesis is dedicated to my grandmother Jogi Lakshmi Narasamma.

ACKNOWLEDGMENTS

I would like to thank my advisor Dr. Raul Ordonez for being my mentor and guide, for personally supporting me during my graduate studies and while carrying out the thesis work, and for offering me excellent ideas. I also wish to express my deepest gratitude to Dr. Veysel Gazi, who along with my advisor offered me his help while I was working on my thesis. I would also like to thank Dr. John Loomis and Dr. Robert Penno for serving as committee members. I would like to express my appreciation to my brother, who has helped with my work. I would like to thank everyone in the Electrical Department for making me feel comfortable during my two and a half years of study at the University of Dayton. Finally, I thank my family for their support and love in all activities during my time in the graduate program.

TABLE OF CONTENTS

ABSTRACT

DEDICATION

ACKNOWLEDGMENTS

LIST OF FIGURES

LIST OF TABLES

CHAPTER:

1. INTRODUCTION
   1.1 Particle Swarm Optimization Algorithm
       1.1.1 Particle Swarm Optimization with Constriction Factor
   1.2 Hybrid Particle Swarm Optimization Algorithms
   1.3 Parallel and Distributed Implementation
   1.4 Multi Objective Optimization
   1.5 Stability and Convergence Analysis
   1.6 Application Areas
       1.6.1 Neural Network Training
       1.6.2 Dynamic Tracking
       1.6.3 Multi-Agent Search
       1.6.4 Wireless-Sensor Networks
       1.6.5 Optimal Design of Power Grids
       1.6.6 PSO for Multi User Detection in CDMA

2. NEIGHBORHOOD TOPOLOGIES
   2.1 Static Neighborhood
   2.2 Dynamic Neighborhood
       2.2.1 Nearest Neighbors in Search Space
       2.2.2 Nearest Neighbors in Function Space
       2.2.3 Random Neighborhood
   2.3 Synchronous and Asynchronous PSO
       2.3.1 Synchronous PSO
       2.3.2 Asynchronous PSO

3. RESULTS - I

4. RESULTS - II

5. RESULTS - III
   5.1 Synchronous PSO
   5.2 Asynchronous PSO

6. CONCLUSIONS
   6.1 Design Guidelines
   6.2 Future Work

BIBLIOGRAPHY

Appendices:

A. MATLAB CODE FOR SYNCHRONOUS PSO ALGORITHM FOR DYNAMIC NEIGHBORHOOD FOR DEJONGF4 FUNCTION

B. MATLAB CODE FOR ASYNCHRONOUS PSO ALGORITHM FOR DYNAMIC NEIGHBORHOOD FOR DEJONGF4 FUNCTION

C. MATLAB CODE FOR SYNCHRONOUS PSO ALGORITHM FOR NO OF PARTICLES AS PARAMETER FOR DEJONGF4 FUNCTION

D. MATLAB CODE FOR ASYNCHRONOUS PSO ALGORITHM FOR DYNAMIC NEIGHBORHOOD FOR DEJONGF4 FUNCTION

LIST OF FIGURES

2.1 Static Neighborhood Topologies.
2.2 Nearest neighbors in search space.
2.3 Nearest neighbors in function space.
3.1 Contour plots of all six benchmark functions.
3.2 Distance between particles in search space against average global value for a Sphere function.
3.3 Distance between particles in search space against average global value for a Griewank function.
3.4 Distance between particles in search space against average global value for a Rastrigin function.
3.5 Distance between particles in search space against average global value for a Rosenbrock function.
3.6 Distance between particles in search space against average global value for an Ackley function.
3.7 Distance between particles in search space against average global value for a DejongF4 function.
3.8 Distance between particles in function space against average global value for a Sphere function.
3.9 Distance between particles in function space against average global value for a Griewank function.
3.10 Distance between particles in function space against average global value for a Rastrigin function.
3.11 Distance between particles in function space against average global value for a Rosenbrock function.
3.12 Distance between particles in function space against average global value for an Ackley function.
3.13 Distance between particles in function space against average global value for a DejongF4 function.
3.14 Probability of particles being neighbors against mean global best value for a Sphere function.
3.15 Probability of particles being neighbors against mean global best value for a Griewank function.
3.16 Probability of particles being neighbors against mean global best value for a Rastrigin function.
3.17 Probability of particles being neighbors against mean global best value for a Rosenbrock function.
3.18 Probability of particles being neighbors against mean global best value for an Ackley function.
3.19 Probability of particles being neighbors against mean global best value for a DejongF4 function.
3.20 Distance between particles in search space against average global best value for Synchronous PSO.
3.21 Distance between particles in search space against average global best value for Asynchronous PSO.
4.1 Distance between particles in search space against average global value for a Sphere function.
4.2 Distance between particles in search space against average global value for a Griewank function.
4.3 Distance between particles in search space against average global value for a Rastrigin function.
4.4 Distance between particles in search space against average global value for a Rosenbrock function.
4.5 Distance between particles in search space against average global value for an Ackley function.
4.6 Distance between particles in search space against average global value for a DejongF4 function.
4.7 Distance between particles in function space against average global value for a Sphere function.
4.8 Distance between particles in function space against average global value for a Griewank function.
4.9 Distance between particles in function space against average global value for a Rastrigin function.
4.10 Distance between particles in function space against average global value for a Rosenbrock function.
4.11 Distance between particles in function space against average global value for an Ackley function.
4.12 Distance between particles in function space against average global value for a DejongF4 function.
4.13 Distance between particles in random neighborhood against average global value for a Sphere function.
4.14 Distance between particles in random neighborhood against average global value for a Griewank function.
4.15 Distance between particles in random neighborhood against average global value for a Rastrigin function.
4.16 Distance between particles in random neighborhood against average global value for a Rosenbrock function.
4.17 Probability of particles being neighbors against mean global best value for an Ackley function.
4.18 Probability of particles being neighbors against mean global best value for a DejongF4 function.
4.19 Neighborhood size expressed as percentage of function space against average global best value for Synchronous PSO.
4.20 Neighborhood size expressed as percentage of function space against average global best value for Asynchronous PSO.
5.1 Average global best value versus No. of Neighbors for Sphere function.
5.2 Average global best value versus No. of Neighbors for Griewank function.
5.3 Average global best value versus No. of Neighbors for Rastrigin function.
5.4 Average global best value versus No. of Neighbors for Rosenbrock function.
5.5 Average global best value versus No. of Neighbors for Ackley function.
5.6 Average global best value versus No. of Neighbors for DejongF4 function.
5.7 Average global best value versus No. of Neighbors for Sphere function.
5.8 Average global best value versus No. of Neighbors for Griewank function.
5.9 Average global best value versus No. of Neighbors for Rastrigin function.
5.10 Average global best value versus No. of Neighbors for Rosenbrock function.
5.11 Average global best value versus No. of Neighbors for Ackley function.
5.12 Average global best value versus No. of Neighbors for DejongF4 function.
6.1 Comparison of Synchronous PSO and Asynchronous PSO in Static Neighborhood.
6.2 Comparison of Synchronous PSO and Asynchronous PSO in Dynamic Neighborhood.

LIST OF TABLES

3.1 Results for neighborhood determination based on nearest neighbors in the search space.
3.2 Results for neighborhood determination based on nearest neighbors in the search space.
3.3 Results for neighborhood determination based on nearest neighbors in the search space.
3.4 Results for neighborhood determination based on nearest neighbors in the search space.
3.5 Results for neighborhood determination based on nearest neighbors in the search space.
3.6 Results for neighborhood determination based on nearest neighbors in the search space.
3.7 Results for neighborhood determination based on nearest neighbors in the function space.
3.8 Results for neighborhood determination based on nearest neighbors in the function space.
3.9 Results for neighborhood determination based on nearest neighbors in the function space.
3.10 Results for neighborhood determination based on nearest neighbors in the function space.
3.11 Results for neighborhood determination based on nearest neighbors in the function space.
3.12 Results for neighborhood determination based on nearest neighbors in the function space.
3.13 Results for neighborhood determination based on random neighborhood.
3.14 Results for neighborhood determination based on random neighborhood.
3.15 Results for neighborhood determination based on random neighborhood.
3.16 Results for neighborhood determination based on random neighborhood.
3.17 Results for neighborhood determination based on random neighborhood.
3.18 Results for neighborhood determination based on random neighborhood.
4.1 Results for neighborhood determination based on nearest neighbors in the search space.
4.2 Results for neighborhood determination based on nearest neighbors in the search space.
4.3 Results for neighborhood determination based on nearest neighbors in the search space.
4.4 Results for neighborhood determination based on nearest neighbors in the search space.
4.5 Results for neighborhood determination based on nearest neighbors in the search space.
4.6 Results for neighborhood determination based on nearest neighbors in the search space.
4.7 Results for neighborhood determination based on nearest neighbors in the function space.
4.8 Results for neighborhood determination based on nearest neighbors in the function space.
4.9 Results for neighborhood determination based on nearest neighbors in the function space.
4.10 Results for neighborhood determination based on nearest neighbors in the function space.
4.11 Results for neighborhood determination based on nearest neighbors in the function space.
4.12 Results for neighborhood determination based on nearest neighbors in the function space.
4.13 Results for neighborhood determination based on nearest neighbors in the random neighborhood.
4.14 Results for neighborhood determination based on nearest neighbors in the random neighborhood.
4.15 Results for neighborhood determination based on nearest neighbors in the random neighborhood.
4.16 Results for neighborhood determination based on nearest neighbors in the random neighborhood.
4.17 Results for neighborhood determination based on random neighborhood.
4.18 Results for neighborhood determination based on random neighborhood.
5.1 Results for particle convergence based on number of neighbors for δ=800.
5.2 Results for particle convergence based on number of neighbors for δ=2500.
5.3 Results for particle convergence based on number of neighbors for δ=40.
5.4 Results for particle convergence based on number of neighbors for δ=10.
5.5 Results for particle convergence based on number of neighbors for δ=400.
5.6 Results for particle convergence based on number of neighbors for δ=200.
5.7 Results for particle convergence based on number of neighbors for δ=800.
5.8 Results for particle convergence based on number of neighbors for δ=2500.
5.9 Results for particle convergence based on number of neighbors for δ=40.
5.10 Results for particle convergence based on number of neighbors for δ=10.
5.11 Results for particle convergence based on number of neighbors for δ=400.
5.12 Results for particle convergence based on number of neighbors for δ=200.

CHAPTER 1

INTRODUCTION

Particle swarm optimization (PSO) is an effective computation technique developed by Kennedy and Eberhart in 1995. Like genetic algorithms, hill climbing, etc., particle swarm optimization is a population-based search algorithm initialized with random solutions referred to as particles. Unlike the other computation techniques, each particle in PSO has a velocity. With this velocity each particle moves within the search space and dynamically adjusts its velocity according to its previous behavior. Therefore, particles tend to move towards better points within the search space. Since the method is easy to implement and has various application areas, many researchers have conducted studies on PSO. These studies can be categorized as particle swarm optimization algorithms, neighborhood topologies used in particle swarm optimization, parameter adjustment of particle swarm optimization algorithms, hybrid particle swarm optimization algorithms, stability analysis of particle swarm optimization, and applications of the particle swarm optimization method.

1.1 Particle Swarm Optimization Algorithm

The basic particle swarm optimization algorithm is developed exploiting social

model simulations. The method is developed with inspiration from flocking of birds

and schooling of fish. The PSO method was first designed to simulate behavior of

birds searching for food in a bounded area. A single bird would find food through

social cooperation with other birds in the flock, i.e., with its neighbors. Later, the

method was extended for multi-dimensional search, and neighborhood topologies are

considered to determine the relationship between particles in a swarm. The particle

swarm optimization algorithm with dynamic neighborhood topology for every particle

(i = 1,...,N) can be described as

    v^i(t+1) = χ[ v^i(t) + ϕ1^i(t)(p^i(t) − x^i(t)) + ϕ2^i(t)(g^i(t) − x^i(t)) ],      (1.1)
    x^i(t+1) = x^i(t) + v^i(t+1),

where x^i(t) ∈ R^n is the position of the ith particle at time t, p^i(t) ∈ R^n is the best position achieved by the ith particle until time t, g^i(t) ∈ R^n is the best position achieved by the ith particle and its neighbors until time t, v^i(t) ∈ R^n is the rate of position change (velocity) of the ith particle at time t, and N is the number of particles in the swarm. The coefficients ϕ1^i(t) ∈ [0, ϕ̄1]^n and ϕ2^i(t) ∈ [0, ϕ̄2]^n are n-dimensional uniformly distributed random vectors referred to as the cognitive and social learning coefficients, respectively. They determine the relative significance of the cognitive and social components.

The first equation in (1.1) shows how particles update their velocities dynamically during search, while the second equation shows how particles adjust their positions according to their updated velocities. The first equation in (1.1) has three components.

The first component is the momentum component: updating the velocity according to the current velocity prevents a rapid change in velocity. The second component is the cognitive component, which shows that particles have memory and are able to use their previous experiences while determining their velocity in the search space. The last component is referred to as the social component, which reflects the social cooperation ability of the particles in the swarm, i.e., the particles' ability to exploit their neighbors' experiences while determining their velocity in the search space.

The sum of the three components designated in (1.1) could result in large velocity values. In such cases the algorithm is said to exhibit "explosion" behavior, where high values of the updated velocity prevent the particles from converging and they scatter through the search space. Vmax is the most significant parameter in the basic

PSO algorithm affecting its performance, and it is the only parameter that needs to be adjusted in order to use the basic PSO algorithm. A large value of Vmax causes the particles to search in a larger area and to move far from the areas having good solutions, while a small value causes the particles to search within a smaller area and to possibly get trapped in local minima. In order to prevent such cases, each particle’s velocity could be limited to a range [−Vmax,Vmax].

Particle swarm optimization algorithms have a simple structure, are easy to implement and have a high computational efficiency. In the basic particle swarm optimization algorithm, each particle in the n-dimensional search space is assigned randomly generated position and velocity vectors. A fitness value according to the chosen fitness function

is assigned to each particle according to their initial positions in the search space.

During search, each particle’s fitness value is compared with the best fitness value

achieved until that instant (pbest); the better value is assigned as the best fitness value achieved until that instant, and its position is recorded as p^i(t). If all the particles are connected this is the global best; otherwise it is the neighborhood best. A better value is assigned as the global best fitness value and the corresponding position is g^i(t). After determining the personal best and neighborhood (global) best position vectors, each particle updates its position and velocity vectors using (1.1). This process continues iteratively until a predefined stopping criterion is reached, which determines the desired performance aspects of the algorithm.

The particle swarm optimization algorithm, like genetic algorithms, simulated annealing and hill climbing, is randomly initialized, and the members of the population interact with each other. Also, particle swarm optimization can converge to possible solutions faster than the other algorithms, but an incorrect fine tuning of the algorithm parameters could result in slower convergence [1, 2].

As mentioned before, the first equation in (1.1) determines the particle's velocity within the search space and is divided into a momentum part, a cognitive part and a social part. The balance between these three parts determines the method's global and local search capabilities. The uniform n-dimensional random vectors ϕ1 and ϕ2, which are

referred to as cognitive and social learning coefficients, respectively, greatly influence

the particles’ local and global search capabilities. Increasing the value of cognitive

learning coefficient (ϕ1) results in an increase of the local search capability, while

an increase of the social learning coefficient (ϕ2) results in an increase of the global

search capability [3]. The most significant disadvantage of these random coefficients is that the method could exhibit "explosive" behavior. Even though the randomness increases the method's search capability, it is possible that the particles attain undesired velocity values due to this randomness. As a result, the particles could move through the search space with high velocities, which may not let them converge to a common point in the search space. Due to this fact, a constant velocity bound Vmax is used to prevent this situation, as mentioned before.

According to some studies, dynamically changing the Vmax value could result in better performance [4].

1.1.1 Particle Swarm Optimization with Constriction Factor

A constriction factor has been proposed in some works to ensure convergence of the particle swarm optimization method [5, 6]. With this new parameter, the method's dynamic equations become

    v^i(t+1) = χ[ v^i(t) + ϕ1^i(t)(p^i(t) − x^i(t)) + ϕ2^i(t)(g^i(t) − x^i(t)) ],      (1.2)
    x^i(t+1) = x^i(t) + v^i(t+1),

where χ is the constriction factor. The constriction factor is defined as a function of the cognitive and social learning coefficients ϕ1 and ϕ2 as

    χ = 2κ / ( ϕ − 2 + √(ϕ² − 4ϕ) )   if ϕ > 4,                                         (1.3)
    χ = √κ                            otherwise,

where ϕ = ϕ1 + ϕ2 and κ ∈ [0, 1].

In (1.2), if the inertia weight parameter is equal to the constriction factor, if the learning coefficients ϕ1 and ϕ2 are chosen such that ϕ1 + ϕ2 = ϕ, and if ϕ > 4 is satisfied, the method with the inertia weight parameter is equivalent to the method with the constriction factor [6]. In [7] the authors compared the method with the inertia weight parameter and with the constriction factor and provided some guidelines for selecting the parameters in order to increase the method's performance. Carlisle and Dozier considered (1.2) and determined the factors that affect the method's performance, such as the size of the swarm, the size of the neighborhood, the ratio of the cognitive and social learning coefficients and the velocity bound Vmax. They observed that these factors fall in different ranges for different fitness functions, and showed that certain values of the parameters may be advantageous in some problems.

The constriction factor considered in [6] is usually calculated by taking the upper limit of the learning coefficient ϕ as 4.1. The cognitive and social learning coefficients are taken as uniform n-dimensional random vectors ϕ1^i(t) ∈ [0, 2.05]^n and ϕ2^i(t) ∈ [0, 2.05]^n. With these values and κ = 1, the constriction factor χ is calculated as 0.7298. The particle swarm optimization algorithm is a nonlinear algorithm. For this reason it is thought that a dynamic change of the algorithm's parameters could increase the performance.
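As a quick numerical check of the value quoted above, (1.3) can be evaluated directly; the short MATLAB fragment below uses ϕ = 4.1 and κ = 1 from the text and is only an illustration.

    % Constriction factor computed from (1.3); values taken from the text.
    kappa = 1;  phi = 4.1;            % phi = phi1 + phi2 with phi1 = phi2 = 2.05
    if phi > 4
        chi = 2*kappa / (phi - 2 + sqrt(phi^2 - 4*phi));
    else
        chi = sqrt(kappa);
    end
    disp(chi)                         % prints approximately 0.7298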

The studies mentioned above focus on determining which parameters would increase the performance of the particle swarm optimization method and present some guidelines for the parameter adjustment. Depending on the problem and the function to be optimized, different parameter adjustments are considered. On the other hand, no generalization can be made; it is noted that different parameter values yield optimum performance for different problems, and so the parameter adjustment is left to the user.

1.2 Hybrid Particle Swarm Optimization Algorithms

Hybrid particle swarm optimization algorithms have been studied, in which the particle swarm optimization algorithm is combined with other computational techniques. The hypothesis in [8] was that a hybrid PSO has the potential to reach a better optimal solution than the standard

PSO. The operators used in the computation methods such as selection, crossover and mutation are widely used with the method. With the selection operator only the particles having the best fitness value are passed to the next generation to increase the chance of finding global optimum points [2]. The crossover operator could be considered as the communication among the particles, where the particles share their information with each other, so that the particles could search different regions in the search space [9]. The most widely used operator is the mutation operator, since it is easy to incorporate with the method. Also this operator increases the diversity of the particles in the search space, which could prevent the particles from getting trapped into local minima [10]. The mutation operator is used for mutating method’s parameters, like the constriction factor (χ), the cognitive and social learning coefficients (ϕ1 and ϕ2) and the inertia weight parameter (w) in some studies [11, 8].

In [8], the authors used the mutation operator to mutate the inertia weight parameter, in order to prevent the particles from clustering in the search space (collision of the particles) and distribute them in the search space. A similar philosophy was used in [12], where the authors have proposed a method to increase the particle diversity, but without using the mutation operator. They compare the difference between the particle’s current fitness value and the best fitness value achieved until that instant, and determine a relocation condition for the particle. With this relocation condition the objective is to prevent the particles converging to local minima points in the search

space. There are also other studies where various methods are proposed to prevent particle collisions (the particles whose search regions are close to each other) [13, 14].

The studies show that in some cases the mutation operator increases the performance of the method drastically [15]. Esquivel and Coello proposed a nonlinear mutation operator for mutating the particles' position information [16]. They noted that the mutation operator increases the particle diversity, and thus the performance of the method. Higashi and Iba used a Gaussian mutation operator in the equations by which the particles update their position and velocity. They concluded that the proposed method performs better than the nominal PSO method and genetic algorithms.

The particle swarm optimization method has also been combined with other computation methods. Løvbjerg studied the idea of applying the particle swarm optimization, genetic algorithm or hill climbing algorithm to every sub-swarm in the search space and proposed a stochastic search method [17]. The change among the applied algorithms is performed by considering the fitness value achieved by a sub-swarm. If a certain algorithm cannot reach a better fitness value within a certain number of iterations, then the sub-swarm is switched to the next algorithm. Since particle swarm optimization has greater global search capability than the other two algorithms, the PSO is used

first. Hendtlass and Randall have used the ant colony optimization algorithm along with the particle swarm optimization method [18].

Some studies have also proposed hybrid particle swarm optimization algorithms that use non-evolutionary methods. Van den Bergh and Engelbrecht developed a cooperative particle swarm optimizer [19]. The philosophy of cooperation among the individuals is adopted, instead of competition among them. In [5], Clerc developed

“re-hope” and “no-hope” conditions in order to increase the method’s performance.

A global neighborhood topology is used and the particles are expected to converge to an optimum point in the search space. If this does not happen, “re-hope” is invoked and the particles are re-initialized close to the neighborhood’s best position. However, if the number of re-initializations exceeds a predefined number and the particles are still searching in a small region far away from an optimum point, the “no-hope” condition is invoked and the particles are not re-initialized at that instant. In contrast to the above approach, Riget and Vesterstørm developed an “Attractive-Repulsive PSO” method [20]. In order to increase the diversity in the swarm, threshold values for low and high diversity conditions are determined. If the threshold values are exceeded, the particles are either attracted to or repulsed from the best fitness value (pbest) in the search space. For attraction, the velocity update equation of the basic particle swarm optimization algorithm is used, while for repulsion the sign of the velocity update equation is changed.

With a similar philosophy, in [21] Parsopoulos and Vrahatis developed the deflection, stretching and repulsion methods in order to prevent the particles from converging to local minima and to continue searching for the global minimum. In this way, the method's capability to find the global minima is increased. In [22], the authors present the particles in the swarm as a dynamic hierarchy in a uniform tree structure.

The particles move vertically in the hierarchy according to their best fitness values, so that the particles having the best fitness values are located at the top of the hierarchy, and these particles have more influence on the velocity updates of the particles in the swarm. Monson and Seppi incorporated Kalman filtering with the PSO method in their study [23]. They used Kalman filtering for determining the velocity vector of

the particles, instead of using the dynamic update equations of the method. They claim that with this method the particles can perform a detailed search in a specific area, and the method's fast convergence property to better points in the search space can be preserved as well.

The studies mentioned above deal with the problem of premature convergence

(converging to local minima points) by considering hybrid algorithms and different forms of the PSO method. In order to minimize this problem, it is noted that the diversity of the swarm can be increased. However, increasing the diversity may lead to more search time and not necessarily better results. On the other hand, no generalization can be made as to whether the PSO method displays better or worse performance than genetic algorithms, etc. In [24], Vesterstørm and Thomsen compared the differential evolution algorithm, the particle swarm optimization method and the genetic algorithm by testing them on different benchmark functions. For some functions the particle swarm optimization and differential evolution algorithms show better performance, but the genetic algorithm performs better on the functions with added noise.

1.3 Parallel and Distributed Implementation

Parallel computing is based on partitioning a large problem into smaller problems.

These smaller problems are distributed among the processor units and solved at the same time. Parallel processing is widely used in the high-performance computing discipline, since it drastically increases computational capacity despite processor clock-frequency and other physical limitations.

Developing appropriate computer code for parallel computation is a harder task than developing code for a traditional and well known sequential computation. The

Programming errors, race conditions and communication problems among different processors are widely observed problems, since the task is divided into smaller sub-tasks, each of which is solved by a processor. Despite these issues, parallel computational methods have been developed for solving complex engineering and scientific problems.

Particle swarm optimization is also considered in that sense and many researchers have worked on the parallel and distributed forms of the method.

The particle swarm optimization algorithm is not a time consuming algorithm and can easily be parallelized. On the other hand, the parallel version of the algorithm suffers from some problems such as communication problems, race conditions among the processors, and the decrease of parallel efficiency in some cases. Further research is required to solve these shortcomings.

1.4 Multi Objective Optimization

In multi-objective optimization problems more than one objective function is optimized simultaneously. Most of the time, a single common solution cannot be found for all the objective functions in the search space; instead, a set of best points can be achieved. This set of points is referred to as the Pareto solution set. The values of the Pareto optimal points form the Pareto front in the objective space. Solving this kind of problem requires that points in the objective space converge to the Pareto front and cover the front.

In [25] two objective functions to be optimized are determined and the neighborhood of each particle changes dynamically at every iteration. The first function is used as a reference to determine the neighborhoods in the objective space with respect to the distance between the function values. According to this, the m nearest particles are considered as a neighborhood. From the determined neighborhood the particles' fitness values are calculated according to the second function and the local best value is determined. Later, in [26], the particles are allowed to use the points and values on the Pareto front by extending their memory, so that they guide their search by considering these points.

Multi-objective optimization is useful for optimizing two or more functions at the same time. Different programming techniques have been developed to solve multi-objective optimization problems [27], but there are certain limitations in these techniques as they generate a single solution per run [28]. This initiated the development of other approaches. Compared to traditional mathematical programming techniques, evolutionary algorithms are found to be suitable as they are population based and manage a set of solutions at a time, instead of only one. The set of solutions that cannot be improved in any one objective without degrading another is said to be the Pareto-optimal set. So, multi-objective optimization can be viewed as searching for the Pareto optimal solutions [28].

The multi-objective optimization problem can be mathematically defined as [28]:

Minimize over x:

    F(x) = [F1(x), F2(x), ..., Fk(x)]^T                                                 (1.4)

subject to
    gj(x) ≤ 0, j = 1, 2, ..., m,
and
    hl(x) = 0, l = 1, 2, ..., e,

where k is the number of objective functions, m is the number of inequality

constraints, and e is the number of equality constraints.

PSO converges to a global solution rather than a single solution, whereas a multi-objective optimization has a set of solution points. So, multi-objective optimization problems that involve global optimization have to work with solutions that are globally Pareto optimal, not just locally Pareto optimal [29]. In population-based algorithms, the population represents a group of potential solution points and a generation represents an algorithmic iteration [29]. The points to be taken into consideration while developing an algorithm for multi-objective problems are how to evaluate fitness, how to determine which potential solution points are to be passed to the next generation, and finally how to incorporate the idea of Pareto optimality. The authors describe in their paper [29] different techniques that address these issues.

1.5 Stability and Convergence Analysis

The stability of a system can be analyzed by means of the Lyapunov stability theorem. By considering the Lyapunov theorem, the stability of a PSO algorithm is discussed in [30]. The authors of [30] conclude from an experimental result that the algorithm is stable for certain values of the attraction coefficient, which is the combination of the local and global attraction coefficients. The stability analysis of the PSO is treated with a few illustrative examples in [31]. The theoretical analysis of convergence and stability was first given by Ender and Mohan in 1999. In [32] the analysis of stability and convergence is discussed mathematically; the authors show, using Banach space arguments and the contraction principle, that the particles converge.

1.6 Application Areas

1.6.1 Neural Network Training

Particle swarm optimization method is used for determining the neural network structure and the weight coefficients between the connections in the network [33].

When the particle swarm optimization algorithm is used, carefully adjusting the algorithm's parameters is sufficient for neural network training.

In some studies the back propagation algorithm (which is usually used for neural network training) and the particle swarm optimization algorithm are compared, and it is pointed out that the particle swarm optimization algorithm provides faster convergence of the error to zero for linear and non-differentiable functions than the back propagation algorithm [33, 34]. Van den Bergh and Engelbrecht used the

“Cooperative particle swarm optimization” algorithm to train neural networks. Input vectors to the network are divided into sub-vectors, and each sub-vector is optimized by its own swarm in the search space [35]. In [36] the authors used the PSO method to develop an adaptable training algorithm, where the network could adapt to changes during the operation. The developed algorithm provides an adaptable neural network training, which could be considered as online training. On the other hand, the authors noted that the adaptation process is slow and may not be appropriate for real time applications.

1.6.2 Dynamic Tracking

Dynamic tracking problems are difficult to solve for computational algorithms.

Since the function being optimized changes with respect to time, the best solution at

an instant may not be the best solution at another instant. In the literature, some studies use particle swarm optimization algorithms to solve dynamic tracking problems.

Parsopoulos and Vrahatis used the particle swarm optimization algorithm in cases where a transformation matrix is used to describe the fitness function's change with respect to time, and a Gaussian distributed random term is added to the fitness function [37]. They noted that the method is robust to noise, but should be tested with other real-time and dynamic landscapes. In [38], once the function being optimized changes at an instant, the best solution until that instant is set to zero and recalculated again. However, such a solution is only suitable for functions that change slowly with respect to time. Following a similar philosophy, in [39] a method is presented that observes changes in the function being optimized and, when a change happens, randomly re-initializes the particles in the search space and continues the optimization process.

1.6.3 Multi-Agent Search

In recent years multi-agent search applications have attracted much attention.

In hazardous landscapes, it is difficult for humans to deploy and search for desired targets. Equipping robots with the necessary hardware and developing efficient search algorithms would result in a better search and a shorter search time.

There are advantages to perform the search with a decentralized multi-agent system rather than to perform with a single agent or with several robots where there is a master robot. The robustness to failures and flexibility are two of these advantages [40]. In a multi-robot case, when a robot performs a wrong operation,

or when a robot is out of order or has communication problems, then the other robots in the system can perform the desired operations. On the other hand, the

flexibility property allows the robots to reorganize and to perform different tasks at different instants. For instance, during the search task the robots might concentrate on a smaller area and perform a detailed search, but at another instant the robots might concentrate on a larger area and perform a rough search, or perform different tasks by communicating with each other. Besides these advantages, by designing the control methods appropriately the system becomes scalable, so that it works regardless of whether the number of robots is increased or decreased. Also, each robot can perform similar operations. These facts show that a multi-agent system could be used effectively in search tasks.

In the search tasks of multi-agent systems, each agent should perform a search task and determine its movement in the search area. These facts put some requirements on the search algorithm to be used [41]. The algorithm should be distributable, since every agent in the system performs its operations by considering the search algorithm. Secondly, the algorithm should not be time consuming. The algorithm should also require only a minimal amount of communication among the agents in the system, in order to preserve the scalability property of the multi-agent system. Finally, the algorithm should be suitable for continuous movements of the agents in the system. Even though computational algorithms perform well in simulations of the multi-agent search task, in real applications agents work in continuous time, which makes it hard to use such algorithms, as they work in a discrete time space. Particle swarm optimization has basic rules and it is easy to implement them in the multi-agent

search task. This fact is noticed by many researchers, and in some studies the PSO algorithm is used in multi-agent search tasks.

Venayagamoorthy and colleagues studied a multi-robot search application using the particle swarm optimization, involving one or more targets [42]. They proposed a two-level hierarchical search algorithm. The inner part is used for adjusting the method’s parameter (effectively the search parameters) like the inertia weight parameter

(w) and the learning coefficients (ϕ̄1 and ϕ̄2). The outer part, on the other hand, is used for determining the position and velocity vectors. In the multiple-target case, once a target is found by a robot, the robots communicate with each other to decide whether to converge to this target or to look for another target that radiates a stronger intensity, depending on the intensities the robots have sensed.

Hereford [41, 43] considered each agent of a multi-agent system as a particle in a swarm, such that each agent in the system performs the PSO iteration. Every agent measures its best fitness value achieved until that time (pbest) and compares this fitness value with the best fitness value of the neighborhood achieved until that time (gbest). If a fitness value better than the gbest value is found, then this value is assigned as the gbest value and shared with the other agents in the system. The agents communicate with each other only if they find a fitness value better than the current gbest. Therefore, the communication between the agents is kept to a minimum.

Pugh and Martinoli worked on adapting the particle swarm optimization algorithm to multi-agent search applications [44]. They used the e-puck robots [45], and the distance to the target is determined as the average of the strongest signal strength detected by a robot in the system and the signal strengths determined by the other robots in the system. Two models are used: a realistic model where the sensor

errors are considered, and a simple model where the errors are not considered. They examined the two techniques for adapting the multi-agent search application to the

PSO. They considered two cases in which: (i) the agents know their global positions and; (ii) the agents rely on their local knowledge. For the robot communication problems and communication sufficiency, the authors proposed that each robot has a definite communication radius and a dynamic structure that evolves, as different robots are in the communication radius of a robot at different instants. In [46],

Pugh and his colleagues considered the communication structures for the multi-agent systems during the search tasks and suggested two neighborhood structures. The

first one is a circle structure in which a robot could communicate with other robots located at its right and left sides, and the second one is a structure in which each robot has a communication range and could only communicate with the robots in this range.

Marques and his colleagues presented a PSO inspired search method in order to detect the odor sources across large search spaces [47]. They formulated the odor

finding problem and developed a model for the instantaneous odor concentration at ground level. Moreover, they compared the PSO-inspired search method with other gradient-related methods and observed that the PSO-inspired search method is more successful in an unstable environment with high turbulence than the other search methods. In [48, 49] the authors developed a particle swarm optimization search method that is appropriate for finding odor sources (targets) in a dynamically changing environment. When changes occur in the environment, the particle diversity needed to detect these changes is provided by the charged PSO method. Inspired by Coulomb's law, the particle diversity is

achieved by considering some particles as charged and others as neutral, so that they attract or repel each other. In [50] the authors developed a two-level hierarchy method with local and global search. If the target's signal cannot be recognized by the robots, then the robots try to perceive the target's signal by performing a local search. Once the robots acquire signals from the target, they switch to a global search. Also, it is noted that the robots have a limited communication range and communicate only with the other robots within that range.

1.6.4 Wireless-Sensor Networks

A wireless sensor network (WSN) is a network of tiny, inexpensive autonomous nodes that can acquire, process and transmit sensory data wirelessly [51]. The

WSNs face challenges like link failures, limited energy, and memory and computational constraints. The particle swarm optimization algorithm has been applied to solve optimization problems in wireless sensor networks. WSNs have problems in determining the positions of sensor nodes, which affects the desired coverage, connectivity, and energy efficiency [52]. When WSNs are optimized, they assure adequate quality of service, long lifetime and financial economy. Traditional analytical optimization techniques need a lot of computation and yet may not be fully efficient.

The ease of use, the high efficiency and the speed of convergence are the merits of

PSO. The authors in [51] envisioned that PSO will continue to emerge as an efficient optimization technique in the field of WSNs.

1.6.5 Optimal Design of Power Grids

Micro grids are small electrical distribution systems that connect multiple consumers to multiple distributed systems. Micro grids can be operated in autonomous mode and in grid-connected mode. Due to the dynamic nature of the distribution network, they face challenges related to stability and control. PSO has been applied to the control problem, which is formulated as an optimization problem. The PSO has been applied to many power system problems along with other computational intelligence algorithms, like genetic algorithms (GA), and is found to be a good technique due to its simplicity, efficiency and robustness. The PSO is believed to have a well balanced mechanism when compared to the other techniques. There are several issues in planning a micro grid, such as the size, location and optimal design of the different controllers, and PSO is applied to solve these issues at an early stage. A code has been developed in [53] for the optimization, linearization, and non-linear time-domain simulation. A new technique for stability enhancement of a micro grid operating in both autonomous and grid-connected modes is developed in [53]. The PSO-based approach has been implemented with MATLAB code in [53]. The parameters with the strongest effect on the

PSO's performance in [53] were found to be the initial inertia weight and the maximum allowable velocity. The results in [53] confirm the effectiveness of the PSO-based approach for optimizing the parameters.

1.6.6 PSO for Multi User Detection in CDMA

CDMA stands for Code Division Multiple Access. It is a channel access method used by radio communication channels. There is a chance of poor reception quality if the multiple access interference (MAI) is not alleviated properly.

In addition to this, there is the near-far effect. An optimal multi-user detector

(OMUD) has been proposed to reduce the effect of the MAI. But as the number of users increases, the complexity of the OMUD grows exponentially. In [54], a

MUD detector based on PSO is proposed. Initially PSO is compared with a genetic algorithm (GA), and it is found that all the particles in the PSO tend to converge to the best solution at a fast rate and this is met with less computational complexity.

The authors in [54] found that PSO is resistant to being trapped in local optima. The

PSO algorithm that is proposed in [54] is known as PS-MUD. The PS-MUD in [54] outperforms the OMUD and is believed to be less complex.

CHAPTER 2

NEIGHBORHOOD TOPOLOGIES

Neighborhood topologies in the particle swarm optimization method determine the sufficiency and frequency of information flow, and the computational cost of communication among the particles. Due to this fact, in population-based methods like particle swarm optimization, the connections among the population members, the clustering of the population members and the distance between the population members become significant factors affecting performance [55].

In [56, 57, 58, 59, 60], Gazi defined the neighborhood topologies mathematically by exploiting graph theory. Each particle in the swarm is defined as a node, and communication lines in the swarm are defined as edges, i.e., a directed arrow in the graph. Edges in the graph determine whether communication among the particles is bi-directional (both particles can communicate with each other) or uni-directional

(only one particle communicates with the other one). In the basic particle swarm optimization algorithm each particle is a neighbor of every other particle. Two widely used neighborhood topologies are the global neighborhood and the local neighborhood.

In the global neighborhood topology all the particles are neighbors of each other, while in the local neighborhood topology some particles are neighbors of some other particles. In [60] the authors considered the neighborhood topology of the agents as a connected graph. In other words, every agent is not necessarily a neighbor of every other agent, but there are intermediate agents forming a path from every agent to every other agent.

In the global and local neighborhood topologies, each particle's velocity is adjusted dynamically by considering its own best fitness value and the best fitness value of the particles in its neighborhood (all of the particles, in the global case). The neighborhood relations are usually determined by considering the nearest particles in the neighborhood structures. Studies on neighborhood topologies have stated that the global neighborhood topology tends to converge faster to a point in the search space, but the risk of converging to local minima is high. Meanwhile, the local neighborhood topologies tend to converge more slowly but have a better chance of converging to the global minimum or to better points in the search space [61].

2.1 Static Neighborhood

In most PSO implementations the particle neighborhoods are fixed throughout the optimization process. In [61] Kennedy studied topologies such as the circle topology (a local neighborhood topology), where each particle has two neighbors located at its imaginary right and left sides; the wheel topology (a local neighborhood topology), where one particle is a neighbor of all the particles while the rest of the particles are connected only to that particle; the star topology (a global neighborhood topology), where each particle is a neighbor of every other particle; and the random neighborhood topology. A representation [62] of the different types of static neighborhood topologies is shown in Figure 2.1. It is shown that the method's performance depends strongly on the relationship between the function to be optimized and the neighborhood topology. The local neighborhood topology yields better results in the optimization of multi-modal functions, while the global neighborhood topology yields better results in the optimization of uni-modal functions.

Figure 2.1: Static Neighborhood Topologies.

Kennedy and Mendes used the uniform neighborhood topologies such as global topology, local topology, pyramid topology, star topology, Von Neumann topology

(square structured topology) and the random neighborhood topology [63]. If topologies having more connections among the particles could be initialized at regions close to better points, then the particles could converge to the desired points quickly.

Otherwise, the particles can converge to local minima points at a fast rate. They also reported that the Von Neumann neighborhood topology could yield better results than the other uniform topologies and recommended this topology for the method.

In [64] the authors considered a global neighborhood topology, where each particle is the neighbor of the other particles and determined a new learning coefficient ϕ, which is composed of the cognitive and social learning coefficients ϕ1 and ϕ2 in the basic PSO algorithm and can be updated in the equations. According to the authors it is not correct to assume that a particle having best fitness value in the neighborhood

would find a better region in the search space than other particles with worse fitness values. For this reason, in the developed algorithm they assumed that all the particles in the swarm have a direct influence on a particle's velocity update. The direct influence of a particle depends on its fitness value and the size of the swarm, and a weight parameter is proposed to determine this influence.
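As an illustration of the static topologies discussed above, the following MATLAB fragment builds two of them as adjacency matrices, where an entry of 1 in row i, column j means that particle j is a neighbor of particle i. The swarm size and variable names are assumptions made for this sketch, not values used in the experiments of this thesis.

    % Illustrative static topologies expressed as adjacency matrices.
    N = 8;                                   % swarm size (example value)
    A_star = ones(N) - eye(N);               % star/global topology: all particles are neighbors
    A_circle = zeros(N);                     % circle topology: right and left neighbors only
    for i = 1:N
        A_circle(i, mod(i, N) + 1)     = 1;  % right neighbor on the ring
        A_circle(i, mod(i - 2, N) + 1) = 1;  % left neighbor on the ring
    end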

2.2 Dynamic Neighborhood

The neighborhood topologies that change dynamically with time are also incorporated with the particle swarm optimization method and several studies are conducted about this topic [65]. The PSO method with dynamic neighborhood topology requires more computation time and power than the static neighborhood topologies. The reason for this situation is that at every iteration the neighborhood relations are changing.

A local neighborhood topology is adopted at the beginning of the search, and at later stages the neighborhood is extended and becomes a global one. The particle neighborhoods are defined as the particles located at the top and bottom and those located close to a particle in the search space. Later, in [25], the authors adopted the idea of dividing the swarm into sub-swarms and then letting these sub-swarms search the search space. After the sub-swarms search the search space, they come together to form the swarm again and exchange information among themselves. The search continues as the swarm is again divided into sub-swarms and the new sub-swarms search the space by considering the recently obtained information. It is noted that the proposed method shows better performance on multi-modal functions than the traditional PSO method.

Defining the neighborhoods dynamically means that some edges are disconnected from one of their end points and attached elsewhere. Thus the particle neighborhoods change dynamically during the search.

In [56, 57, 58, 59, 60], Gazi studied the dynamic neighborhood topology and proposed methods to determine the neighborhood topologies dynamically. The neighborhood topologies are treated as graphs that change with time, and the neighborhood relations are determined using the nearest neighbors in the search space, the nearest neighbors in the function space, or random determination. It is assumed that each particle has a perception area and that the neighborhood relations may be bi-directional or uni-directional.

In a dynamic neighborhood the neighbors of a particle are themselves assumed to change with time. Every particle communicates with the other particles, i.e. the neighbors, in a subset that changes with time. Three different methods are used for determining the particle neighborhoods [paper]. The results are obtained by investigating the performance of the PSO algorithm with each of the three methods.

2.2.1 Nearest Neighbors in Search Space

In this method the nearest neighbors are determined based on the distance between the particles in the search space. If particle i has radius δ_i, that radius determines its range of communication with other particles. The neighborhood of particle i at time t is determined by

ℵ_i(t) = { j ≠ i : ‖x_i(t) − x_j(t)‖ ≤ δ_i }.

Here ℵ_i(t) denotes the set of neighbors of particle i at time t, and it is assumed that δ_i > 0.

Figure 2.2: Nearest neighbors in search space.

When the perception areas are equal (δ_i = δ for every i), the neighborhood relations become reciprocal.
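A minimal Python sketch of this rule is given below; it assumes the particle positions are stored in an N-by-n array X and the radii in a vector delta, and is only an illustration (the thesis implementation itself is in Matlab).

import numpy as np

def search_space_neighbors(X, delta):
    """Neighbors of each particle: j is a neighbor of i when ||x_i - x_j|| <= delta_i."""
    N = X.shape[0]
    neighbors = []
    for i in range(N):
        dist = np.linalg.norm(X - X[i], axis=1)        # Euclidean distances from particle i
        neighbors.append([j for j in range(N) if j != i and dist[j] <= delta[i]])
    return neighbors

# With equal radii the relation is reciprocal: if j sees i, then i sees j.
X = np.random.uniform(-100.0, 100.0, size=(5, 2))
print(search_space_neighbors(X, delta=np.full(5, 50.0)))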

2.2.2 Nearest Neighbors in Function Space

In the second method the neighborhood is based on the distances in the function space and is determined by

ℵ_i(t) = { j ≠ i : ‖f(x_i(t)) − f(x_j(t))‖ ≤ δ_i }.

Here f(·) is the function being optimized, so the distance between two particles is the difference between their function values. If δ_i = δ for all i, the neighborhoods become reciprocal.
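The corresponding sketch for the function-space rule only changes the quantity being compared; here it is assumed that the objective values f(x_i(t)) have already been evaluated and stored in a vector fvals.

import numpy as np

def function_space_neighbors(fvals, delta):
    """j is a neighbor of i when |f(x_i) - f(x_j)| <= delta_i (distances in function space)."""
    fvals = np.asarray(fvals, dtype=float)
    return [[j for j in range(fvals.size)
             if j != i and abs(fvals[i] - fvals[j]) <= delta[i]]
            for i in range(fvals.size)]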

2.2.3 Random Neighborhood

The next method for determining the neighbors of a particle dynamically is random determination. A random number ε_ij is generated in the range [0, 1] for every ordered pair of particles at every time instant t, before the PSO iteration.

Figure 2.3: Nearest neighbors in function space.

The number of random numbers generated at each step is 2 · C(N, 2) = N(N − 1), where N represents the number of particles. The neighborhood of particle i is then determined according to

ℵ_i(t) = { j ≠ i : ε_ij ≤ ε },

where ε ∈ [0, 1] is the threshold that sets the probability of two particles being neighbors. It can be understood from this definition that particle i obtains information from particle j at time t whenever ε_ij ≤ ε. The neighborhoods are not reciprocal, since ε_ij and ε_ji are generated independently; for the relation to hold in the opposite direction, the condition ε_ji ≤ ε must also be satisfied. If the neighborhoods are required to be reciprocal, only C(N, 2) = N(N − 1)/2 random numbers need to be generated.

The neighborhood topologies used with these methods have both advantages and disadvantages for the method's performance. For this reason it is important to choose the neighborhood topology carefully, in addition to characterizing the problem to be optimized.
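A sketch of the random rule, under the assumption that one ε_ij is drawn per ordered pair before each iteration and compared against a single threshold ε:

import numpy as np

def random_neighbors(N, eps, rng=np.random.default_rng()):
    """j is a neighbor of i when epsilon_ij <= eps; epsilon_ij and epsilon_ji are independent."""
    E = rng.uniform(0.0, 1.0, size=(N, N))             # one epsilon_ij per ordered pair
    return [[j for j in range(N) if j != i and E[i, j] <= eps] for i in range(N)]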

2.3 Synchronous and Asynchronous PSO

The complexity of optimization problems has led to the study of parallel optimization algorithms. Because of its global search capability, PSO was found to be useful for such complex problems. Advances in computer and network technologies have also contributed to the development of parallel optimization algorithms, but most of them are synchronous in nature. A parallel implementation makes it easier to treat the particles in a swarm as independent of one another.

2.3.1 Synchronous PSO

As mentioned above, there are different implementations of the PSO algorithm. Implementing PSO in a parallel manner addresses the limitations of a sequential implementation. Regardless of the strategy used for determining neighbors, the common assumption in the PSO algorithm is that the particles have access to the current information of their neighbors [59]. Synchronous PSO is a simple design iteration in which all the particles are assumed to move at the same time and are continuously updated about the positions of their neighbors. The drawback of the synchronous implementation is that all the processors need to keep working until the end of each iteration.

Moreover, the synchronous implementation is not ideal and results in poor parallel efficiency [66]. To overcome this problem we turn to asynchronous PSO.
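A minimal sketch of one synchronous iteration is shown below; the inertia and learning coefficients are placeholder values, not the settings used in the thesis. All velocities and positions are computed from the information available at the start of the iteration, and the personal bests are refreshed only after every particle has moved.

import numpy as np

def synchronous_step(f, X, V, P, neighbors, w=0.7, c1=1.5, c2=1.5,
                     rng=np.random.default_rng()):
    """X, V, P are (N, n) arrays of positions, velocities and personal bests."""
    N, n = X.shape
    # Neighborhood bests g_i, frozen for the whole iteration.
    G = np.array([P[min(nbrs + [i], key=lambda k: f(P[k]))]
                  for i, nbrs in enumerate(neighbors)])
    r1, r2 = rng.random((N, n)), rng.random((N, n))
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (G - X)  # all velocities updated together
    X = X + V                                          # all particles move "at the same time"
    for i in range(N):                                 # bests updated only at iteration end
        if f(X[i]) < f(P[i]):
            P[i] = X[i]
    return X, V, P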

2.3.2 Asynchronous PSO

There are three kinds of asynchronous PSO implementations: sequential asynchronous PSO, centralized parallel asynchronous PSO and decentralized asynchronous PSO. If the next iterations are carried out in a sequential manner without waiting for the current design iteration to finish [67], the implementation is said to be sequential asynchronous. This keeps the algorithm moving to the next iteration instead of leaving processors idle. The basic rule is to separate the updates associated with each individual particle from those associated with the swarm as a whole. Compared to the sequential synchronous implementation, here each particle updates the neighborhood best g_i(t) before updating its own estimate: the positions are updated as soon as each particle completes its function evaluation and the swarm information is updated at the end, whereas in a synchronous PSO all the updates are done at the end of the iteration. The particles update their positions using information from the particles that have already completed their iteration. The implementation is still not truly asynchronous, because a particle early in the sequence uses more information from the previous iteration than from the current iteration, while a particle performing a late update gets more information from the current iteration, and vice versa. This means that the value of the neighborhood best g_i(t) is a mixture of information from the previous and current iterations, and the particles cannot perform iterations independently of each other [60].

This leads to the centralized parallel asynchronous PSO. In this implementation the global best value is collected and the updates are performed on a master computer, while slave computers carry out the function evaluations in parallel. The slave computers are assigned function evaluations after the master computer provides the necessary information and the values x_i(t). To prevent idle time, the master computer assigns a slave computer a new function evaluation as soon as it has updated that particle's global best g_i(t) and estimate x_i(t). During this process the master computer communicates with only one slave computer at a time. A problem arises, however, if the master computer fails to respond to the slave computers in time. This motivates the decentralized parallel asynchronous PSO, in which no centralization is needed because the update of each particle can be performed on a different computer. According to [60], a decentralized asynchronous PSO is more robust than the centralized PSO, which may be prone to failures, and is more suitable for parallel implementations.

For any PSO algorithm the velocity vector plays a key role. It is updated using the previous velocity vector, the current position vector and the best position found so far by the particle; in addition, the best position of the swarm as a whole is also needed. Asynchronous PSO is therefore similar to synchronous PSO, except that in asynchronous PSO the information is updated after each design point is analyzed.
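For comparison, a sketch of the sequential-asynchronous update is given below (again an illustration with assumed parameter values, not the thesis implementation): each particle recomputes its neighborhood best g_i(t) immediately before it moves and refreshes its own best right after its function evaluation, so particles later in the sweep already see the changes made earlier in the same sweep.

import numpy as np

def asynchronous_step(f, X, V, P, neighbors, w=0.7, c1=1.5, c2=1.5,
                      rng=np.random.default_rng()):
    """One sweep of sequential-asynchronous PSO over all particles."""
    N, n = X.shape
    for i in range(N):                                 # particles handled one after another
        g = P[min(neighbors[i] + [i], key=lambda k: f(P[k]))]  # neighborhood best, current data
        r1, r2 = rng.random(n), rng.random(n)
        V[i] = w * V[i] + c1 * r1 * (P[i] - X[i]) + c2 * r2 * (g - X[i])
        X[i] = X[i] + V[i]
        if f(X[i]) < f(P[i]):                          # best refreshed immediately,
            P[i] = X[i]                                # not deferred to the end of the sweep
    return X, V, P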

CHAPTER 3

RESULTS - I

The contour plots of all six functions are shown in Figure 3.1, with the global minimum of each function indicated in the plots.

In this chapter, the simulation results of the PSO algorithm with dynamic neighborhood, where the neighborhood is determined by each of the three methods, are presented. The results are also tabulated along with each graph.
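As a sketch of how results like Table 3.1 could be produced, the loop below sweeps the neighborhood parameter and records the mean and standard deviation of the final global best over several independent runs; run_pso is a hypothetical helper standing in for one complete PSO run, and the run count is an assumption, not the value used in the thesis.

import numpy as np

def sweep_parameter(run_pso, values, runs=20):
    """For each parameter value, average the final global best over independent runs."""
    rows = []
    for v in values:
        finals = np.array([run_pso(v) for _ in range(runs)])  # one final global best per run
        rows.append((v, finals.mean(), finals.std()))         # value, average, standard deviation
    return rows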

Table 3.1 shows the data obtained for the sphere function using the neighborhood based on the search space. The average value of the global best value of the function, along with the standard deviation of the mean values across different runs, is shown in the table and plotted in Figure 3.2. For relatively small values of delta, the neighborhood around each particle is small and hence comprises relatively few particles. This means that information sharing is restricted to a limited set of neighbors, which limits the convergence of the function value towards the global minimum. This behavior can be seen for a range of values of delta close to zero, over which the mean value of the global best does not change. For values of delta exceeding 200, the size of the neighborhood starts to grow significantly to include more particles, so that information sharing across particles in the search space is easily facilitated. As a result, if one of the particles discovers a point in the

Figure 3.1: Contour plots of all six benchmark functions.

search space which achieves the global minimum, it helps to guide the entire neighborhood in the direction of the minimum. When the neighborhood size gets large, the probability of one of the particles being located close to the global minimum goes up. This in turn improves the probability of locating the global minimum in the search space, leading to the minimum value of the function, as can be seen in Figure 3.2. For values of delta exceeding 400, the sphere function always converges to the global minimum, which is at the origin, with the corresponding function value being zero.

Table 3.1: Results for Neighborhood determination based on nearest neighbors in the search space

δ_i    Sphere Average    Sphere Standard Deviation
0      6.31e+004         1.36e+003
100    6.33e+004         1.39e+003
200    5.99e+004         4.44e+003
220    2.32e+004         4.47e+003
242    6.50e+003         1.87e+003
264    9.21e+002         8.81e+002
286    8.20e+001         2.49e+002
300    9e+000            9e+000
400    1.80e+001         1.4e+001

Table 3.2 shows the data obtained for the Griewank function using the search-space-based neighborhood. Figure 3.3 plots the data in Table 3.2. The algorithm performs well for the Griewank function as well, as evidenced by Figure 3.3. Again, the mean global best converges to zero, which is the global minimum of the Griewank function.

Results for the Rastrigin function based on the search-space neighborhood are given in Table 3.3 and Figure 3.4. Similarly, Table 3.4 and Figure 3.5 give results for the Rosenbrock function. Search-space-based optimization works well in these two cases as well, with the simulation converging to the respective global minimum values.

In my work I implemented two more functions, the Ackley and DejongF4 functions. Figure 3.6 refers to the Ackley function and the results are shown in Table 3.5.

Figure 3.2: Distance between particles in search space against average global value for a Sphere function.

Figure 3.3: Distance between particles in search space against average global value for a Griewank function.

Table 3.2: Results for Neighborhood determination based on nearest neighbors in the search space

δ_i     Griewank Average    Griewank Standard Deviation
0       5.71e+002           1.44e+001
540     5.69e+002           1.11e+001
1080    5.67e+002           1.53e+001
1215    5.04e+002           7.32e+001
1350    1.67e+002           4.00e+001
1485    3.6e+001            1.3e+001
1620    4.6e+000            5.58e+000
1755    1.14e+000           1.01e-001
1890    1.20e+000           1.6e-001
2025    1.16e+000           1.1e-001
2430    1.15e+001           1.4e-001

Table 3.3: Results for Neighborhood determination based on nearest neighbors in the search space

δ_i    Rastrigin Average    Rastrigin Standard Deviation
0      3.27e+002            4.5e+000
9      3.25e+002            3.85e+000
18     3.49e+001            9.81e+000
27     4.10e+001            1.58e+001
36     4.26e+001            1.70e+001
45     4.16e+001            1.8e+001

From Figure 3.6 it can be seen that the function almost converges for delta values greater than 90.

Figure 3.7 shows the behavior of the DejongF4 function for different values of delta in a search-space neighborhood. The function characteristics are similar to those of the sphere, but it converges at a faster rate than the sphere function. For delta values greater than 3 the function converges to the origin, which shows that the coordination between the particles is very good.

Figure 3.4: Distance between particles in search space against average global value for a Rastrigin function.

Figure 3.5: Distance between particles in search space against average global value for a Rosenbrock function.

Table 3.4: Results for Neighborhood determination based on nearest neighbors in the search space

δ_i    Rosenbrock Average    Rosenbrock Standard Deviation
0      8.56e+003             3.07e+002
3.0    8.57e+003             2.58e+002
3.5    8.67e+003             2.35e+002
4.0    8.38e+003             3.56e+002
4.5    3.94e+003             5.88e+002
5.0    1.00e+003             4.01e+002
5.5    1.22e+002             1.32e+002
6.0    1.84e+001             1.1e+000
10.0   1.92e+001             1.9e+000

Figure 3.6: Distance between particles in search space against average global value for an Ackley function.

This trait of the function can also be observed in Table 3.6: for delta values greater than 3 the global best average of the function is zero.

Table 3.5: Results for Neighborhood determination based on nearest neighbors in the search space

δ_i    Ackley Average    Ackley Standard Deviation
0      20.8002           0.0280
14     20.8085           0.0254
28     20.7912           0.0330
42     20.8151           0.0299
56     20.7987           0.0229
70     14.5170           2.3098
84     6.2066            1.5467
98     5.5372            1.1402
112    6.3579            1.0941
126    5.6874            1.5407
140    5.9699            1.4273
154    5.7115            1.1480
168    5.3948            1.2432
182    6.6119            2.0414
196    5.9192            1.1172
210    6.4334            1.2651
224    6.3376            1.6329
238    6.1011            1.1729
252    5.7391            1.3655
266    5.6286            1.1732

The next set of results shows the performance of the optimization algorithm when the neighborhood is defined based on the distance in the function space. Since the overall goal is to minimize the mean value of the global best function value, a function-space-based neighborhood would inherently perform better than a search-space-based neighborhood for a given delta. This is because, if a point in the search space is located such that the function achieves its minimum value, it is able to quickly draw other points towards itself compared to distances defined in the search space. Table 3.7 shows the data obtained for the sphere function and Figure 3.8 gives a pictorial representation. Table 3.8 and Figure 3.9 show the data for the Griewank function under the same conditions with the function-space neighborhood.

Table 3.6: Results for Neighborhood determination based on nearest neighbors in the search space

δ_i    DejongF4 Average    DejongF4 Standard Deviation
0      101.8167            3.4119
1      101.9992            4.5226
2      101.4855            3.7272
3      20.0157             5.7023
4      0.0000              0.0000
5      0.0000              0.0001
6      0.0001              0.0002
7      0.0001              0.0001
8      0.0001              0.0001
9      0.0001              0.0001
10     0.0001              0.0002
11     0.0001              0.0001
12     0.0001              0.0001
13     0.0001              0.0001
14     0.0002              0.0003
15     0.0001              0.0001
16     0.0001              0.0001
17     0.0000              0.0001
18     0.0001              0.0001
19     0.0001              0.0001
20     0.0000              0.0000

Table 3.9 and Figure 3.10 present the data for the Rastrigin function, while Table 3.10 and Figure 3.11 present the data for the Rosenbrock function. In all the cases, it can be seen that convergence to the global minimum is successfully achieved by the optimization algorithm, and the corresponding delta values required are also smaller than those required for the search-space-based neighborhood.

Figure 3.7: Distance between particles in search space against average global value for a DejongF4 function.

Figure 3.8: Distance between particles in function space against average global value for a Sphere function.

Table 3.7: Results for Neighborhood determination based on nearest neighbors in the function space

δ_i    Sphere Average    Sphere Standard Deviation
0      6.31e+004         1.52e+003
20     5.23e+004         2.04e+003
40     3.95e+004         4.03e+003
60     3.02e+004         3.51e+003
80     2.09e+004         4.23e+003
100    1.76e+004         3.02e+003
120    1.43e+004         2.51e+003
140    1.25e+004         3.11e+003
160    1.01e+004         1.71e+003
180    9.16e+003         2.06e+003
200    7.57e+003         1.53e+003

Figure 3.9: Distance between particles in function space against average global value for a Griewank function.

In the function space this function responds well when compared to the search space. The function tends to converge for delta values greater than 50; this behavior can be observed in Table 3.11.

Table 3.8: Results for Neighborhood determination based on nearest neighbors in the function space

δ_i    Griewank Average    Griewank Standard Deviation
0      5.69e+002           1.29e+001
0.2    4.54e+002           2.01e+001
0.4    3.49e+002           3.71e+001
0.6    2.32e+002           4.52e+001
0.8    1.87e+002           3.16e+001
1.0    1.31e+002           2.48e+001
1.2    1.16e+002           1.43e+001
1.4    9.69e+001           2.43e+001
1.6    7.46e+001           2.19e+001
1.8    6.45e+001           2.19e+001
2.0    6.29e+001           1.95e+001

Figure 3.10: Distance between particles in function space against average global value for a Rastrigin function.

Finally, results based on the random neighborhood are presented for the same set of functions considered earlier. In this case, even for seemingly low probabilities, the

Table 3.9: Results for Neighborhood determination based on nearest neighbors in the function space

δ_i       Rastrigin Average    Rastrigin Standard Deviation
0         3.26e+002            2.89e+000
0.0313    2.99e+002            6.99e+000
0.0625    2.59e+002            1.31e+001
0.0938    2.11e+002            1.01e+001
0.1250    1.89e+002            9.19e+000
0.1563    1.69e+002            7.70e+000
0.1875    1.56e+002            5.91e+000
0.2188    1.52e+002            5.87e+000
0.2500    1.46e+002            3.63e+000

Table 3.10: Results for Neighborhood determination based on nearest neighbors in the function space

δ_i    Rosenbrock Average    Rosenbrock Standard Deviation
0      8.69e+003             3.14e+002
3      7.53e+003             3.68e+002
6      6.14e+003             5.97e+002
9      4.89e+003             5.12e+002
12     3.70e+003             6.06e+002
15     3.16e+003             5.65e+002
18     2.66e+003             3.32e+002

convergence happens relatively quickly to the global minimum value for each of the functions. The results can be seen in Figure 3.14 for the sphere function, Figure 3.15 for the Griewank function, Figure 3.16 for the Rastrigin function and finally Figure 3.17 for the Rosenbrock function.

In the random neighborhood the Ackley function, compared to all the other functions, seems to be relatively slow, i.e. it tends to remain constant at global best values less than 4.

Figure 3.11: Distance between particles in function space against average global value for a Rosenbrock function.

Figure 3.12: Distance between particles in function space against average global value for an Ackley function.

Table 3.11: Results for Neighborhood determination based on nearest neighbors in the function space

δ_i    Ackley Average    Ackley Standard Deviation
0      20.8011           0.0286
14     6.0091            1.5484
28     6.0354            1.4377
42     6.5134            1.4194
56     5.4700            1.4457
70     6.9050            1.4057
84     6.3899            1.4443
98     6.3404            1.1135
112    5.9825            1.5130
126    5.8478            0.9679
140    5.5838            1.4174
154    6.0202            1.2228
168    5.2830            0.7869
182    5.6019            1.5726
196    5.5472            1.1594
210    5.8432            1.3762
224    6.1966            1.6107
238    5.4550            1.0891
252    5.7598            1.1718
266    6.0422            1.7545

The function values can be observed in Table 3.17, and the function's behavior is plotted in Figure 3.18.

The DejongF4 function behaves like every other function, i.e. it reaches the global minimum very quickly when compared to the Ackley function. Figure 3.19 shows that the function converges to the global minimum very quickly; the corresponding values are given in Table 3.18.

Figures 3.20 and 3.21 show the normalized plots of all six functions.

Table 3.12: Results for Neighborhood determination based on nearest neighbors in the function space

δ_i    DejongF4 Average    DejongF4 Standard Deviation
0      103.6579            2.9344
1      1.3377              1.0497
2      0.7699              0.5886
3      0.7971              0.4349
4      0.3373              0.4346
5      0.3874              0.4312
6      0.2252              0.2439
7      0.1547              0.2378
8      0.1954              0.4215
9      0.0295              0.1317
10     0.1069              0.2893
11     0.0343              0.1534
12     0.0000              0.0000
13     0.0668              0.2985
14     0.0209              0.0934
15     0.0000              0.0000
16     0.0000              0.0000
17     0.0433              0.1936
18     0.0000              0.0000
19     0.0000              0.0000
20     0.0000              0.0000

Table 3.13: Results for Neighborhood determination based on random neighborhood

ε      Sphere Average    Sphere Standard Deviation
0      6.29e+004         1.28e+003
0.01   4.60e+001         2.50e+001
0.02   4.00e+001         3.10e+001
0.03   1.70e+001         1.60e+001
0.04   1.30e+001         1.20e+001
0.05   1.20e+001         1.20e+001
0.06   1.10e+001         9.00e+000

Figure 3.13: Distance between particles in function space against average global value for a DejongF4 function.

Figure 3.14: Probability of particles being neighbors against mean global best value for a Sphere function.

Table 3.14: Results for Neighborhood determination based on random neighborhood

ε      Griewank Average    Griewank Standard Deviation
0      5.66e+002           1.28e+001
0.01   1.45e+000           3.56e-001
0.02   1.22e+000           1.43e-001
0.03   1.12e+000           1.00e-001
0.04   1.15e+000           2.12e-001
0.05   1.08e+000           1.20e-001
0.06   1.12e+000           2.56e-001

Figure 3.15: Probability of particles being neighbors against mean global best value for a Griewank function.

Table 3.15: Results for Neighborhood determination based on random neighborhood

ε      Rastrigin Average    Rastrigin Standard Deviation
0      3.27e+002            4.19e+000
0.01   3.35e+001            1.15e+001
0.02   2.54e+001            8.68e+000
0.03   2.49e+001            9.05e+000
0.04   2.14e+001            6.09e+000
0.05   2.39e+001            7.21e+000
0.06   2.13e+001            7.69e+001

Figure 3.16: Probability of particles being neighbors against mean global best value for a Rastrigin function.

Table 3.16: Results for Neighborhood determination based on random neighborhood

ε      Rosenbrock Average    Rosenbrock Standard Deviation
0      8.69e+003             3.68e+002
0.01   1.99e+001             2.20e+000
0.02   1.90e+001             1.20e+000
0.03   1.90e+001             1.20e+000
0.04   1.95e+001             1.10e+000
0.05   1.89e+001             9.00e-001
0.06   1.84e+001             1.2e+000

Table 3.17: Results for Neighborhood determination based on random neighborhood

ε      Ackley Average    Ackley Standard Deviation
0      20.8004           0.0242
0.01   3.9898            0.8737
0.02   3.6448            0.7658
0.03   3.3537            0.7507
0.04   3.4740            0.7050
0.05   3.4634            0.6849
0.06   3.6665            0.6852

Figure 3.17: Probability of particles being neighbors against mean global best value for a Rosenbrock function.

Figure 3.18: Probability of particles being neighbors against mean global best value for an Ackley function.

Table 3.18: Results for Neighborhood determination based on random neighborhood

ε      DejongF4 Average    DejongF4 Standard Deviation
0      99.6273             2.8532
0.01   0.0003              0.0005
0.02   0.0000              0.0000
0.03   0.0001              0.0001
0.04   0.0000              0.0001
0.05   0.0000              0.0000
0.06   0.0000              0.0001

Figure 3.19: Probability of particles being neighbors against mean global best value for a DejongF4 function.

Figure 3.20: Distance between particles in search space against average global best value for Synchronous PSO

Figure 3.21: Distance between particles in search space against average global best value for Asynchronous PSO

CHAPTER 4

RESULTS - II

The results in this chapter are for the Asynchronous PSO. The dynamic neighborhood methods are simulated with the Asynchronous PSO to evaluate the performance of the algorithm in its asynchronous form.

Figure 4.1 shows the search-space neighborhood results for the Sphere function. Observing Figure 4.1 carefully, the function converges for delta values of 300 and above, whereas in Figure 3.2 the function converges for delta values less than 300. This delay reflects the asynchronous nature of the PSO algorithm. The function values can be seen in Table 4.1.

The Griewank function is shown in Figure 4.2 and its corresponding function values are given in Table 4.2. We can observe a delay in the convergence of the function due to the asynchronous characteristic of the algorithm.

The Rastrigin function is shown in Figure 4.3 and the corresponding values in Table 4.3. The average global best value is almost the same for delta values greater than 15.

The plot for the Rosenbrock function can be observed in Figure 4.4. There is a delay in convergence, i.e. the global minimum is obtained for delta values of 6 and more.

Table 4.1: Results for Neighborhood determination based on nearest neighbors in the search space

δ_i    Sphere Average    Sphere Standard Deviation
0      6.3316e+004       0.1444e+004
100    6.3667e+004       0.1410e+004
200    6.0741e+004       0.3857e+004
300    0.0240e+004       0.0107e+004
400    0.0226e+004       0.0126e+004
500    0.0242e+004       0.0125e+004
600    0.0241e+004       0.0133e+004
700    0.0240e+004       0.0106e+004
800    0.0270e+004       0.0134e+004

Figure 4.1: Distance between particles in search space against average global value for a Sphere function.

Table 4.4 provides the average global best values and the standard deviations for different values of delta.

Table 4.2: Results for Neighborhood determination based on nearest neighbors in the search space

δ_i     Griewank Average    Griewank Standard Deviation
0       571.3779            11.9409
540     570.0917            10.9047
1080    570.2146            10.4516
1215    540.1840            29.4727
1350    257.1394            69.2675
1485    62.6737             29.4400
1620    10.9745             8.2421
1755    3.3324              1.0514
1890    2.9250              0.7459
2025    3.2388              1.1559
2430    3.1799              1.4158

Figure 4.2: Distance between particles in search space against average global value for a Griewank function.

The behavior of the Ackley function under the Asynchronous PSO algorithm can be seen in Table 4.5. The function tends to converge with some delay when compared with the Synchronous PSO algorithm.

Table 4.3: Results for Neighborhood determination based on nearest neighbors in the search space

δ_i    Rastrigin Average    Rastrigin Standard Deviation
0      330.6161             3.2219
5      330.3835             5.2924
10     329.2015             5.1083
15     66.9794              23.3730
20     67.4158              18.7955
25     75.3494              26.1219
30     69.1914              15.7561
35     73.6616              23.4680
40     70.5188              20.3036
45     74.5787              21.1036
50     66.9199              15.8701

Figure 4.3: Distance between particles in search space against average global value for a Rastrigin function.

The characteristic of the function can be seen in Figure 4.5.

Table 4.4: Results for Neighborhood determination based on nearest neighbors in the search space

δ_i    Rosenbrock Average    Rosenbrock Standard Deviation
0      8.7242e+003           0.2349
1      8.8231e+003           0.3610
2      8.7505e+003           0.3024
3      8.7269e+003           0.3254
4      8.3930e+003           0.4003
5      1.2786e+003           0.3860
6      0.0439e+003           0.0498
7      0.0287e+003           0.0079
8      0.0313e+003           0.0078
9      0.0314e+003           0.0078
10     0.0337e+003           0.0145

Figure 4.4: Distance between particles in search space against average global value for a Rosenbrock function.

DejongF4 performs well for the Asynchronous PSO algorithm. The function attains convergence for delta values of 3 and more. Figure 4.6 provides detailed information about the function's performance for the Asynchronous PSO algorithm.

Table 4.5: Results for Neighborhood determination based on nearest neighbors in the search space

δ_i    Ackley Average    Ackley Standard Deviation
0      20.8811           0.0321
14     20.8754           0.0299
28     20.8650           0.0276
42     20.8761           0.0287
56     20.8705           0.0367
70     19.4570           1.7016
84     8.4161            0.9767
98     8.3832            1.1241
112    7.7498            1.1821
126    7.7994            1.2432
140    8.1253            1.3366
154    8.0348            1.3959
168    7.7525            1.4586
182    8.2070            1.2138
196    7.8427            1.5255
210    7.9681            1.0402
224    7.8250            1.1760
238    7.8091            1.1239
252    7.8267            1.1723
266    7.5802            1.1897

The observed values of the function are given in Table 4.6.

In the random neighborhood the sphere function tends to converge to the origin for epsilon values greater than 0.01, as can be observed in Figure 4.13. The corresponding values are given in Table 4.13.

The results for the Griewank function in the random neighborhood can be seen in Figure 4.14. The average global best values and the standard deviations can be observed in Table 4.14.

Figure 4.5: Distance between particles in search space against average global value for an Ackley function.

Figure 4.6: Distance between particles in search space against average global value for a DejongF4 function.

Table 4.6: Results for Neighborhood determination based on nearest neighbors in the search space

δ_i    DejongF4 Average    DejongF4 Standard Deviation
0      66.5686             2.6081
1      67.7789             2.5408
2      67.3189             2.4454
3      23.4444             4.5818
4      0.1027              0.0705
5      0.1642              0.1282
6      0.2103              0.2519
7      0.3440              0.6958
8      0.1342              0.1658
9      0.1600              0.1541
10     0.1616              0.1304
11     0.1712              0.1715
12     0.2963              0.4791
13     0.1885              0.2732
14     0.1620              0.1154
15     0.1085              0.0980
16     0.1551              0.1698
17     0.2284              0.2449
18     0.1488              0.1897
19     0.1843              0.2865
20     0.1068              0.0878

The Rosenbrock function converges well for the Asynchronous PSO. Figure 4.16 demonstrates the function behavior and Table 4.16 provides the average values of the global best.

The probability of particles being neighbors against the average global values for an Ackley function can be seen in Figure 4.17, with the corresponding values in Table 4.17.

The DejongF4 function converges to the origin for the random neighborhood, and Figure 4.18 shows the convergence clearly. The function's average global best values can be obtained from Table 4.18.

Table 4.7: Results for Neighborhood determination based on nearest neighbors in the function space

δ_i    Sphere Average    Sphere Standard Deviation
0      6.3586e+004       0.1563e+004
100    4.7675e+004       0.2103e+004
200    4.1029e+004       0.2733e+004
300    3.4958e+004       0.3292e+004
400    3.2650e+004       0.2177e+004
500    2.9739e+004       0.2265e+004
600    2.7985e+004       0.2276e+004
700    2.5243e+004       0.2149e+004
800    2.3929e+004       0.1949e+004

Figure 4.7: Distance between particles in function space against average global value for a Sphere function.

Figures 4.19 and 4.20 show the normalized plots of all six functions.

Table 4.8: Results for Neighborhood determination based on nearest neighbors in the function space

δ_i    Griewank Average    Griewank Standard Deviation
0.0    573.1486            11.9657
0.2    528.8343            15.6901
0.4    497.5373            16.5225
0.6    467.0901            17.5689
0.8    448.9577            21.3740
1.0    422.6330            23.4547
1.2    415.1648            16.2671
1.4    398.6623            21.1827
1.6    384.1906            23.4488
1.8    368.1292            20.7247
2.0    357.4898            22.8019

Figure 4.8: Distance between particles in function space against average global value for a Griewank function.

Table 4.9: Results for Neighborhood determination based on nearest neighbors in the function space

δ_i     Rastrigin Average    Rastrigin Standard Deviation
0.00    331.6112             3.5545
0.05    321.6755             5.1924
0.10    316.3325             4.7620
0.15    310.0599             6.8977
0.20    304.3408             5.2595
0.25    299.9657             4.9468

Figure 4.9: Distance between particles in function space against average global value for a Rastrigin function.

Table 4.10: Results for Neighborhood determination based on nearest neighbors in the function space

δ_i    Rosenbrock Average    Rosenbrock Standard Deviation
0      8.7253e+003           0.2789
2      8.2791e+003           0.4009
4      8.0037e+003           0.3420
6      7.8770e+003           0.4289
8      7.6624e+003           0.4523
10     7.4413e+003           0.3750
12     7.2348e+003           0.4180
14     7.0375e+003           0.5031
16     6.9293e+003           0.3077
18     6.7216e+003           0.3733
20     6.5674e+003           0.5382

Figure 4.10: Distance between particles in function space against average global value for a Rosenbrock function.

Table 4.11: Results for Neighborhood determination based on nearest neighbors in the function space

δ_i    Ackley Average    Ackley Standard Deviation
0      20.8744           0.0251
14     8.0538            1.3107
28     8.5209            1.5953
42     8.0578            1.4818
56     7.5965            1.1630
70     8.1897            1.6452
84     7.7160            0.8330
98     7.7924            0.9746
112    7.5266            1.3178
126    7.7498            1.3374
140    8.1370            1.2853
154    7.8489            1.5198
168    8.1526            1.2600
182    7.7348            1.1805
196    7.6465            1.5611
210    7.4146            1.4053
224    7.9201            1.4060
238    7.8236            0.9473
252    7.8558            1.5060
266    7.5815            1.5235

Figure 4.11: Distance between particles in function space against average global value for an Ackley function.

Figure 4.12: Distance between particles in function space against average global value for a DejongF4 function.

Table 4.12: Results for Neighborhood determination based on nearest neighbors in the function space

δ_i    DejongF4 Average    DejongF4 Standard Deviation
0      67.5594             2.0002
1      8.6158              1.6660
2      5.2597              0.8660
3      3.8965              0.9899
4      3.7299              0.6800
5      3.0997              0.9028
6      2.8710              0.7374
7      2.4245              0.5538
8      2.3220              0.6580
9      1.8058              0.4510
10     2.1143              0.5994
11     1.8735              0.5638
12     1.7956              0.5500
13     1.6887              0.6554
14     1.3973              0.5457
15     1.4229              0.5308
16     1.2841              0.3509
17     1.1743              0.3689
18     1.0867              0.5678
19     0.9457              0.3828
20     0.9985              0.3606

Table 4.13: Results for Neighborhood determination based on the random neighborhood

ε       Sphere Average    Sphere Standard Deviation
0.00    6.3957e+004       0.1146e+004
0.01    0.2351e+004       0.0979e+004
0.02    0.1554e+004       0.0775e+004
0.03    0.0873e+004       0.0329e+004
0.04    0.0913e+004       0.0468e+004
0.05    0.0699e+004       0.0428e+004
0.06    0.0596e+004       0.0239e+004

Figure 4.13: Distance between particles in Random Neighborhood against average global value for a Sphere function.

Table 4.14: Results for Neighborhood determination based on the random neighborhood

ε       Griewank Average    Griewank Standard Deviation
0.00    575.7934            12.2256
0.01    22.7818             7.6930
0.02    12.4716             3.1203
0.03    10.8748             4.0529
0.04    7.5485              2.3189
0.05    8.1106              3.8985
0.06    7.6893              2.0401

Figure 4.14: Distance between particles in Random Neighborhood against average global value for a Griewank function.

Table 4.15: Results for Neighborhood determination based on the random neighborhood

ε       Rastrigin Average    Rastrigin Standard Deviation
0.00    330.3911             3.7036
0.01    150.1788             18.6396
0.02    121.2920             20.9745
0.03    111.4441             17.5327
0.04    95.6824              19.2642
0.05    94.1507              21.5747
0.06    91.4594              18.9888

Figure 4.15: Distance between particles in random neighborhood against average global value for a Rastrigin function.

Table 4.16: Results for Neighborhood determination based on the random neighborhood

ε       Rosenbrock Average    Rosenbrock Standard Deviation
0.00    8.7085e+003           0.2655
0.01    0.1507e+003           0.0488
0.02    0.0726e+003           0.0282
0.03    0.0563e+003           0.0199
0.04    0.0550e+003           0.0202
0.05    0.0535e+003           0.0194
0.06    0.0483e+003           0.0157

Table 4.17: Results for Neighborhood determination based on random neighborhood

ε       Ackley Average    Ackley Standard Deviation
0.00    12.5035           0.0000
0.01    6.3425            0.0341
0.02    6.2404            0.0094
0.03    6.2160            0.0051
0.04    6.2078            0.0033
0.05    6.2028            0.0023
0.06    6.2012            0.0021

Figure 4.16: Distance between particles in random neighborhood against average global value for a Rosenbrock function.

Figure 4.17: Probability of particles being neighbors against mean global best value for an Ackley function.

Table 4.18: Results for Neighborhood determination based on random neighborhood

ε      DejongF4 Average    DejongF4 Standard Deviation
0      67.0792             1.7892
0.01   0.2967              0.1585
0.02   0.1061              0.0683
0.03   0.0500              0.0366
0.04   0.0607              0.0503
0.05   0.0437              0.0340
0.06   0.0549              0.0796

Figure 4.18: Probability of particles being neighbors against mean global best value for a DejongF4 function.

Figure 4.19: Neighborhood size expressed as percentage of function space against Average global best value for Synchronous PSO

Figure 4.20: Neighborhood size expressed as percentage of function space against Average global best value for Asynchronous PSO

CHAPTER 5

RESULTS - III

In this chapter, we discuss the results obtained by parameterizing the number of neighbors in the swarm for a given value of the neighborhood size (characterized by delta). The goal of this study is to understand the role played by the number of neighbors in the convergence of both the synchronous and the asynchronous versions of the PSO algorithm.

5.1 Synchronous PSO

In this section, results are first presented for the synchronous version of the PSO algorithm. The results below show the performance of the synchronous PSO algorithm with the number of neighbors as the test parameter and the delta value fixed. The behavior of the different benchmark functions is simulated, and the functions are observed to converge as the number of neighbors increases. This clearly shows that convergence improves with the amount of communication between the particles: as the number of neighbors increases, the communication between the particles also increases, and hence the particles converge. In other words, as the number of neighbors goes up, the probability of locating more particles in a given delta-sized neighborhood (under all definitions of the neighborhood, search-space as well as functional) goes up. With a higher number of particles in the neighborhood, the probability of one of the particles in the neighborhood being close to the global minimum goes up as well. This in turn ensures that the entire swarm is directed towards the global minimum faster. This trend can also be seen in the simulation results presented in this chapter for all the functions under consideration.

Figure 5.1 shows the behavior of the sphere function in the dynamic neighborhood. The function clearly converges to the origin. The function values can be obtained from Table 5.1.

Table 5.1: Results for particle convergence based on number of neighbors for δ=800

No. of Neighbors    Sphere Average    Sphere Standard Deviation
50                  258.6537          192.4482
100                 13.8606           13.5307
150                 1.2731            1.1156
200                 0.1600            0.2124
250                 0.0406            0.0938
300                 0.0028            0.0055

The Griewank function also converges to its global minimum, and the trend can be observed in Figure 5.2. The corresponding values are recorded in Table 5.2.

Table 5.2: Results for particle convergence based on number of neighbors for δ=2500

No. of Neighbors    Griewank Average    Griewank Standard Deviation
50                  2.8116              0.9349
100                 0.8726              0.1989
150                 0.5037              0.1719
200                 0.2412              0.1149
250                 0.1980              0.1230
300                 0.1501              0.1046

Figure 5.1: Average global best value versus No. of Neighbors for Sphere function.

Figure 5.2: Average global best value versus No.of Neighbors for Griewank function.

The performance of the rest of the functions, i.e. Rastrigin, Rosenbrock, Ackley and DejongF4, can be observed in Figures 5.3, 5.4, 5.5, and 5.6. The global minimum for all the functions under consideration is at the origin and, as seen from the figures, all the functions including Rastrigin, Rosenbrock, Ackley and DejongF4 converge to the origin. The simulation results for the functions are also tabulated and can be seen in Tables 5.3, 5.4, 5.5, and 5.6.

Table 5.3: Results for particle convergence based on number of neighbors for δ=40

No. of Neighbors    Rastrigin Average    Rastrigin Standard Deviation
50                  57.3733              20.6049
100                 46.5096              16.4829
150                 36.8503              9.9199
200                 36.2793              11.2540
250                 42.9332              19.0923
300                 37.5599              12.5638

Table 5.4: Results for particle convergence based on number of neighbors for δ=10

No. of Neighbors    Rosenbrock Average    Rosenbrock Standard Deviation
50                  28.8054               9.2648
100                 18.9470               1.0289
150                 18.3895               1.0129
200                 17.6449               1.2547
250                 18.0652               1.1154
300                 17.6476               1.4344

Figure 5.3: Average global best value versus No. of Neighbors for Rastrigin function.

Figure 5.4: Average global best value versus No.of Neighbors for Rosenbrock function.

Table 5.5: Results for particle convergence based on number of neighbors for δ=400

No. of Neighbors    Ackley Average    Ackley Standard Deviation
50                  8.2362            1.6062
100                 6.5000            1.3727
150                 5.3167            1.3937
200                 4.3502            1.1679
250                 3.6113            1.0246
300                 4.3938            0.7360

Figure 5.5: Average global best value versus No.of Neighbors for Ackley function.

5.2 Asynchronous PSO

The results in this section are for the Asynchronous PSO algorithm. The functions that were simulated for the Synchronous PSO are considered here. We observe a delay in the behavior of the functions, but they still tend to converge to the origin, thereby demonstrating the performance of the algorithm.

Table 5.6: Results for particle convergence based on number of neighbors for δ=200

No. of Neighbors    DejongF4 Average    DejongF4 Standard Deviation
50                  307.8588            384.5656
100                 3.0061              6.4979
150                 0.5337              1.6040
200                 0.0084              0.0162
250                 0.0008              0.0013
300                 0.0001              0.0001

Figure 5.6: Average global best value versus No.of Neighbors for DejongF4 function.

The functions here are tested by increasing the number of neighbors to about 500 to clearly show the difference in delay between the synchronous and asynchronous versions of the algorithm. The figures for the Asynchronous PSO are Figures 5.7, 5.8, 5.9, 5.10, 5.11, and 5.12, and the corresponding simulation values are given in Tables 5.7, 5.8, 5.9, 5.10, 5.11, and 5.12.

Table 5.7: Results for particle convergence based on number of neighbors for δ=800

No. of Neighbors    Sphere Average    Sphere Standard Deviation
50                  71203282          251.8716
100                 186.3587          91.0392
150                 82.9078           58.9767
200                 30.8095           26.4038
250                 14.7593           8.4294
300                 7.3920            4.7486
350                 4.8881            4.7779
400                 2.7815            2.4342
450                 1.4279            1.1595
500                 0.6411            0.3692

Figure 5.7: Average global best value versus No.of Neighbors for Sphere function.

Table 5.8: Results for particle convergence based on number of neighbors for δ=2500

No. of Neighbors    Griewank Average    Griewank Standard Deviation
50                  9.5110              4.4340
100                 3.0487              1.1829
150                 1.7115              0.3821
200                 1.1996              0.2996
250                 1.0811              0.1819
300                 0.9697              0.2152
350                 0.8697              0.2052
400                 0.7807              0.1624
450                 0.7337              0.1826
500                 0.5583              0.1936

Figure 5.8: Average global best value versus No.of Neighbors for Griewank function.

Table 5.9: Results for particle convergence based on number of neighbors for δ=40

No. of Neighbors    Rastrigin Average    Rastrigin Standard Deviation
50                  101.7170             31.0366
100                 66.5266              29.6108
150                 55.1467              15.4988
200                 59.7622              17.9724
250                 45.7039              14.2258
300                 51.7633              14.1896
350                 45.8639              16.6343
400                 44.3222              14.1996
450                 41.4221              12.8100
500                 46.8980              16.8383

Figure 5.9: Average global best value versus No.of Neighbors for Rastrigin function.

Table 5.10: Results for particle convergence based on number of neighbors for δ=10

No. of Neighbors    Rosenbrock Average    Rosenbrock Standard Deviation
50                  62.4817               22.9402
100                 28.9207               29.6108
150                 21.5823               6.0055
200                 20.0470               3.3195
250                 18.8465               1.1049
300                 18.7952               1.0283
350                 18.6121               0.6430
400                 18.3842               0.8716
450                 18.2564               1.0482
500                 18.0269               1.1798

Figure 5.10: Average global best value versus No.of Neighbors for Rosenbrock function.

Table 5.11: Results for particle convergence based on number of neighbors for δ=400

No. of Neighbors    Ackley Average    Ackley Standard Deviation
50                  10.2856           1.5980
100                 8.2525            1.4318
150                 6.5514            1.2435
200                 6.1386            1.1067
250                 5.4081            1.1700
300                 5.3456            1.2409
350                 5.2894            1.7048
400                 5.1688            1.0354
450                 5.0305            0.7915
500                 5.2436            1.3445

Figure 5.11: Average global best value versus No.of Neighbors for Ackley function.

Table 5.12: Results for particle convergence based on number of neighbors for δ=200

No. of Neighbors    DejongF4 Average    DejongF4 Standard Deviation
50                  2.4346              1.5974
100                 0.3190              0.3850
150                 0.0446              0.0337
200                 0.0086              0.0096
250                 0.0045              0.0057
300                 0.0012              0.0017
350                 0.0004              0.0004
400                 0.0002              0.0002
450                 0.0001              0.0001
500                 0.0001              0.0002

Figure 5.12: Average global best value versus No.of Neighbors for DejongF4 function.

CHAPTER 6

CONCLUSIONS

In this work, we have studied Particle Swarm Optimization (PSO) as applied to dynamic neighborhoods, with the goal of minimizing certain benchmark functions. The benchmark functions identified for optimization include the sphere, Griewank, Rastrigin, Rosenbrock, Ackley and DejongF4 functions. The formulas describing the functions are reproduced below for convenience.

Function      Mathematical Expression                                                                Search Range
Sphere        f(x) = \sum_{i=1}^{n} x_i^2                                                            [-100, 100]
Griewank      f(x) = \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos(x_i/\sqrt{i}) + 1      [-600, 600]
Rastrigin     f(x) = \sum_{i=1}^{n} (x_i^2 - 10\cos(2\pi x_i) + 10)                                  [-5.12, 5.12]
Rosenbrock    f(x) = \sum_{i=1}^{n-1} ((1 - x_i)^2 + 100(x_{i+1} - x_i^2)^2)                         [-2.048, 2.048]
Ackley        f(x) = 20 + e - 20 e^{-\frac{1}{5}\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}} - e^{\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)}    [-32, 32]
DejongF4      f(x) = \sum_{i=1}^{n} i x_i^4                                                          [-1.28, 1.28]

The PSO algorithm is studied in both the synchronous and the asynchronous versions. In the synchronous version, the algorithm has been studied as a function of different parameters, including the neighborhood size (delta) for a given number of neighbors and different numbers of neighbors for a given neighborhood size.
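For reference, the six functions in the table can be written compactly as follows; this is an illustrative Python/NumPy sketch (the thesis results were produced in Matlab), and the DejongF4 form used is the noise-free variant shown in the table.

import numpy as np

def sphere(x):
    return np.sum(x**2)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def rastrigin(x):
    return np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x) + 10.0)

def rosenbrock(x):
    return np.sum((1 - x[:-1])**2 + 100.0 * (x[1:] - x[:-1]**2)**2)

def ackley(x):
    return (20.0 + np.e
            - 20.0 * np.exp(-0.2 * np.sqrt(np.mean(x**2)))
            - np.exp(np.mean(np.cos(2 * np.pi * x))))

def dejong_f4(x):
    i = np.arange(1, x.size + 1)
    return np.sum(i * x**4)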

Using the synchronous version of the algorithm first, the number of neighbors was fixed at 100 particles and three different definitions of the neighborhood were studied. The definitions used for the neighborhood are based on the distance between the particles in the search space (with the Euclidean norm as the metric), the distance in the function space (defined by the global function being optimized), and a probabilistic definition of the neighborhood. Under all conditions, it is shown that the PSO algorithm converges for the functions under consideration. In particular, for smooth and well-behaved functions such as the sphere, the convergence is quicker, while the convergence time is longer for functions that may have multiple local minima. In this work, we have included results for Ackley and DejongF4 which are not found in the literature. We have also shown the role played by the number of particles in the neighborhood in defining the convergence behavior of the algorithm. In particular, it can be seen that for a given value of delta (under all definitions of the neighborhood), a higher number of particles in the search space leads to faster convergence. This is due to the fact that a larger number of particles in the search space ensures a higher probability of finding more particles in the neighborhood defined by a certain delta.

As the number of particles in the neighborhood goes up, the probability of one of the particles being closer to the global minimum, and hence driving the entire swarm to the global minimum, also goes up. This is also evidenced in the performance plots presented in the results sections of this work.

The asynchronous version of the PSO algorithm has also been studied and the results compared with the synchronous version. The asynchronous version of the algorithm models delay in passing information across the neighbors and captures any process delay in naturally occurring particle swarms. Since the information passing is slowed down and occurs only once in several passes, as opposed to every single iteration in the synchronous version, the convergence of the algorithm is also correspondingly slower with the asynchronous version. The important conclusion for the asynchronous version is that even if the particles in the swarm do not exchange information after every iteration, the exchange of accumulated information over several iterations is good enough to lead to the convergence of the algorithm, even if the rate of convergence is affected by the slowing of information exchange. Similarly, for a given delta, when the number of particles in the swarm is used as a parameter, it can be noticed that a larger number of particles is required to achieve a given level of convergence (as measured by the mean of the global minimum achieved after a given number of iterations). This is again in accordance with the conclusions about the rate of convergence.

In conclusion, different variants of the PSO algorithm have been studied in this work and their properties analyzed as a function of several underlying parameters. It is shown that the PSO algorithm converges to the desired solution under all the conditions for the functions considered in this work, and that the asynchronous variants, which may be more suitable for implementation on practical processors with inter-processor delay, also lead to good convergence.

The computation times for both Synchronous and Asynchronous PSO for the DeJongF4 function with δ = 20 are given below; from these results it can be inferred that Asynchronous PSO takes less time to compute than Synchronous PSO. The elapsed time shown in the table was obtained using the tic and toc functions in Matlab. The Matlab program used to generate the results in this thesis was run on an HP PC with an Intel Core 2 Duo processor and 3 GB of RAM. The elapsed time is shown for a fixed delta value as a representative number; the computation times for other values of delta were observed to follow the same relationship.

Neighborhood    Elapsed Time (sec)    PSO Nature      Type of Strategy
Static          209.960546            Synchronous     Fixed Neighborhood
Static          23.149013             Asynchronous    Fixed Neighborhood
Dynamic         256.254812            Synchronous     Search space
Dynamic         31.334100             Asynchronous    Search space
Dynamic         1788.731225           Synchronous     Function space
Dynamic         160.005547            Asynchronous    Function space
Dynamic         230.065358            Synchronous     Random
Dynamic         19.677330             Asynchronous    Random

Figures 6.1 and 6.2 compare Synchronous PSO and Asynchronous PSO in static and dynamic neighborhoods for the DeJongF4 function, demonstrating the behavior of the PSO algorithm in both cases.

Figure 6.1: Comparison of Synchronous PSO and Asynchronous PSO in Static Neighborhood

Figure 6.2: Comparison of Synchronous PSO and Asynchronous PSO in Dynamic Neighborhood

6.1 Design Guidelines

1. From the computation times shown in the table, it can be seen that the time taken for the convergence of the asynchronous algorithm is an order of magnitude smaller than for the synchronous case.

2. The final value achieved for the global minimum is comparable for both the synchronous and asynchronous algorithms in the case of the DejongF4 function, as seen in the plots. From the comparison of the computation times and the convergence plots, it is clear that the asynchronous algorithm provides a useful tradeoff between the total computation time and the final global minimum achieved.

3. In addition, the asynchronous algorithm also helps overcome the basic issue of synchronization between multiple processors, if each particle can be modeled as running on a different processor.

4. Taking all these aspects into consideration, this study suggests that the asynchronous algorithm provides several advantages over the synchronous algorithm and is recommended for primary consideration among the algorithms considered in this study.

6.2 Future Work

The PSO algorithm can be tested for performance on other benchmark functions with dynamic neighborhood using the three different methods.

BIBLIOGRAPHY

[1] P. J. Angeline, “Using selection to improve particle swarm optimization,”

in Proceedings of IEEE Congress on Evolutionary Computation (CEC-1998).,

Anchorage, AK, USA., 1998, pp. 84–89.

[2] ——, “Evolutionary optimization versus particle swarm optimization:

philosophy and performance differeneces,” in Proceedings of the Seventh Annual

Conference on Evolutionary Programming., San Diego, California, USA., 1998,

pp. 601–610.

[3] J. Kennedy, “Minds and cultures: Particle swarm implication,” in Socially

Intelligent Agents: Papers from the 1997 Fall Symposium., Menlo Park, CA.,

1997, pp. 67–72.

[4] H. Fan and Y. Shi, “Study on Vmax of particle swarm optimization,” in

Proceedings of IEEE Swarm Intelligence Symposium (SIS-2001)., Indianapolis,

Indiana, USA., 2003, pp. 193–197.

[5] M. Clerc, “The swarm and the queen towards a deterministic and adaptive

particle swarm optimization,” in Proceedings of IEEE Congress on Evolutionary

Computation (CEC-1999)., Washington, DC, USA., 1999, pp. 1951–1957.

[6] M. Clerc and J. Kennedy, “The particle swarm optimization: explosion, stability,

and adaptive particle swarm optimization,” IEEE Transactions on Evolutionary

Computation., vol. 6, pp. 58–73, 2002.

[7] R. C. Eberhart and Y. Shi, “Comparing inertia weights and constriction factors in

particle swarm optimization,” in Proceedings of IEEE Congress on Evolutionary

Computation (CEC-2000)., vol. 1, La Jolla, CA, USA., 2000, pp. 84–88.

[8] M. Løvbjerg and T. Krink, “Extending particle swarms with self-organized

critically,” in Proceedings of Fourth Congress on Evolutionary Computation

(CEC-2002), vol. 2, New York, NY, USA., 2002, pp. 1588–1593.

[9] M. Løvbjerg, T. Rasmussen, and T. Krink, “Hybrid particle swarm optimizer

with breeding and subpopulations,” in Proceedings of the Third Genetic and

Evolutionary Computation Conference (GECCO-2001), vol. 1, 2001, pp. 469–

476.

[10] R. L. Haupt and S. E. Haupt, Practical Genetic Algorithms, 2nd ed. Wilwey-

IEEE, 2004.

[11] V. Miranda and N. Fonseca, “New evolutionary particle swarm algorithm

(EPSO) applied to voltage/VAR control,” in Proceedings of 14th Power Systems

Computation Conference (PSSC’02), Seville, Spain., 2002, pp. 745–750.

[12] X. Xie, W. Zhang, and Z. Yang, “Adaptive particle swarm optimization on

individual level,” in Proceedings of 6th International Conference on Signal

Processing (ICSP-2002), vol. 2, Beijing, China., 2002, pp. 1215–1218.

[13] T. Blackwell and P. J. Bentley, “Don’t push me! collision-avoiding swarms,” in

Proceedings of IEEE Congress on Evolutionary Computation (CEC-2002), vol. 2,

Honolulu, HI, USA., 2002, pp. 1691–1696.

[14] T. Krink, J. S. Verterstørm, and J. Riget, “Particle swarm optimization with

spatial particle extension,” in Proceedings of Fourth Congress on Evolutionary

Computation, vol. 2, Honolulu, HI, USA., 2002, pp. 1474–1479.

[15] T. Stacey, M. Jancic, and I. Grundy, “Particle swarm optimization with

mutation,” in Proceedings of IEEE Congress on Evolutionary Computation

(CEC-2003), vol. 2, Canberra, Australia., 2003, pp. 1425–1430.

[16] S. C. Esquivel and C. A. C. Coello, “On the use of particle swarm optimization

with multimodal functions,” in Proceedings of IEEE Congress on Evolutionary

Computation (CEC-2002), vol. 2, Honolulu, HI, USA., 2003, pp. 1130–1136.

[17] T. Krink and M. Løvbjerg, “The LifeCycle model: combining particle swarm

optimisation, genetic algortihms and hill climbers,” in Proceedings of Parallel

Problem Solving Nature (PPSN), Granada, Spain., 2002, pp. 621–630.

[18] T. Hendtlass and M. Randall, “A survey of ant colony and particle swarm

and their application to discrete optimization problems,” in

Proceedings of Inaugural Workshop on Artificial Life (AL’01), Melbourne,

Australia., 2001, pp. 15–25.

[19] F. Vandenbergh and A. P. Engelbrecht, “A cooperative approach to particle

swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8,

pp. 225–239, 2004.

[20] J. Riget and J. S. Vesterstrøm, “A diversity-guided particle swarm optimizer-

the ARPSO,” 2002, eVALife Technical Report, Department of Computer Science

University of Aarhus.

[21] K. E. Parsopoulos and M. Vrahatis, “On the computation of all global minimizers

through particle swarm optimization,” IEEE Transactions on Evolutionary

Computation, vol. 8, pp. 221–224, 2004.

[22] S. Janson and M. Middendorf, “A hierarchical particle swarm optimization,” in

Proceedings of IEEE Congress on Evolutionary Computation (CEC-2003), vol. 2,

Canberra, Australia., 2003, pp. 770–776.

[23] C. K. Monson and K. D. Seppi, “The kalman swarm a new approach to particle

motion in swarm optimization,” in Proceedings of the Genetic and Evolutionary

Computation Conference (GECCO), June 2004, pp. 140–150.

[24] J. Vesterstørm and R. Thomsen, “A comparative study of differential evolution,

particle swarm optimization, and evolutionary algorithms on numerical

benchmark problems,” in Proceedings of IEEE Congress on Evolutionary

Computation (CEC-2004), vol. 2, Portland, OR, USA., 2004, pp. 1980–1987.

[25] J. J. Liang and N. Suganthan, “Dynamic particle swarm optimizer,” in

Procedings of IEEE Swarm Intelligence Symposium (SIS), May 2005, pp. 124–

129.

[26] X. Hu, R. C. Eberhart, and Y. Shi, “Particle swarm with extended memory

for multiobjective optimization,” in Proceedings of IEEE Swarm Intelligence

Symposium (SIS-2003)., Indianapolis, Indiana, USA., 2003, pp. 193–197.

[27] K. Miettinen, “Nonlinear multiobjective optimization,” 1999.

[28] A. C. C. Coello, “Evolutionary multi-objective optimization: Basic concepts

and some applications in pattern recognition,” in IEEE Swarm Intelligence

Symposium, Mexico, 2011.

[29] R. Marler and J. Arora, “Survey of multi-objective optimization methods for

engineering,” in Structural and Multidisciplinary Optimization, vol. 26, num. 6,

USA, 2004, pp. 369–395.

[30] J. Liu, H. Liu, and W. Shen, “Stability analysis of particle swarm optimization,”

in Advanced Intelligent Computing Theories And Applications With Aspects of

Artificial Intelligence, volume 4682, 2007, pp. 781–790.

[31] V. Kadirkamanathan, K. Selvarajah, and P. Fleming, “Stability analysis of

the particle dynamics in particle swarm optimizer,” in IEEE Transactions on

Evolutionary Computation, vol. 10, Issue 3, 2006, pp. 245–255.

[32] J. Shen and W. Kan, “The convergence basis of particle swarm optimization,” in

American Journal of Engineering and Technology Research, vol.11, no. 9, 2011,

pp. 1138–1144.

[33] V. Gudise and G. K. Venayagamoorthy, “Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks,” in Proceedings of IEEE Symposium on Swarm Intelligence (SIS-2003), Indianapolis, Indiana, USA, 2003, pp. 110–117.

[34] C. Zhang, H. Shao, and Y. Li, “Particle swarm optimisation for evolving neural networks,” in Proceedings of IEEE International Conference on Systems, Man and Cybernetics, Washington, DC, USA, 2000, pp. 2487–2490.

[35] F. van den Bergh and A. P. Engelbrecht, “Cooperative learning in neural networks using particle swarm optimizers,” South African Computer Journal, vol. 26, pp. 84–90, 2000.

[36] R. A. Conradie, R. Miikkulainen, and C. Aldrich, “Adaptive control utilising ’Neural Swarming’,” in Proceedings of the Genetic and Evolutionary Computation Conference, New York, NY, USA, 2002, pp. 60–67.

[37] K. E. Parsopoulos and M. N. Vrahatis, “Particle swarm optimizer in noisy and continuously changing environments,” in Proceedings of International Conference on Artificial Intelligence and Soft Computing, Cancun, Mexico, 2001, pp. 289–294.

[38] A. Carlisle and G. Dozier, “Adapting particle swarm optimization to dynamic environments,” in Proceedings of International Conference on Artificial Intelligence, Las Vegas, NV, USA, 2000, pp. 429–434.

[39] X. Hu and R. C. Eberhart, “Adaptive particle swarm optimization: detection and response to dynamic systems,” in Proceedings of IEEE Congress on Evolutionary Computation, vol. 2, Honolulu, Hawaii, USA, 2002, pp. 1666–1670.

[40] E. Sahin, “Swarm robotics: from sources of inspiration to domains of application,” in Swarm Robotics: State-of-the-art Survey, ser. Lecture Notes in Computer Science (LNCS 3342), E. Sahin and W. Spears, Eds. Berlin Heidelberg: Springer-Verlag, 2005, pp. 10–20.

[41] J. M. Hereford, “A distributed particle swarm algorithm for swarm robotic applications,” in Proceedings of IEEE Congress on Evolutionary Computation (CEC-2006), vol. 2, Vancouver, BC, Canada, 2006, pp. 1678–1685.

[42] G. K. Venayagamoorthy and A. V. Gudise, “Optimal PSO for collective robotic search applications,” in Proceedings of IEEE Congress on Evolutionary Computation (CEC-2004), vol. 2, Portland, OR, USA, 2004, pp. 1390–1395.

[43] J. M. Hereford, M. Siebold, and S. Nichols, “Using the particle swarm optimization algorithm for robotic search applications,” in Proceedings of IEEE Symposium on Swarm Intelligence (SIS-2007), Honolulu, Hawaii, USA, 2007, pp. 53–59.

[44] J. Pugh and A. Martinoli, “Inspiring and modelling multi-robot search with particle swarm optimization,” in Proceedings of IEEE Congress on Evolutionary Computation (CEC-2002), vol. 2, Honolulu, Hawaii, USA, 2002, pp. 1666–1670.

[45] F. Mondada et al., “The e-puck, a robot designed for education in engineering,” in Proceedings of the 9th Conference on Autonomous Robot Systems and Competitions, vol. 1, no. 1, May 2009, pp. 59–65.

[46] J. Pugh, L. Segapelli, and A. Martinoli, “Applying aspects of multi-robot search to particle swarm optimization,” in International Workshop on Ant Colony Optimization and Swarm Intelligence, Brussels, Belgium, 2006, pp. 506–515.

[47] L. Marques, U. Nunes, and A. T. de Almeida, “Particle swarm-based olfactory guided search,” Autonomous Robots, vol. 20, no. 3, pp. 277–287, May 2006.

[48] W. Jatmiko, K. Sekiyama, and T. Fukuda, “A PSO-based mobile sensor network for odor source localization in dynamic environment: theory, simulation and measurement,” in Proceedings of IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, July 2006, pp. 1036–1043.

[49] ——, “A PSO-based mobile sensor network for odor source localization in dynamic advection-diffusion with obstacles environment: theory, simulation and measurement,” IEEE Computational Intelligence Magazine, pp. 37–51, July 2007.

[50] S. Xue and J. Zeng, “Sense limitedly, interact locally: the control strategy for swarm robots search,” in Proceedings of IEEE International Conference on Networking, Sensing and Control (ICNSC), April 2008, pp. 402–407.

[51] R. V. Kulkarni and G. K. Venayagamoorthy, “Particle swarm optimization in wireless-sensor networks: a brief survey,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 41, no. 2, pp. 262–267, March 2011.

[52] Z. Bojkovic and B. Bakmaz, “A survey on wireless sensor networks deployment,” WSEAS Transactions on Communications, vol. 12, pp. 1172–1181, 2008.

[53] M. A. Hassan and M. Abido, “Optimal design of microgrids in autonomous and grid-connected modes using particle swarm optimization,” IEEE Transactions on Power Electronics, vol. 26, no. 3, pp. 755–769, March 2011.

[54] H. H. E. Mora, A. Zerguine, and A. U. Sheikh, “Optimal multi user detection in CDMA using particle swarm optimization,” The Arabian Journal for Science and Engineering, vol. 34, no. 1B, pp. 197–202, April 2009.

[55] D. J. Watts and S. H. Strogatz, “Collective dynamics of ‘small-world’ networks,” Nature, vol. 393, pp. 440–442, June 1998.

[56] V. Gazi, “Particle swarm optimization with varying neighborhood” (in Turkish), in Proceedings of the IEEE Signal Processing and Communications Applications Conference (SIU-2007), June 2007.

[57] ——, “Asynchronous particle swarm optimization” (in Turkish), in Proceedings of the IEEE Signal Processing and Communications Applications Conference (SIU-2007), Eskişehir, Turkey, June 2007.

[58] S. Akat and V. Gazi, “Particle swarm optimization with dynamic neighborhood topology: three neighborhood strategies and preliminary results,” in IEEE Swarm Intelligence Symposium, St. Louis, Missouri, USA, September 2008.

[59] ——, “Decentralized asynchronous particle swarm optimization,” in IEEE Swarm Intelligence Symposium, St. Louis, Missouri, USA, September 2008.

[60] V. Gazi and K. Passino, “Swarm stability and optimization,” Springer, January 2011.

[61] J. Kennedy, “Small worlds and mega minds: effects of neighborhood topology on particle swarm optimization,” in Proceedings of IEEE Congress on Evolutionary Computation (CEC-1999), vol. 3, Washington, DC, USA, 1999, pp. 1931–1938.

[62] “Heuristic and Evolutionary Algorithms Laboratory,” http://dev.heuristiclab.com/trac/hl/core/wiki/Particle Swarm Optimization, 2011.

[63] J. Kennedy and R. Mendes, “Population structure and particle swarm performance,” in Proceedings of IEEE Congress on Evolutionary Computation (CEC-2002), vol. 2, Honolulu, Hawaii, USA, 2002, pp. 1671–1676.

[64] ——, “The fully informed particle swarm: simpler, maybe better,” IEEE Transactions on Evolutionary Computation, vol. 8, pp. 204–210, 2004.

[65] S. Devarakonda, R. Ordonez, S. B. Akat, and V. Gazi, “An empirical study of particle swarm optimization with dynamic neighborhood topology,” in Bio-Inspired Algorithms with Structured Populations, 2013.

[66] J. F. Schutte, J. A. Reinbolt, B. J. Fregly, R. T. Haftka, and A. D. George, “Parallel global optimization with the particle swarm algorithm,” International Journal for Numerical Methods in Engineering, 2003.

[67] G. Venter and J. S. Sobieski, “Parallel particle swarm optimization algorithm accelerated by asynchronous evaluations,” Journal of Aerospace Computing, 2006.

APPENDIX A

MATLAB CODE FOR THE SYNCHRONOUS PSO ALGORITHM WITH DYNAMIC NEIGHBORHOOD FOR THE DEJONGF4 FUNCTION

%Define matrix X with 100 particles and 20 dimensions for each particle
%Range of each dimension is from -1.28 to 1.28 (DejongF4 function)
range_min=-1.28; range_max=1.28;     % DejongF4 function
numParticles=100;
numDimensions=20;
num_initial_conditions=20;
pFactor=1:1:numDimensions;           % dimension weights i in f(x)=sum_i i*x_i^4
pFactor=pFactor'*ones(1,numParticles);
del=[0:1:20];                        % neighborhood radii (delta) to test
Fdel=zeros(length(del),2);
F=zeros(length(del),num_initial_conditions);
for d=1:length(del)
    for count=1:num_initial_conditions
        fVector=zeros(1,num_initial_conditions);
        %Position and velocity initialization
        X=rand(numDimensions,numParticles)*(range_max-range_min)+range_min;
        V=rand(numDimensions,numParticles)*(range_max-range_min)+range_min;
        %Vector P to indicate each particle's best position over time
        P=zeros(numDimensions,numParticles);
        %Vector G to indicate each neighborhood's best position over time
        G=zeros(numDimensions,numParticles);
        Niter=400;
        delta=del(d);
        f_best=inf(1,numParticles);
        chi=0.7298;
        phi_max=2.05;
        g_best=inf(1,numParticles);
        for i=1:Niter
            f=sum(pFactor.*(X.^4));
            %Compute the neighborhood for each particle for a given delta
            N=zeros(numParticles,numParticles);
            for j=1:numParticles
                neighborhood_best=inf;
                if f(j) < f_best(j)
                    P(:,j)=X(:,j);
                    f_best(j)=f(j);
                end;
                for k=1:numParticles
                    distance=norm(X(:,j)-X(:,k));                % Search space
                    %f1=sum([1:1:numDimensions]'.*(X(:,j).^4));
                    %f2=sum([1:1:numDimensions]'.*(X(:,k).^4));
                    %distance=abs(f1-f2);                        % Function space
                    %distance=rand;                              % Random neighborhood
                    if distance <= delta || (j==k)
                        N(j,k)=1;
                        if f(k) < g_best(j)
                            G(:,j)=X(:,k);
                            g_best(j)=f(k);
                        end;
                    end
                end;
            end
            %Now update V and X
            for jj=1:numParticles
                phi1(:,jj)=rand()*phi_max*ones(numDimensions,1);
                phi2(:,jj)=rand()*phi_max*ones(numDimensions,1);
            end;
            V=chi*(V+phi1.*(P-X)+phi2.*(G-X));   %Updated velocity vector
            X=X+V;                               %Updated position vector
            fVector(i)=min(f);
            VVector(:,i)=V(:,10);
            if (mod(i,100)==0)
                g_vector(i/100+1)=mean(g_best)
            end;
        end
        F(d,count)=mean(g_best)
    end
    %Mean and standard deviation over the initial conditions for this delta
    Fdel(d,:)=[mean(F(d,:)) std(F(d,:))];
end;

APPENDIX B

MATLAB CODE FOR THE ASYNCHRONOUS PSO ALGORITHM WITH DYNAMIC NEIGHBORHOOD FOR THE DEJONGF4 FUNCTION

%Define matrix X with 100 particles and 20 dimensions for each particle
%Range of each dimension is from -1.28 to 1.28 (DejongF4 function)
range_min=-1.28; range_max=1.28;
numParticles=100;
numDimensions=20;
num_initial_conditions=20;
pFactor=1:1:numDimensions;           % dimension weights i in f(x)=sum_i i*x_i^4
pFactor=pFactor'*ones(1,numParticles);
del=[0:1:20];                        % neighborhood radii (delta) to test
Fdel=zeros(length(del),2);
F=zeros(length(del),num_initial_conditions);
for d=1:length(del)
    for count=1:num_initial_conditions
        fVector=zeros(1,num_initial_conditions);
        %Position and velocity initialization
        X=rand(numDimensions,numParticles)*(range_max-range_min)+range_min;
        V=rand(numDimensions,numParticles)*(range_max-range_min)+range_min;
        %Vector P to indicate each particle's best position over time
        P=zeros(numDimensions,numParticles);
        %Vector G to indicate each neighborhood's best position over time
        G=zeros(numDimensions,numParticles);
        Niter=400;
        delta=del(d);
        f_best=inf(1,numParticles);
        chi=0.7298;
        phi_max=2.05;
        g_best=inf(1,numParticles);
        %Asynchronous update periods: particle j is updated only when mod(i,T(j))==0
        T=randi(ceil(Niter/10),1,numParticles);
        for i=1:Niter
            f=sum(pFactor.*(X.^4));
            %Compute the neighborhood for each particle for a given delta
            N=zeros(numParticles,numParticles);
            for j=1:numParticles
                if (mod(i,T(j))==0)
                    neighborhood_best=inf;
                    if f(j) < f_best(j)
                        P(:,j)=X(:,j);
                        f_best(j)=f(j);
                    end;
                    for k=1:numParticles
                        distance=norm(X(:,j)-X(:,k));                % Search space
                        %f1=sum([1:1:numDimensions]'.*(X(:,j).^4));
                        %f2=sum([1:1:numDimensions]'.*(X(:,k).^4));
                        %distance=abs(f1-f2);                        % Function space
                        %distance=rand;                              % Random neighborhood
                        if distance <= delta || (j==k)
                            N(j,k)=1;
                            if f(k) < g_best(j)
                                G(:,j)=X(:,k);
                                g_best(j)=f(k);
                            end;
                        end
                    end;
                end
            end
            %Now update V and X (only for the particles whose turn it is)
            for jj=1:numParticles
                if (mod(i,T(jj))==0)
                    phi1(:,jj)=rand()*phi_max*ones(numDimensions,1);
                    phi2(:,jj)=rand()*phi_max*ones(numDimensions,1);
                    V(:,jj)=chi*(V(:,jj)+phi1(:,jj).*(P(:,jj)-X(:,jj))+phi2(:,jj).*(G(:,jj)-X(:,jj))); %Updated velocity vector
                    X(:,jj)=X(:,jj)+V(:,jj);                                                           %Updated position vector
                end
            end;
            fVector(i)=min(f);
            VVector(:,i)=V(:,10);
            if (mod(i,100)==0)
                g_vector(i/100+1)=mean(g_best)
            end;
        end
        F(d,count)=mean(g_best)
    end
    %Mean and standard deviation over the initial conditions for this delta
    Fdel(d,:)=[mean(F(d,:)) std(F(d,:))];
end;

APPENDIX C

MATLAB CODE FOR THE SYNCHRONOUS PSO ALGORITHM WITH THE NUMBER OF PARTICLES AS A PARAMETER FOR THE DEJONGF4 FUNCTION

%Define matrix X with numParticles particles and 20 dimensions for each particle
%Range of each dimension is from -20 to 20
clear all;
range_min=-20; range_max=20;         % DejongF4 function
numDimensions=20;
num_initial_conditions=20;
numP=50:50:300;                      % swarm sizes to test
delta=200;                           % large compared to the initialization range
Fdel=zeros(length(numP),2);
F=zeros(length(numP),num_initial_conditions);
for d=1:length(numP)
    for count=1:num_initial_conditions
        numParticles=numP(d);
        fVector=zeros(1,num_initial_conditions);
        %Position and velocity initialization
        X=rand(numDimensions,numParticles)*(range_max-range_min)+range_min;
        V=rand(numDimensions,numParticles)*(range_max-range_min)+range_min;
        %Vector P to indicate each particle's best position over time
        P=zeros(numDimensions,numParticles);
        %Vector G to indicate each neighborhood's best position over time
        G=zeros(numDimensions,numParticles);
        phi1=zeros(numDimensions,numParticles);
        phi2=zeros(numDimensions,numParticles);
        Niter=500;
        pFactor=1:1:numDimensions;   % dimension weights i in f(x)=sum_i i*x_i^4
        pFactor=pFactor'*ones(1,numParticles);
        f_best=inf(1,numParticles);
        chi=0.7298;
        phi_max=2.05;
        g_best=inf(1,numParticles);
        for i=1:Niter
            f=sum(pFactor.*(X.^4));
            %Compute the neighborhood for each particle for a given delta
            N=zeros(numParticles,numParticles);
            for j=1:numParticles
                neighborhood_best=inf;
                if f(j) < f_best(j)
                    P(:,j)=X(:,j);
                    f_best(j)=f(j);
                end;
                for k=1:numParticles
                    distance=norm(X(:,j)-X(:,k));
                    if distance <= delta || (j==k)
                        N(j,k)=1;
                        if f(k) < g_best(j)
                            G(:,j)=X(:,k);
                            g_best(j)=f(k);
                        end;
                    end
                end;
            end
            %Now update V and X
            for jj=1:numParticles
                phi1(:,jj)=rand()*phi_max*ones(numDimensions,1);
                phi2(:,jj)=rand()*phi_max*ones(numDimensions,1);
            end;
            V=chi*(V+phi1.*(P-X)+phi2.*(G-X));   %Updated velocity vector
            X=X+V;                               %Updated position vector
            fVector(i)=min(f);
            VVector(:,i)=V(:,10);
            if (mod(i,100)==0)
                g_vector(i/100+1)=mean(g_best)
            end;
        end
        F(d,count)=mean(g_best)
    end
    %Mean and standard deviation over the initial conditions for this swarm size
    Fdel(d,:)=[mean(F(d,:)) std(F(d,:))];
end;

APPENDIX D

MATLAB CODE FOR THE ASYNCHRONOUS PSO ALGORITHM WITH THE NUMBER OF PARTICLES AS A PARAMETER FOR THE DEJONGF4 FUNCTION

%Define matrix X with numParticles particles and 20 dimensions for each particle
%Range of each dimension is from -20 to 20
range_min=-20; range_max=20;         % DejongF4 function
numDimensions=20;
num_initial_conditions=20;
numP=50:50:500;                      % swarm sizes to test
delta=200;                           % large compared to the initialization range
Fdel=zeros(length(numP),2);
F=zeros(length(numP),num_initial_conditions);
for d=1:length(numP)
    for count=1:num_initial_conditions
        numParticles=numP(d);
        fVector=zeros(1,num_initial_conditions);
        %Position and velocity initialization
        X=rand(numDimensions,numParticles)*(range_max-range_min)+range_min;
        V=rand(numDimensions,numParticles)*(range_max-range_min)+range_min;
        %Vector P to indicate each particle's best position over time
        P=zeros(numDimensions,numParticles);
        %Vector G to indicate each neighborhood's best position over time
        G=zeros(numDimensions,numParticles);
        phi1=zeros(numDimensions,numParticles);
        phi2=zeros(numDimensions,numParticles);
        Niter=500;
        pFactor=1:1:numDimensions;   % dimension weights i in f(x)=sum_i i*x_i^4
        pFactor=pFactor'*ones(1,numParticles);
        f_best=inf(1,numParticles);
        chi=0.7298;
        phi_max=2.05;
        g_best=inf(1,numParticles);
        %Asynchronous update periods: particle j is updated only when mod(i,T(j))==0
        T=randi(ceil(Niter/10),1,numParticles);
        for i=1:Niter
            f=sum(pFactor.*(X.^4));
            %Compute the neighborhood for each particle for a given delta
            N=zeros(numParticles,numParticles);
            for j=1:numParticles
                if (mod(i,T(j))==0)
                    neighborhood_best=inf;
                    if f(j) < f_best(j)
                        P(:,j)=X(:,j);
                        f_best(j)=f(j);
                    end;
                    for k=1:numParticles
                        distance=norm(X(:,j)-X(:,k));
                        if distance <= delta || (j==k)
                            N(j,k)=1;
                            if f(k) < g_best(j)
                                G(:,j)=X(:,k);
                                g_best(j)=f(k);
                            end;
                        end
                    end;
                end
            end
            %Now update V and X (only for the particles whose turn it is)
            for jj=1:numParticles
                if (mod(i,T(jj))==0)
                    phi1(:,jj)=rand()*phi_max*ones(numDimensions,1);
                    phi2(:,jj)=rand()*phi_max*ones(numDimensions,1);
                    V(:,jj)=chi*(V(:,jj)+phi1(:,jj).*(P(:,jj)-X(:,jj))+phi2(:,jj).*(G(:,jj)-X(:,jj))); %Updated velocity vector
                    X(:,jj)=X(:,jj)+V(:,jj);                                                           %Updated position vector
                end
            end;
            fVector(i)=min(f);
            VVector(:,i)=V(:,10);
            if (mod(i,100)==0)
                g_vector(i/100+1)=mean(g_best)
            end;
        end
        F(d,count)=mean(g_best)
    end
    %Mean and standard deviation over the initial conditions for this swarm size
    Fdel(d,:)=[mean(F(d,:)) std(F(d,:))];
end;
