2011 International Conference on Management and Artificial Intelligence
IPEDR vol. 6 (2011) © (2011) IACSIT Press, Bali, Indonesia

Particle Swarm Optimization for Constrained and Multiobjective Problems: A Brief Review

Nor Azlina Ab. Aziz, Faculty of Engineering and Technology, Multimedia University, Malaysia ([email protected])
Mohamad Yusoff Alias, Faculty of Engineering, Multimedia University, Malaysia ([email protected])
Ammar W. Mohemmed, Knowledge Engineering & Discovery Research Institute, Auckland University of Technology, New Zealand ([email protected])
Kamarulzaman Ab. Aziz, Faculty of Management, Multimedia University, Malaysia ([email protected])

Abstract— Particle swarm optimization (PSO) is an optimization method that belongs to the swarm intelligence family. It was initially introduced as a continuous-problem optimization tool. It has since evolved to be applied to more complex multiobjective and constrained problems. This paper presents a systematic literature review of PSO for constrained and multiobjective optimization problems.

Keywords- Constrained optimization problems; multiobjective optimization problems; particle swarm optimization

I. INTRODUCTION

One of the most successful optimization algorithms is particle swarm optimization (PSO). PSO's main attractive feature is its simple and straightforward implementation. PSO has been applied in many fields, such as human tremor analysis for biomedical engineering, electric power and voltage management, and machine scheduling [1].

The original PSO was proposed for the optimization of single-objective continuous problems. However, the concept of PSO has been extended to handle other optimization problems, such as binary, discrete, combinatorial, constrained, and multiobjective optimization. This paper reviews some of the work on constrained and multiobjective PSO in Sections IV and V. Before that, the concept of PSO is discussed in the next section, followed by the concepts of multiobjective and constrained problems. The paper is then concluded in Section VI.

II. PARTICLE SWARM OPTIMIZATION

In 1995, James Kennedy and Russell Eberhart introduced particle swarm optimization (PSO). It is a swarm-based algorithm that mimics the social behaviour of organisms such as birds and fish. The success of an individual in these communities is affected not only by its own effort but also by the information shared by its surrounding neighbours. This social behaviour is imitated in PSO by a swarm of agents called particles [2]. The interaction of the particles with their neighbours is the key to the effectiveness of PSO. Particle neighbourhoods in PSO have been studied from two perspectives: the global neighbourhood (gBest) and the local neighbourhood (lBest). In gBest the particles are fully connected, so the search is directed by the best particle of the swarm, while in lBest the particles are connected only to their neighbours and their search is guided by the neighbourhood best.

A particle in PSO has a position (Xi) and a velocity (Vi). The position represents a solution suggested by the particle, while the velocity is the rate of change of the next position with respect to the current position. Initially these two values are randomly initialised. In subsequent iterations the search is conducted by updating these values using the following equations:

    Vi = w × Vi + c1 × rand1() × (Pi − Xi) + c2 × rand2() × (Pg − Xi)    (1)
    Xi = Xi + Vi                                                          (2)

where i is the particle's index (i = 1,...,N; N: number of particles in the swarm).

As can be observed in Eq. 1, the velocity is influenced by Pi, the best position found so far by the particle, and Pg, the best position found by the neighbouring particles; Pg can be either the gBest or the lBest position. The value of Vi is clamped to ±Vmax to prevent explosion. If Vmax is too large the exploration range becomes too wide, whereas if it is too small the particles favour local search [3]. c1 and c2 are learning factors that control the influence of the "best" positions Pi and Pg; typically both are set to 2 [1]. rand1() and rand2() are two independent random numbers in the range [0.0, 1.0]; the randomness provides energy to the particles. w is known as the inertia weight, a term added to improve PSO's performance. The inertia weight controls the particles' momentum so that they can stop exploring the wide search space and switch to fine-tuning when a good area is found [4].

The particle position is updated using Eq. 2, where the velocity is added to the previous position. Thus a particle's next search is launched from its previous position, and the new search is influenced by its past search [5].

The quality of a solution is evaluated by a fitness function, which is problem dependent. If the current solution is better than Pi or Pg or both, the corresponding best value is replaced by the current solution. This update process continues until a stopping criterion is met, usually when either the maximum number of iterations is reached or the target solution is attained. When the stopping criterion is satisfied, the best particle found so far is taken as the (near-)optimal solution. The PSO algorithm is presented in Fig. 1.

    Initialize particle population;
    Do {
        Calculate the fitness value of each particle using the fitness function;
        Update pid if the current fitness value is better than pid;
        Determine pgd: choose the particle position with the best fitness
            value of all the neighbors as pgd;
        For each particle {
            Calculate the particle velocity according to (1);
            Update the particle position according to (2);
        }
    } While maximum iteration or ideal fitness is not attained;

Fig. 1: PSO algorithm

III. MULTIOBJECTIVE AND CONSTRAINED OPTIMIZATION PROBLEMS

Multiobjective optimization is a problem with many objectives to be fulfilled, and most of the time these objectives are in conflict with each other, whereas constrained optimization is an optimization problem with one or more constraints to be obeyed. These types of problems are commonly faced in everyday life, for example in this situation:

    Mr. ABC wants to buy a new computer. He wants a computer with the best specifications but with the least cost.

The problem faced by Mr. ABC is a multiobjective problem with two objectives: the first is to get a computer with the best specifications, while the second is to spend the least amount of money. The following is a similar problem in constrained optimization form:

    Ms. XYZ also wants to buy a computer, but she has a limited budget. She is aiming to get the best computer possible within her budget.

Ms. XYZ is facing a constrained optimization problem, where she needs to buy a good computer subject to a limited budget.

Mathematically these optimization problems can be presented as follows:

    optimize:    f_i(X),      i = 1, ..., l
    subject to:  c_j(X) ≤ 0,  j = 1, ..., p          (3)
                 h_k(X) = 0,  k = p+1, ..., q
    X = (x_1, x_2, ..., x_n)^T,  X ∈ [x_min, x_max]

f_i(X) are the l objectives to be optimized, while c_j(X) and h_k(X) are the inequality and equality constraints enforced on the problem, respectively. X is the vector of search variables, bounded in the search space between x_min and x_max. A constrained optimization problem may have only one objective, whereas a multiobjective optimization problem may be limited only by its search space.

IV. PSO FOR CONSTRAINED OPTIMIZATION PROBLEMS

In their work, [6] categorised the evolutionary-algorithm optimization methods for constrained problems into four types:

A) Preserve feasibility of solutions
In this approach only feasible solutions are generated, so feasibility is maintained throughout. The search process is also restricted to the feasible area only.

B) Penalty functions
In the penalty-function method, the constrained optimization problem is solved with an unconstrained optimization method by incorporating the constraints into the objective function, thus transforming it into an unconstrained problem.

C) Differentiate the feasible and infeasible solutions
Common methods in this group include repairing infeasible solutions so that a feasible solution is produced, and preferring feasible solutions over infeasible ones.

D) Hybrid methods
A hybrid method is a combination of at least two of the three methods described above.

Among the early research works on PSO for constrained optimization problems is [7], where the simple idea of feasible-solution preservation is used. Two points differentiate this work from the basic PSO. The first is that the particles are initialised with only feasible solutions; this step speeds up the search process. The second is the update of the best solutions (gBest and pBest): only feasible solutions are selected as the best values. This approach is simple and able to handle a wide variety of constrained problems. However, for problems with a small feasible region the initialisation process tends to consume a lot of time, thus delaying the search [8].

The penalty-function technique is commonly used in evolutionary algorithms for constrained optimization problems [9]. The penalty function combines the constraints with the objective function by adding a penalty value to infeasible solutions (in minimization problems). In [10] the penalty-function methods are grouped into three classes:

(a) Static penalty functions
This is the simplest approach to penalty functions. It assigns a fixed penalty value once a constraint is violated, no matter how large or small the violation is. A better method is to penalise the constraint violation based on how far the solution is from the feasible region (distance-based penalty). This can be done by multiplying fixed penalty coefficients by the constraint functions, or by dividing the violation into several levels, each with its own penalty coefficient.

... the constrained optimization problem into a min-max problem by transforming the primal problem into its dual problem; e.g., for a minimization primal problem, the dual problem is maximization.
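As a concrete illustration, the velocity and position updates of Eqs. 1 and 2, wrapped in the loop of Fig. 1, can be sketched in Python. This is a minimal gBest sketch, not the authors' implementation; the sphere fitness function and the parameter values (w, the Vmax fraction, swarm size, iteration count) are illustrative assumptions.

```python
import random

def pso(fitness, dim, n_particles=30, iters=200,
        w=0.7, c1=2.0, c2=2.0, xmin=-5.0, xmax=5.0):
    """Minimise `fitness` with a gBest PSO (Eqs. 1 and 2, Fig. 1)."""
    vmax = 0.5 * (xmax - xmin)                       # velocities clamped to +/-Vmax
    # Positions and velocities are randomly initialised
    X = [[random.uniform(xmin, xmax) for _ in range(dim)]
         for _ in range(n_particles)]
    V = [[random.uniform(-vmax, vmax) for _ in range(dim)]
         for _ in range(n_particles)]
    P = [x[:] for x in X]                            # personal bests Pi
    pbest_f = [fitness(x) for x in X]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    G, gbest_f = P[g][:], pbest_f[g]                 # swarm best Pg (gBest)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Eq. 1: inertia + cognitive (Pi) + social (Pg) components
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (G[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))  # clamp to +/-Vmax
                X[i][d] += V[i][d]                   # Eq. 2: move from old position
            f = fitness(X[i])
            if f < pbest_f[i]:                       # replace Pi if improved
                pbest_f[i], P[i] = f, X[i][:]
                if f < gbest_f:                      # replace Pg if improved
                    gbest_f, G = f, X[i][:]
    return G, gbest_f

sphere = lambda x: sum(v * v for v in x)             # assumed example fitness
```

Calling `pso(sphere, dim=3)` runs the Fig. 1 loop and returns the best position found together with its fitness; the stopping criterion here is simply the iteration budget.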
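The feasibility-preserving scheme of [7] can likewise be sketched: particles are initialised by resampling until feasible, and a candidate replaces a best position (pBest or gBest) only when it is both feasible and better. The function names and the resampling bound are hypothetical, assuming inequality constraints in the c_j(X) ≤ 0 form of Eq. 3.

```python
import random

def is_feasible(x, constraints):
    """All inequality constraints c_j(x) <= 0 must hold (cf. Eq. 3)."""
    return all(c(x) <= 0 for c in constraints)

def init_feasible(dim, constraints, xmin, xmax, max_tries=10000):
    """Resample until a feasible particle is found. This is the step that
    becomes slow when the feasible region is small [8]."""
    for _ in range(max_tries):
        x = [random.uniform(xmin, xmax) for _ in range(dim)]
        if is_feasible(x, constraints):
            return x
    raise RuntimeError("could not sample a feasible start")

def update_best(x, fx, best, best_f, constraints):
    """Accept x as the new best only if it is feasible AND improves best_f."""
    if is_feasible(x, constraints) and fx < best_f:
        return x[:], fx
    return best, best_f
```

With, say, a single constraint `lambda x: x[0] + x[1] - 1.0`, an infeasible candidate is rejected even when its raw fitness is better, which is exactly how this scheme keeps gBest and pBest inside the feasible region.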
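The contrast between a fixed penalty and a distance-based penalty from class (a) can be shown in a few lines; the coefficient values are illustrative assumptions, not taken from [10].

```python
def static_penalty(x, f, constraints, penalty=1e6):
    """Fixed penalty: add `penalty` once per violated constraint,
    regardless of how large the violation is."""
    violated = sum(1 for c in constraints if c(x) > 0)
    return f(x) + penalty * violated

def distance_penalty(x, f, constraints, coeff=1e3):
    """Distance-based penalty: scale each violation c_j(x) > 0 by a fixed
    coefficient, so solutions far from the feasible region cost more."""
    violation = sum(max(0.0, c(x)) for c in constraints)
    return f(x) + coeff * violation
```

Either penalised objective can be passed as the fitness function of an unconstrained PSO, which is the transformation described for method B above.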
