A Double-Multiplicative Dynamic Penalty Approach for Constrained Evolutionary Optimization
Struct Multidisc Optim (2008) 35:431–445
DOI 10.1007/s00158-007-0143-1

RESEARCH PAPER

Simone Puzzi · Alberto Carpinteri

Received: 13 July 2006 / Revised: 5 February 2007 / Accepted: 25 March 2007 / Published online: 2 June 2007
© Springer-Verlag 2007

Abstract  The present paper proposes a double-multiplicative penalty strategy for constrained optimization by means of genetic algorithms (GAs). The aim of this research is to provide a simple and efficient way of handling constrained optimization problems in the GA framework without the need for tuning the values of penalty factors for any given optimization problem. After a short review of the most popular and effective exterior penalty formulations, the proposed penalty strategy is presented and tested on five different benchmark problems. The obtained results are compared with the best solutions provided in the literature, showing the effectiveness of the proposed approach.

Keywords  Genetic algorithms · Exterior penalty function · Constraint handling · Constrained optimization

S. Puzzi (*) · A. Carpinteri
Department of Structural Engineering and Geotechnics, Politecnico di Torino,
Corso Duca degli Abruzzi, 24, 10129 Turin, Italy
e-mail: [email protected]
A. Carpinteri, e-mail: [email protected]

1 Introduction

Over the last decades, interest in search algorithms that rely on analogies to natural processes, for instance simulated annealing and classifier systems, has been continuously growing. Among these methods, one important class is that based on mimicking evolution and heredity. The most popular and widespread techniques in this group include genetic algorithms (GAs; Holland 1975; Goldberg 1989), evolutionary programming (Fogel 1966), evolution strategies (Rechenberg 1973; Schwefel 1981), and biological evolution methods (Arutyunyan and Drozdov 1988; Azegami 1990; Mattheck and Burkhardt 1990). The present trend is to downplay the differences among these paradigms and to refer, in generic terms, to evolutionary algorithms (EAs) or evolution programs (EPs) when talking about any of them (Michalewicz 1996).

The reason behind the success of these approaches is that traditional nonlinear programming methods are local search techniques and are, thus, prone to converge toward local optima. This is a serious drawback in many practical structural optimization problems, where the design space may be nonconvex or even disjoint (Hajela 1990).

In particular, GAs are very popular due to their ability to handle design variables of discrete type naturally. GAs have been successfully applied to optimization problems in different fields (Fogel 1995; Gen and Cheng 1997). Their advantages are related to their speed, compared to enumerative techniques, and to their freedom from the drawbacks that afflict gradient-based search algorithms in nonconvex and multi-modal spaces.

GAs are a particular class of EAs that use techniques inspired by evolutionary biology, such as inheritance, mutation, selection, and reproduction (in this context, also called crossover or recombination). Although the analogy with biology is far from strict, GAs mimic and exploit the genetic dynamics that underlie natural evolution to search for optimal solutions of general optimization problems. To explain this, let us consider an optimization problem in which a generic function F(x), with x ∈ R^n (but possibly also x ∈ N^n, x ∈ Z^n, or x ∈ C^n), has to be maximized. The variable x may be subject to any kind of constraint; thus, the search space may be bounded. To deal with this problem, a GA (as any other EP) must have the following components:

– A genetic representation (encoding) for potential solutions, x, to the problem
– A criterion for creating an initial population of potential solutions (both random and deterministic criteria are possible)
– A fitness function, F(x), that plays the role of the environment; by rating the candidate solutions in terms of their fitness, higher chances of survival can be assigned to better solutions, so that their features are more likely to be transmitted to the next generation
– Evolutionary operators that alter the population (usually crossover and mutation)
– Parameters for the algorithm itself (probabilities of applying the genetic operators, population size, etc.)

The random aspects of the GA make the search algorithm explore the whole solution space without focusing on small portions of it. Obviously, the number of individuals in the population, the number of generations, and the parameters that control the "genetic" operators, together with the implementation of the procedures, greatly influence the quality of the obtained results.

In GA-based optimization, the main problem usually concerns the treatment of constraints; GAs, as well as other EPs, do not directly take constraints into account. That is, the regular search operators, mutation and recombination, are "blind" to constraints. Consequently, even though the parents satisfy some constraints, they might produce offspring that violate them. Technically, this means that EAs perform an unconstrained search, as noted by Coello (2002). It is therefore necessary to find ways of incorporating the constraints (which exist in almost any real-world application) into the fitness function. The most common way of incorporating constraints into a GA has been penalty functions, but several approaches have been proposed:

1. Penalty functions
2. Special representations and operators
3. Repair algorithms
4. Separation of objectives and constraints

The penalty function approach is the most widespread, and attention will be focused on it in the next section.

The use of special representations and operators is aimed at simplifying the shape of the search space in certain problems (usually the most difficult ones, characterized by very small feasibility domains) for which the classical representation scheme (e.g., the binary representation used in traditional GAs) might not work satisfactorily. The drawback of this technique is that it is necessary to design special genetic operators that work in a way analogous to the traditional operators used with a binary representation. The main drawback of special representations and operators is that their generalization to other problems, even ones similar to the problem tackled, might be very difficult.

Repair algorithms have the same drawback; in several optimization problems, for instance the travelling salesman problem or the knapsack problem, it is relatively easy to "repair" an infeasible individual (i.e., to make an infeasible individual feasible). Such a repaired version can be used either for evaluation only, or it can also replace (with some probability) the original individual in the population. Obviously, this approach is problem dependent, as a specific repair algorithm has to be designed for each particular problem. In addition, the "repair" operation might, in some cases, be even harder than the optimization itself, and it may harm the evolutionary process by introducing a bias into the search (Smith and Coit 1997). Nevertheless, repair algorithms have been used successfully and are always a good choice when the "repair" may be done easily (Coello 2002).

Probably the most promising trend is the last approach, which considers constraints and objectives separately. The use of multi-objective optimization techniques seems particularly encouraging in terms of the results obtained. Nevertheless, Michalewicz and Schoenauer (1996) noted that, although many sophisticated penalty-based methods have been reported as promising tools for constrained evolutionary searches, the traditional static penalty function method is very often used, as it is relatively reliable and robust due to the simplicity of the penalty function. A further reason for the success of this approach is that the technique can be applied equally, without any modification, to other global optimization algorithms that do not use derivatives, such as evolution strategies (Rechenberg 1973; Schwefel 1981), simulated annealing (Kirkpatrick et al. 1983), particle swarm optimization (Eberhart and Kennedy 1995), or DIRECT (Jones et al. 1993), just to mention a few.

In this paper, we propose a new double-multiplicative penalty strategy to deal with constrained optimization. As will be shown, coupling this approach with a real-coded GA provides very good results in several benchmark tests.

2 Traditional penalty function strategies

The use of penalty functions is the most common way of treating constrained optimization problems. The idea behind this method is to transform a constrained optimization problem into an unconstrained one by subtracting (or adding) a certain value from (or to) the objective function, based on the amount of constraint violation present in a given solution.

In classical optimization, two main kinds of penalty functions have been considered:

… premature convergence to local optima. On the other hand, if the penalty is too low, a great deal of the search time is spent exploring the infeasible region, because the penalty becomes too small compared to the objective function (Smith and Coit 1997). Thus, finding a suitable value for the penalty is a difficult task, which is very often problem …
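Editor's note: the list of GA components given in the Introduction (representation, initial population, fitness function, operators, parameters) can be made concrete with a short sketch. The following minimal real-coded GA is purely illustrative: the names (`run_ga`, `bounds`) and the particular operator choices (binary tournament selection, blend crossover, uniform mutation, elitism) are common textbook options assumed here, not the specific operators used by the authors.

```python
import random

def run_ga(fitness, bounds, pop_size=40, generations=100,
           p_crossover=0.9, p_mutation=0.1, seed=0):
    """Minimal real-coded GA that maximizes `fitness` over the box `bounds`."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    n = len(bounds)

    # Initial population: random real-coded candidate solutions.
    pop = [[rng.uniform(lo[i], hi[i]) for i in range(n)]
           for _ in range(pop_size)]

    for _ in range(generations):
        scores = [fitness(x) for x in pop]

        # Elitism: carry the current best individual over unchanged.
        elite = max(range(pop_size), key=lambda i: scores[i])
        new_pop = [list(pop[elite])]

        while len(new_pop) < pop_size:
            # Binary tournament selection: the fitter of two random picks.
            def pick():
                a, b = rng.randrange(pop_size), rng.randrange(pop_size)
                return pop[a] if scores[a] >= scores[b] else pop[b]

            p1, p2 = pick(), pick()

            # Blend (arithmetic) crossover for real-coded chromosomes.
            if rng.random() < p_crossover:
                w = rng.random()
                child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            else:
                child = list(p1)

            # Uniform mutation: reset one randomly chosen gene in its bounds.
            if rng.random() < p_mutation:
                i = rng.randrange(n)
                child[i] = rng.uniform(lo[i], hi[i])

            new_pop.append(child)
        pop = new_pop

    return max(pop, key=fitness)

# Toy run: maximize -(x-1)^2 - (y+2)^2, whose unconstrained optimum is (1, -2).
best = run_ga(lambda x: -(x[0] - 1) ** 2 - (x[1] + 2) ** 2,
              bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

Note that this sketch performs an unconstrained search, exactly as the text observes: nothing in the loop is aware of constraints, which is why the constraint-handling strategies enumerated above are needed.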
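Editor's note: the "repair" of an infeasible knapsack individual mentioned in the text can be sketched as follows. The greedy rule used here (drop the selected item with the worst value-to-weight ratio until the capacity constraint holds) is one common heuristic chosen for illustration, not the specific procedure of the cited works; positive item weights are assumed.

```python
def repair_knapsack(bits, weights, values, capacity):
    """Repair a 0/1 knapsack chromosome: while the selection is overweight,
    drop the selected item with the worst value-to-weight ratio."""
    bits = list(bits)
    selected = [i for i, b in enumerate(bits) if b]
    total = sum(weights[i] for i in selected)
    while total > capacity:
        # Greedy choice: sacrifice the least "profitable" item first.
        worst = min(selected, key=lambda i: values[i] / weights[i])
        bits[worst] = 0
        selected.remove(worst)
        total -= weights[worst]
    return bits

# Items with weights [3, 4, 5] and values [30, 8, 50], capacity 8:
# selecting all three weighs 12, so the repair drops item 1 (ratio 2.0),
# leaving a feasible selection of items 0 and 2 (weight 8).
w, v = [3, 4, 5], [30, 8, 50]
repaired = repair_knapsack([1, 1, 1], w, v, capacity=8)
```

The repaired individual can then be used for evaluation only, or it can replace the original in the population, as discussed in the text; either way the repair rule injects a problem-specific bias into the search.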
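Editor's note: the transformation described at the start of Section 2 — turning a constrained problem into an unconstrained one by subtracting a value proportional to the constraint violation — can be sketched as a static exterior penalty. The quadratic violation term and the fixed factor `r` below are illustrative assumptions (one classical choice), not the authors' double-multiplicative strategy, which is introduced later in the paper.

```python
def penalized_fitness(f, constraints, r=1000.0):
    """Static exterior penalty for maximization: subtract r * violation^2
    for each inequality constraint expressed as g(x) <= 0."""
    def fitness(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) - r * violation
    return fitness

# Maximize f(x) = x subject to x <= 2 (written as g(x) = x - 2 <= 0).
fit = penalized_fitness(lambda x: x, [lambda x: x - 2])
# Feasible points are unpenalized; infeasible ones pay r * (x - 2)^2.
```

The fixed factor `r` makes the tradeoff discussed in the text visible: too large an `r` walls off the infeasible region and can cause premature convergence, while too small an `r` lets the search waste time on infeasible solutions.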