
Adaptation in Evolutionary Computation: A Survey

Robert Hinterding, Zbigniew Michalewicz, and Agoston E. Eiben

R. Hinterding is with the Department of Computer and Mathematical Sciences, Victoria University of Technology, PO Box 14428 MMC, Melbourne 3000, Australia. email: [email protected]
Z. Michalewicz is with the Department of Computer Science, University of North Carolina, Charlotte, NC 28223, USA, and with the Institute of Computer Science, Polish Academy of Sciences, ul. Ordona 21, 01-237 Warsaw, Poland. email: [email protected]
A. E. Eiben is with the Department of Computer Science, Leiden University, Leiden, The Netherlands. email: [email protected]

Abstract: Adaptation of parameters and operators is one of the most important and promising areas of research in evolutionary computation; it tunes the algorithm to the problem while solving the problem. In this paper we develop a classification of adaptation on the basis of the mechanisms used and the level at which adaptation operates within the evolutionary algorithm. The classification covers all forms of adaptation in evolutionary computation and suggests further research.

I. Introduction

As evolutionary algorithms (EAs) implement the idea of evolution, and as evolution itself must have evolved to reach its current state of sophistication, it is natural to expect adaptation to be used not only for finding solutions to a problem, but also for tuning the algorithm to the particular problem.

In EAs we not only need to choose the algorithm, representation, and operators for the problem, but we also need to choose parameter values and operator probabilities for the evolutionary algorithm so that it will find the solution and, just as importantly, find it efficiently. Finding appropriate parameter values and operator probabilities is a time-consuming task, and considerable effort has gone into automating this process.

Researchers have used various ways of finding good values for the strategy parameters, as these can affect the performance of the algorithm in a significant way. Many researchers experimented with problems from a particular domain, tuning the strategy parameters on the basis of such experimentation (tuning "by hand"). Later, they reported the results of applying a particular EA to a particular problem, stating:

  For these experiments, we have used the following parameters: population size = 80, probability of crossover = 0.7, etc.

without much justification of the choice made.

Note that (a run of) an EA is an intrinsically dynamic, adaptive process. The use of rigid, i.e. constant, parameters is thus in contrast to the general evolutionary spirit. Besides, there are also technical drawbacks to the traditional approach:
- the users' mistakes in setting the parameters can be sources of errors and/or sub-optimal performance;
- parameter tuning costs a lot of time;
- the optimal parameter value may vary during the evolution.

Therefore it is a natural idea to try to modify the values of the strategy parameters (by strategy parameters we mean the parameters of the EA, not those of the problem) during the run of the algorithm. This can be done by using some (possibly heuristic) rule, by taking feedback from the current state of the search, or by employing some self-adaptive mechanism. Note that these changes may affect a single component of a chromosome, the whole chromosome (individual), or even the whole population. Clearly, by changing these values while the algorithm is searching for the solution of the problem, further efficiencies can be gained.

Self-adaptation, based on the evolution of evolution, was developed in Evolution Strategies to adapt mutation parameters to suit the problem during the run. The method was very successful in improving the efficiency of the algorithm for some problems. This technique has been extended to other areas of evolutionary computation, but fixed representations, operators, and control parameters are still the norm.
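To make the mechanism concrete, the following Python sketch shows one common form of self-adaptive mutation for real-valued individuals: each individual carries its own step size, which is perturbed by a log-normal rule before being used to mutate the object variables. The function name and the particular constants are illustrative assumptions, not the exact scheme of any work cited in this paper.

import math
import random

def self_adaptive_mutation(x, sigma, tau=None):
    # Mutate the step size first (log-normal rule), then use the new
    # step size to mutate the object variables; both are inherited.
    n = len(x)
    if tau is None:
        tau = 1.0 / math.sqrt(n)   # a common default learning rate
    new_sigma = sigma * math.exp(tau * random.gauss(0.0, 1.0))
    new_x = [xi + new_sigma * random.gauss(0.0, 1.0) for xi in x]
    return new_x, new_sigma

# Example: one mutation of a 5-dimensional individual and its step size.
child, child_sigma = self_adaptive_mutation([0.0] * 5, 0.1)

Because the step size is part of the genotype, selection implicitly favours individuals whose mutation parameters produce good offspring, which is the sense in which the parameter "adapts itself" during the run.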
Other research areas based on the inclusion of adapting mechanisms are the following.
- Representation of individuals (as proposed by Shaefer [32]; the Dynamic Parameter Encoding technique of Schraudolph & Belew [29] and the messy genetic algorithms of Goldberg et al. [16] also fall into this category).
- Operators. It is clear that different operators play different roles at different stages of the evolutionary process, so the operators should adapt (e.g., adaptive crossover, Schaffer & Morishima [27], Spears [34]). This is true especially for time-varying fitness landscapes.
- Control parameters. There have been various experiments aimed at adaptive probabilities of operators [7], [22], [35], [36]. However, much more remains to be done.

In this paper we develop a comprehensive classification of adaptation and give examples of its use. The classification is based on the mechanism of adaptation and on the level (in the EA) at which it occurs. Such a classification can be useful to the evolutionary computation community, since many researchers use the terms "adaptation" or "self-adaptation" in an arbitrary way; in a few instances some authors (including ourselves!) used the term "self-adaptation" where there was a simple (deterministic and heuristic) rule for changing some parameter of the process.

The paper is organised as follows: in the next section we develop a classification of adaptation in evolutionary algorithms (EAs). Section III looks at the types of adaptation, whereas Section IV looks at the levels of adaptation. Section V discusses the combination of types and levels of adaptation, and Section VI presents the discussion and conclusion.

                            TABLE I
                Classification of adaptation in EAs

                                       Type
  Level          Static   Dynamic
                          Deterministic   Adaptive   Self-adaptive
  Environment    S        E-D             E-A        E-SA
  Population     S        P-D             P-A        P-SA
  Individual     S        I-D             I-A        I-SA
  Component      S        C-D             C-A        C-SA

II. Classification of Adaptation

The action of determining the variables and parameters of an EA to suit the problem has been termed adapting the algorithm to the problem, and in EAs this can be done while the algorithm is searching for a solution to the problem.

We give our classification of adaptation in Table I; it is based on the mechanism of adaptation (adaptation type) and on the level inside the EA at which adaptation occurs (adaptation level). These two classifications are orthogonal and encompass all forms of adaptation within EAs. Angeline's classification [1] is from a different perspective and forms a subset of ours.

The type of adaptation consists of two main categories: static (no change) and dynamic, with the latter divided further into deterministic (D), adaptive (A), and self-adaptive (SA) mechanisms. These types of adaptation are discussed in Section III.

The level of adaptation consists of four categories: environment (E), population (P), individual (I), and component (C). These categories indicate the scope of the changed parameter; the levels of adaptation are discussed in Section IV.

Whether a particular example is discussed in Section III or in Section IV is completely arbitrary. An example of adaptive individual-level adaptation (I-A) could have been discussed in Section III as an example of adaptive dynamic adaptation, or in Section IV as an example of individual-level adaptation.

III. Types of Adaptation

The classification of the type of adaptation is made on the basis of the mechanism of adaptation used in the process; the first distinction is whether the strategy parameters change at all during the run (static versus dynamic adaptation).

A. Static

With static adaptation the strategy parameters keep constant values throughout the run of the EA, so suitable values have to be found before the run by an external agent or mechanism; usually this happens by running numerous tests and trying to find a link between parameter values and EA performance. This method is commonly used for most of the strategy parameters.

De Jong [9] put considerable effort into finding parameter values which were good for a number of numeric test problems using a traditional GA; he determined experimentally recommended values for the probability of single-point crossover and of bit mutation. Grefenstette [17] used a GA as a meta-algorithm to optimise some of the parameter values.
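To illustrate the meta-algorithm idea, the sketch below evolves strategy-parameter settings for an inner EA with a crude outer loop; the helper names (random_setting, mutate_setting, evaluate) and the mutation-only outer loop are assumptions made for brevity and do not reproduce Grefenstette's actual procedure.

import random

def random_setting():
    # Sample one candidate strategy-parameter setting for the inner EA.
    return {"pop_size": random.choice([20, 50, 80, 100]),
            "p_crossover": random.uniform(0.4, 1.0),
            "p_mutation": random.uniform(0.001, 0.1)}

def mutate_setting(setting):
    # Perturb one randomly chosen parameter of a setting.
    s = dict(setting)
    key = random.choice(list(s))
    if key == "pop_size":
        s[key] = random.choice([20, 50, 80, 100])
    else:
        s[key] *= random.uniform(0.8, 1.25)
    return s

def meta_optimise(evaluate, outer_pop=10, outer_gens=20):
    # Outer loop of a meta-GA: evaluate(setting) must run the inner EA
    # with the given strategy parameters and return a score (higher is
    # better), e.g. average best fitness over a few independent trials.
    pop = [random_setting() for _ in range(outer_pop)]
    for _ in range(outer_gens):
        pop.sort(key=evaluate, reverse=True)
        elite = pop[: outer_pop // 2]   # keep the better half
        pop = elite + [mutate_setting(random.choice(elite))
                       for _ in range(outer_pop - len(elite))]
    return max(pop, key=evaluate)

Note that the result is still a static setting: the tuning happens before (or outside) the production run, which is exactly what distinguishes this approach from the dynamic mechanisms discussed next.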
B. Dynamic

Dynamic adaptation happens if there is some mechanism which modifies a strategy parameter during the run without external control. The class of EAs that use dynamic adaptation can be sub-divided further into three classes, where the mechanism of adaptation is the criterion.

B.1 Deterministic

Deterministic dynamic adaptation takes place if the value of a strategy parameter is altered by some deterministic rule; this rule modifies the strategy parameter deterministically, without using any feedback from the EA. Usually a time-varying schedule is used, i.e. the rule is applied when a set number of generations have elapsed since the last time the rule was activated.

This method of adaptation can be used, for instance, to alter the probability of mutation so that it changes with the number of generations:

  p_m = 0.5 - 0.3 * (g / G),

where g is the generation number, running from 1 to G. Here the mutation probability p_m decreases from 0.5 to 0.2 as the number of generations increases to G. Early examples of this approach are the varying mutation rates used by Fogarty [13] and by Hesser & Manner [18] in GAs.
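The schedule above can be dropped into a generational loop as follows; the surrounding GA (bit-string population, selection, crossover) is only stubbed out and the helper names are illustrative assumptions, but the mutation rate itself follows the formula given in the text.

import random

def mutation_rate(g, G, p_start=0.5, p_end=0.2):
    # Deterministic schedule from the text: p_m = 0.5 - 0.3 * g / G,
    # i.e. a rate that falls linearly from p_start to p_end.
    return p_start - (p_start - p_end) * g / G

def mutate(bits, p_m):
    # Flip each bit independently with probability p_m.
    return [b ^ 1 if random.random() < p_m else b for b in bits]

G = 100                                     # total number of generations
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for g in range(1, G + 1):
    p_m = mutation_rate(g, G)               # no feedback from the EA is used
    population = [mutate(ind, p_m) for ind in population]
    # ... selection, crossover, and evaluation would go here in a full GA ...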