
Introduction to the Special Issue: Self-Adaptation

Thomas Bäck [email protected]
NuTech Solutions GmbH, Martin-Schmeisser-Weg 15, D-44227 Dortmund, Germany and Leiden University, LIACS, Niels Bohrweg 1, NL-2333 CA Leiden, The Netherlands

Today, it is widely accepted in the evolutionary computation community that the principle of self-adaptation of strategy parameters, as proposed by Schwefel (1992), is one of the most sophisticated methods to tackle the problem of adjusting the control parameters (e.g., mutation rates or mutation step sizes) of an evolutionary algorithm during the course of the optimization process. Essentially, the distinguishing feature of self-adaptive parameter control mechanisms is that the control parameters (also called strategy parameters) are evolved by the evolutionary algorithm itself, rather than exogenously defined or modified according to some fixed schedule. Following the classifications offered by Angeline (1995) and Hinterding et al. (1997), the existing approaches for strategy parameter control (as opposed to static parameter settings, i.e., using no control at all) in evolutionary algorithms can be classified as follows:

• Dynamic parameter control: Parameter settings are modified according to a deterministic schedule prescribed by the developer of the evolutionary algorithm.

• Adaptive parameter control: New values of control parameters are obtained by a mechanism that monitors and rewards or punishes parameter settings according to whether they have caused an improvement or deterioration of the objective function value.

• Self-adaptive parameter control: Control parameter values are evolved by the evolutionary algorithm by applying evolutionary operators to the control parameters in a similar way as to the solution representations. The competitive process of evolutionary algorithms is then exploited to determine if the changes of the parameters are advantageous concerning their impact on the fitness of individuals.

The purpose of this special issue is to shed more light on the working principle of self-adaptation and to present some possible ways of implementing it in evolutionary algorithms.
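The self-adaptive category above can be made concrete with a small sketch: a (1, λ) evolution strategy in which a single mutation step size is mutated log-normally and inherited together with the solution, in the spirit of Schwefel's scheme. The sphere objective, the function names, and all parameter values below are illustrative assumptions, not taken from any of the papers in this issue.

```python
import math
import random

def sphere(x):
    # Simple benchmark objective: sum of squares, minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def self_adaptive_es(n=10, lam=20, generations=200, seed=1):
    """Illustrative (1, lambda) evolution strategy with log-normal
    self-adaptation of a single mutation step size sigma."""
    rng = random.Random(seed)
    tau = 1.0 / math.sqrt(n)  # learning rate for the step-size mutation
    parent = [rng.uniform(-5, 5) for _ in range(n)]
    sigma = 1.0  # strategy parameter, evolved along with the solution
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            # Mutate the strategy parameter first ...
            child_sigma = sigma * math.exp(tau * rng.gauss(0, 1))
            # ... then use it to mutate the object variables.
            child = [xi + child_sigma * rng.gauss(0, 1) for xi in parent]
            offspring.append((sphere(child), child, child_sigma))
        # Comma selection: the best offspring replaces the parent and
        # carries its step size along -- good step sizes hitchhike with
        # good solutions, which is the self-adaptation principle.
        _, parent, sigma = min(offspring, key=lambda t: t[0])
    return parent, sigma
```

Note that no external feedback rule touches sigma: selection alone decides which step sizes survive, in contrast to adaptive schemes, where a monitoring mechanism adjusts the parameter explicitly.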
At present, very little analytical work on self-adaptation is available (e.g., the analysis of Beyer (1995)), such that most of our knowledge comes from empirical investigations and practical applications, which clearly demonstrate the usefulness of the approach.

In this special issue, four contributions deal with various aspects of the mutation operator and the self-adaptation principle. Strictly speaking, Agapie's analysis concerns an adaptive rather than a self-adaptive mutation rate control mechanism in genetic algorithms, but he develops an innovative approach towards analyzing convergence properties of this kind of parameter control mechanism. Greenwood and Zhu present a modified version of the 1/5-success rule, the original adaptive mutation rate control method for (1+1)-evolution strategies, and a corresponding global convergence proof. The paper by Hansen and Ostermeier presents the working principle of so-called completely derandomized self-adaptation in evolution strategies, which is one of the most recent developments to further enhance the principle in the context of evolution strategies. Deb and Beyer present a study where self-adaptation is introduced into real-parameter genetic algorithms using the simulated binary crossover operator, without mutation. It is very interesting to see their conclusion regarding the similarity in the working principle between self-adaptive genetic algorithms and evolution strategies. Kita's paper also focuses on the relationship between self-adaptation in evolution strategies and in real-coded genetic algorithms, concluding that both mechanisms work quite well in function optimization.

Finally, the paper by Schell and Wegenkittl deals with an analysis of selection methods that goes beyond selection probabilities and takes the properties of the applied sampling into account. The main topic of this paper is implicitly related to self-adaptation, as we know that the interaction between selection and self-adaptation is quite important, and a strong selective pressure is needed in combination with self-adaptive parameter control methods. We are just at the beginning of exploring this relationship in more detail.

Personally, I am convinced that self-adaptive strategy parameter control is a fascinating aspect of evolutionary computation, not only because it works so well, but also because it uses evolutionary learning principles on two levels at the same time: the level of solutions and the level of the search strategy itself. I am eager to see further development of this field and to test and extend its power under a variety of conditions (e.g., dynamic environments).

I would like to thank all authors who submitted papers to this special issue for their contributions, their reviews, and their patience during this process.

©2001 by the Massachusetts Institute of Technology. Evolutionary Computation 9(2): iii–iv

References

Angeline, P. J. (1995). Morphogenic evolutionary computations: Introduction, issues and example. In McDonnell, J. R. et al., editors, Proceedings of the Fourth Annual Conference on Evolutionary Programming, pages 387–402, MIT Press, Cambridge, Massachusetts.

Beyer, H.-G. (1995). Toward a theory of evolution strategies: Self-adaptation. Evolutionary Computation, 3(3):311–348.

Hinterding, R., Michalewicz, Z., and Eiben, A. E. (1997). Adaptation in evolutionary computation: A survey. In Proceedings of the Fourth IEEE Conference on Evolutionary Computation, pages 65–69, IEEE Press, Piscataway, New Jersey.

Schwefel, H.-P. (1992). Imitating evolution: Collective, two-level learning processes. In Witt, U., editor, Explaining Process and Change: Approaches to Evolutionary Economics, pages 49–63, University of Michigan Press, Ann Arbor, Michigan.
