Covariance Matrix Adaptation for Multi-objective Optimization
Christian Igel [email protected], Institut für Neuroinformatik, Ruhr-Universität Bochum, 44780 Bochum, Germany
Nikolaus Hansen [email protected], Computational Laboratory (CoLab), Institute of Computational Science, ETH Zurich, 8092 Zurich, Switzerland
Stefan Roth [email protected], Institut für Neuroinformatik, Ruhr-Universität Bochum, 44780 Bochum, Germany
Abstract
The covariance matrix adaptation evolution strategy (CMA-ES) is one of the most powerful evolutionary algorithms for real-valued single-objective optimization. In this paper, we develop a variant of the CMA-ES for multi-objective optimization (MOO). We first introduce a single-objective, elitist CMA-ES using plus-selection and step size control based on a success rule. This algorithm is compared to the standard CMA-ES. The elitist CMA-ES turns out to be slightly faster on unimodal functions, but is more prone to getting stuck in sub-optimal local minima. In the new multi-objective CMA-ES (MO-CMA-ES) a population of individuals that adapt their search strategy as in the elitist CMA-ES is maintained. These are subject to multi-objective selection. The selection is based on non-dominated sorting using either the crowding-distance or the contributing hypervolume as second sorting criterion. Both the elitist single-objective CMA-ES and the MO-CMA-ES inherit important invariance properties, in particular invariance against rotation of the search space, from the original CMA-ES. The benefits of the new MO-CMA-ES in comparison to the well-known NSGA-II and to NSDE, a multi-objective differential evolution algorithm, are experimentally shown.
Keywords Multi-objective optimization, evolution strategy, covariance matrix adaptation.
1 Introduction

The covariance matrix adaptation evolution strategy (CMA-ES) is one of the most powerful evolutionary algorithms for real-valued optimization (Hansen and Ostermeier, 2001; Hansen et al., 2003; Hansen and Kern, 2004; Hansen, 2006b) with many successful applications (for an overview see Hansen, 2005). The main advantages of the CMA-ES lie in its invariance properties, which are achieved by carefully designed variation and selection operators, and in its efficient (self-) adaptation of the mutation distribution. The CMA-ES is invariant against order-preserving transformations of the fitness function value and in particular against rotation and translation of the search space—apart from the initialization. If either the strategy parameters are initialized accordingly or the time needed to adapt the strategy parameters is neglected, any affine transformation of the search space does not affect the performance of the CMA-ES. Rotation of the search space to test invariance and to generate non-separable functions was proposed
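To illustrate the rotation construction mentioned above, the following sketch turns a separable benchmark (here an ellipsoid) into a non-separable test function by composing it with a random orthogonal matrix. All function and variable names are our own for illustration; the paper itself does not specify this code. A rotation-invariant algorithm such as the CMA-ES behaves identically on the original and the rotated function (up to initialization effects).

```python
import numpy as np

def ellipsoid(x, cond=1e6):
    """Separable ellipsoid: sum_i cond^(i/(n-1)) * x_i^2."""
    n = len(x)
    weights = cond ** (np.arange(n) / (n - 1))
    return float(np.sum(weights * x ** 2))

def random_rotation(n, rng):
    """Random orthogonal matrix via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    # adjust column signs so the distribution is uniform over rotations
    return q * np.sign(np.diag(r))

rng = np.random.default_rng(0)
n = 5
R = random_rotation(n, rng)

def rotated_ellipsoid(x):
    # f(Rx): non-separable in the original coordinates, since each
    # coordinate of Rx mixes all components of x.
    return ellipsoid(R @ x)

# the optimum x = 0 is preserved by the rotation
print(ellipsoid(np.zeros(n)), rotated_ellipsoid(np.zeros(n)))  # 0.0 0.0
```

Because `R` is orthogonal, the level sets of `rotated_ellipsoid` are those of `ellipsoid` rotated rigidly; coordinate-wise (separable) search methods typically degrade on the rotated version, whereas rotation-invariant methods do not.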