Improved Step Size Adaptation for the MO-CMA-ES
Thomas Voß, Nikolaus Hansen, Christian Igel

To cite this version: Thomas Voß, Nikolaus Hansen, Christian Igel. Improved Step Size Adaptation for the MO-CMA-ES. Genetic And Evolutionary Computation Conference, Jul 2010, Portland, United States. pp. 487-494, 10.1145/1830483.1830573. hal-00503251

HAL Id: hal-00503251, https://hal.archives-ouvertes.fr/hal-00503251. Submitted on 18 Jul 2010. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Thomas Voß, Institut für Neuroinformatik, Ruhr-Universität Bochum, 44780 Bochum, Germany, [email protected]
Nikolaus Hansen, Université de Paris-Sud, Centre de recherche INRIA Saclay – Île-de-France, F-91405 Orsay Cedex, France, [email protected]
Christian Igel, Institut für Neuroinformatik, Ruhr-Universität Bochum, 44780 Bochum, Germany, [email protected]

ABSTRACT

The multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) is an evolutionary algorithm for continuous vector-valued optimization. It combines indicator-based selection based on the contributing hypervolume with the efficient strategy parameter adaptation of the elitist covariance matrix adaptation evolution strategy (CMA-ES). Step sizes (i.e., mutation strengths) are adapted on individual level using an improved implementation of the 1/5-th success rule. In the original MO-CMA-ES, a mutation is regarded as successful if the offspring ranks better than its parent in the elitist, rank-based selection procedure. In contrast, we propose to regard a mutation as successful if the offspring is selected into the next parental population. This criterion is easier to implement and reduces the computational complexity of the MO-CMA-ES, in particular of its steady-state variant. The new step size adaptation improves the performance of the MO-CMA-ES as shown empirically using a large set of benchmark functions. The new update scheme in general leads to larger step sizes and thereby counteracts premature convergence. The experiments comprise the first evaluation of the MO-CMA-ES for problems with more than two objectives.

Categories and Subject Descriptors: G.1.6 [Optimization]: Global Optimization; I.2.8 [Problem Solving, Control Methods, and Search]: Heuristic methods

General Terms: Algorithms, Performance

Keywords: multi-objective optimization, step size adaptation, covariance matrix adaptation, evolution strategy, MO-CMA-ES

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. GECCO'10, July 7-11, 2010, Portland, Oregon, USA. Copyright 2010 ACM 978-1-4503-0072-8/10/07 ...$10.00.

1. INTRODUCTION

The multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES, [14, 16, 19]) is an extension of the CMA-ES [12, 11] for real-valued multi-objective optimization. It combines the mutation and strategy adaptation of the (1+1)-CMA-ES [14, 15, 19] with a multi-objective selection procedure based on non-dominated sorting [6] and the contributing hypervolume [2] acting on a population of individuals.

In the MO-CMA-ES, step sizes (i.e., mutation strengths) are adapted on individual level. The step size update procedure originates in the well-known 1/5-th rule originally presented by [18] and extended by [17]. If the success rate, that is, the fraction of successful mutations, is high, the step size is increased, otherwise it is decreased. In the original MO-CMA-ES, a mutation is regarded as successful if the resulting offspring is better than its parent. In this study, we propose to replace this criterion and to consider a mutation as being successful if the offspring becomes a member of the next parent population. We argue that this notion of success is easier to implement, computationally less demanding, and improves the performance of the MO-CMA-ES.

In the next section, we briefly review the MO-CMA-ES. In Sec. 3, we discuss our new notion of success for the step size adaptation. Then, we empirically evaluate the resulting algorithms. In this evaluation, the MO-CMA-ES is for the first time benchmarked on functions with more than two objectives. As a baseline, we consider a new variant of the NSGA-II, in which the crowding distance is replaced by the contributing hypervolume for sorting individuals at the same level of non-dominance.

2. THE MO-CMA-ES

In the following, we briefly outline the MO-CMA-ES according to [14, 16, 19], see Algorithm 1. For a detailed description and a performance evaluation on bi-objective benchmark functions we refer to [14, 21]. We consider objective functions f : R^n → R^m, x ↦ (f_1(x), ..., f_m(x))^T. In the MO-CMA-ES, a candidate solution a_i^(g) in generation g is a tuple [x_i^(g), p̄_succ,i^(g), σ_i^(g), p_c,i^(g), C_i^(g)], where x_i^(g) ∈ R^n is the current search point, p̄_succ,i^(g) ∈ [0, 1] is the smoothed success probability, σ_i^(g) ∈ R_0^+ is the global step size, p_c,i^(g) ∈ R^n is the cumulative evolution path, and C_i^(g) ∈ R^(n×n) is the covariance matrix of the search distribution. For an individual a encoding search point x, we write f(a) for f(x) with a slight abuse of notation.

We first describe the general ranking procedure and summarize the other parts of the MO-CMA-ES. The MO-CMA-ES relies on the non-dominated sorting selection scheme [6]. As in the SMS-EMOA [2], the hypervolume indicator serves as second-level sorting criterion to rank individuals at the same level of non-dominance. Let A be a population, and let a, a′ be two individuals in A. Let the non-dominated solutions in A be denoted by ndom(A) = {a ∈ A | ∄a′ ∈ A : a′ ≺ a}, where ≺ denotes the Pareto-dominance relation. The elements in ndom(A) are assigned a level of non-dominance of 1. The other ranks of non-dominance are defined recursively by considering the set A without the solutions with lower ranks [6]. Formally, let dom_0(A) = A, dom_l(A) = dom_(l−1)(A) \ ndom_l(A), and ndom_l(A) = ndom(dom_(l−1)(A)) for l ≥ 1. For a ∈ A we define the level of non-dominance rank(a, A) to be i iff a ∈ ndom_i(A).

The hypervolume measure or S-metric was introduced in the domain of evolutionary multi-objective optimization (MOO) in [26]. It is defined as

    S_(f^ref)(A) = Λ( ⋃_(a ∈ A) [f_1(a), f_1^ref] × ··· × [f_m(a), f_m^ref] )    (1)

with f^ref ∈ R^m referring to an appropriately chosen reference point and Λ(·) being the Lebesgue measure. The contributing hypervolume of a point a ∈ A′ = ndom(A) is given by

    Δ_S(a, A′) = S_(f^ref)(A′) − S_(f^ref)(A′ \ {a}).    (2)

Now we define the contribution rank cont(a, A′) of a. This is again done recursively. The element, say a, with the smallest contributing hypervolume is assigned contribution rank 1. The next rank is assigned by considering A′ \ {a}, etc. More precisely, let c_0(A′) = argmin_(a ∈ A′) Δ_S(a, A′) and

    c_i(A′) = c_0( A′ \ ⋃_(j=0)^(i−1) {c_j(A′)} )    (3)

for i > 0.

Algorithm 1: (µ+λ)-MO-CMA-ES

     1   g ← 0, initialize parent population Q^(0)
     2   repeat
     3       for k = 1, ..., λ do
     4a          i_k ← U(1, |ndom(Q^(g))|)
     4b          i_k ← k
     5           a′_k^(g+1) ← a_(i_k)^(g)
     6           x′_k^(g+1) ∼ x_(i_k)^(g) + σ_(i_k)^(g) N(0, C_(i_k)^(g))
     7           Q^(g) ← Q^(g) ∪ {a′_k^(g+1)}
     8       for k = 1, ..., λ do
     9           p̄′_succ,k^(g+1) ← (1 − c_p) p̄′_succ,k^(g+1) + c_p succ_(Q^(g))(a_(i_k)^(g), a′_k^(g+1))
    10           σ′_k^(g+1) ← σ′_k^(g+1) exp( (1/d) (p̄′_succ,k^(g+1) − p_succ^target) / (1 − p_succ^target) )
    11           if p̄′_succ,k^(g+1) < p_thresh then
    12               p′_c,k^(g+1) ← (1 − c_c) p′_c,k^(g+1) + sqrt(c_c (2 − c_c)) (x′_k^(g+1) − x_(i_k)^(g)) / σ_(i_k)^(g)
    13               C′_k^(g+1) ← (1 − c_cov) C′_k^(g+1) + c_cov p′_c,k^(g+1) p′_c,k^(g+1)^T
    14           else
    15               p′_c,k^(g+1) ← (1 − c_c) p′_c,k^(g+1)
    16               C′_k^(g+1) ← (1 − c_cov) C′_k^(g+1) + c_cov ( p′_c,k^(g+1) p′_c,k^(g+1)^T + c_c (2 − c_c) C′_k^(g+1) )
    17           p̄_succ,i_k^(g) ← (1 − c_p) p̄_succ,i_k^(g) + c_p succ_(Q^(g))(a_(i_k)^(g), a′_k^(g+1))
    18           σ_(i_k)^(g) ← σ_(i_k)^(g) exp( (1/d) (p̄_succ,i_k^(g) − p_succ^target) / (1 − p_succ^target) )
    19       g ← g + 1
    20       Q^(g) ← { Q_≺,i^(g−1) | 1 ≤ i ≤ µ }
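The recursive non-dominated sorting used for selection can be sketched in a few lines. This is a minimal illustration assuming minimization of all objectives; the function names (`dominates`, `ndom`, `nondominance_levels`) are ours, not the paper's, and no attempt is made at the efficient bookkeeping of [6].

```python
def dominates(a, b):
    # Pareto dominance (minimization): a is nowhere worse than b
    # and strictly better in at least one objective.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def ndom(A):
    # Non-dominated subset of A: points no other point in A dominates.
    return [a for a in A if not any(dominates(b, a) for b in A)]

def nondominance_levels(A):
    # Recursive peeling: level 1 is ndom(A), level 2 is ndom of the
    # remainder, and so on, mirroring dom_l / ndom_l in the text.
    levels, rest, l = {}, list(A), 1
    while rest:
        front = ndom(rest)
        for a in front:
            levels[a] = l
        rest = [a for a in rest if a not in front]
        l += 1
    return levels
```

With objective vectors stored as tuples, `levels[a]` then plays the role of rank(a, A) in the text.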
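Equations (1)-(3) can likewise be made concrete in the bi-objective case, where the hypervolume reduces to an area that a single sweep over the front computes exactly. The following is a sketch assuming minimization, a mutually non-dominated front, and a reference point worse than every front member in both objectives; all function names are ours.

```python
def hypervolume_2d(front, ref):
    # Eq. (1) for m = 2 under minimization: sweep the mutually
    # non-dominated points in ascending f1 order; each point adds the
    # rectangle between its f2 value and the previous (larger) one.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def contributing_hv(a, front, ref):
    # Eq. (2): hypervolume lost when a is removed from the front.
    rest = [p for p in front if p != a]
    return hypervolume_2d(front, ref) - hypervolume_2d(rest, ref)

def contribution_ranks(front, ref):
    # Eq. (3): repeatedly discard the point with the smallest
    # contributing hypervolume; the removal order gives contribution
    # ranks 1, 2, ... (rank 1 = smallest exclusive contribution).
    ranks, rest, r = {}, list(front), 1
    while rest:
        worst = min(rest, key=lambda p: contributing_hv(p, rest, ref))
        ranks[worst] = r
        rest.remove(worst)
        r += 1
    return ranks
```

Note that the contributions are recomputed on the shrinking set at each step, as Eq. (3) requires, rather than computed once on the full front.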
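The success-rule updates in lines 9-10 (and 17-18) of Algorithm 1 boil down to two scalar operations per individual: exponential smoothing of a binary success indicator, followed by a multiplicative step size change. A minimal sketch; the parameter values below are illustrative placeholders, not the paper's recommended settings. Under the notion of success proposed in this paper, `succeeded` would record whether the offspring entered the next parent population, rather than whether it ranks better than its parent.

```python
import math

P_TARGET = 0.175  # target success probability (placeholder value)
C_P = 0.1         # success-rate smoothing constant (placeholder)
D = 2.0           # step size damping (placeholder)

def update_step_size(p_succ_bar, sigma, succeeded):
    # Line 9: exponentially smooth the 0/1 success indicator.
    p_succ_bar = (1.0 - C_P) * p_succ_bar + C_P * (1.0 if succeeded else 0.0)
    # Line 10: grow sigma when the smoothed success rate exceeds the
    # target, shrink it otherwise.
    sigma *= math.exp((p_succ_bar - P_TARGET) / (D * (1.0 - P_TARGET)))
    return p_succ_bar, sigma
```

Repeated failures drive the smoothed rate toward 0 and let sigma decay; success rates above the target enlarge sigma. A more generous success criterion therefore tends to sustain larger step sizes, which is the effect the abstract credits with counteracting premature convergence.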