A Self-Feedback Strategy Differential Evolution with Fitness Landscape
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Completely Derandomized Self-Adaptation in Evolution Strategies
Nikolaus Hansen and Andreas Ostermeier, Technische Universität Berlin, Fachgebiet für Bionik, Sekr. ACK 1, Ackerstr. 71–76, 13355 Berlin, Germany.
Abstract: This paper puts forward two useful methods for self-adaptation of the mutation distribution – the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation – utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions a speed-up factor of several orders of magnitude is usually observed. On moderately mis-scaled functions a speed-up factor of three to ten can be expected.
Keywords: Evolution strategy, self-adaptation, strategy parameter control, step size control, derandomization, derandomized self-adaptation, covariance matrix adaptation, evolution path, cumulation, cumulative path length control.
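A rough feel for the covariance matrix adaptation idea can be conveyed with a heavily simplified sketch: sample offspring from a normal mutation distribution, select the best, and nudge the covariance toward the outer products of the selected steps. This is not the authors' full CMA-ES (there is no cumulation or step-size control here), and the population sizes, learning rate, and test function are illustrative assumptions.

```python
import numpy as np

def sphere(x):
    # Illustrative test function; a badly scaled variant could be used instead.
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n_offspring, n_parents = 5, 20, 5
mean = rng.normal(size=dim)          # current mean of the search distribution
cov = np.eye(dim)                    # covariance of the mutation distribution
sigma = 0.5                          # global step size (kept fixed in this sketch)
lr = 0.3                             # learning rate for the covariance update (assumed value)

for generation in range(200):
    # Sample offspring from the current normal mutation distribution.
    offspring = rng.multivariate_normal(mean, (sigma ** 2) * cov, size=n_offspring)
    offspring = offspring[np.argsort([sphere(x) for x in offspring])]
    selected = offspring[:n_parents]

    # Favor previously selected mutation steps: shift the mean and adapt the
    # covariance toward the outer products of the selected steps.
    steps = (selected - mean) / sigma
    mean = selected.mean(axis=0)
    cov = (1 - lr) * cov + lr * (steps.T @ steps) / n_parents

print(mean, sphere(mean))
```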
Noisy Evolutionary Optimization Algorithms-A Comprehensive Survey
Pratyusha Rakshit, Amit Konar, and Swagatam Das. Swarm and Evolutionary Computation, DOI: http://dx.doi.org/10.1016/j.swevo.2016.09.002 (received 23 May 2016, revised 26 September 2016, accepted 29 September 2016).
Abstract: Noisy optimization is currently receiving increasing popularity for its widespread applications in engineering optimization problems, where the objective functions are often found to be contaminated with noisy sensory measurements. In the absence of knowledge of the noise statistics, discriminating better trial solutions from the rest becomes difficult in the "selection" step of an evolutionary optimization algorithm with noisy objectives. This paper provides a thorough survey of the present state-of-the-art research on noisy evolutionary algorithms for both single- and multi-objective optimization problems.
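One common way such algorithms cope with noise in the selection step is explicit resampling: re-evaluating each trial solution several times and comparing sample means instead of single noisy readings. The snippet below is a generic illustration of that idea, not a method taken from the survey; the noise model, objective, and sample size are assumptions.

```python
import random

def noisy_fitness(x, noise_sd=0.5):
    # True objective (sphere) corrupted by additive Gaussian noise -- an assumed model.
    true_value = sum(xi ** 2 for xi in x)
    return true_value + random.gauss(0.0, noise_sd)

def averaged_fitness(x, samples=10):
    # Resample the noisy objective and average to reduce the variance of the estimate.
    return sum(noisy_fitness(x) for _ in range(samples)) / samples

def select_better(a, b, samples=10):
    # Selection step: prefer the candidate with the better (lower) mean estimate.
    return a if averaged_fitness(a, samples) < averaged_fitness(b, samples) else b

candidate_1 = [0.1, -0.2, 0.3]
candidate_2 = [1.0, 0.8, -1.1]
print(select_better(candidate_1, candidate_2))
```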
A New Biogeography-Based Optimization Approach Based on Shannon-Wiener Diversity Index to PID Tuning in Multivariable System
ABCM Symposium Series in Mechatronics - Vol. 5, Section III – Emerging Technologies and AI Applications, © 2012 ABCM.
Marsil de Athayde Costa e Silva (undergraduate course of Mechatronics Engineering), Camila da Costa Silveira, and Leandro dos Santos Coelho, Industrial and Systems Engineering Graduate Program (PPGEPS), Pontifical Catholic University of Parana (PUCPR), Imaculada Conceição, 1155, Zip code 80215-901, Curitiba, Parana, Brazil.
Abstract: Proportional-integral-derivative (PID) control is the most popular control architecture used in industrial problems. Many techniques have been proposed to tune the gains of the PID controller. Over the last few years, as an alternative to conventional mathematical approaches, modern metaheuristics, such as evolutionary computation and swarm intelligence paradigms, have received much attention from researchers due to their ability to find good solutions in PID tuning. As a modern metaheuristic method, biogeography-based optimization (BBO) is a generalization of biogeography to evolutionary algorithms, inspired by the mathematical model of organism distribution in biological systems. BBO is an evolutionary process that achieves information sharing through biogeography-based migration operators. This paper proposes a modification of BBO using a diversity index, called the Shannon-Wiener index (SW-BBO), to tune the gains of the PID controller in a multivariable system. Results show that the proposed SW-BBO approach is efficient in obtaining high-quality solutions for PID tuning.
Keywords: PID control, biogeography-based optimization, Shannon-Wiener index, multivariable systems.
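The Shannon-Wiener diversity index referred to here is the standard H = -Σ p_i ln p_i over category proportions. How SW-BBO maps a population of candidate PID gains onto categories is not spelled out in this summary, so the coarse binning below is only an illustrative assumption.

```python
import math
from collections import Counter

def shannon_wiener_index(labels):
    # H = -sum(p_i * ln(p_i)) over the proportion p_i of each category.
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Hypothetical population of candidate PID gains (Kp, Ki, Kd), binned coarsely
# so that similar individuals fall into the same category.
population = [(2.1, 0.5, 0.1), (2.0, 0.4, 0.1), (0.9, 1.2, 0.3), (2.2, 0.5, 0.1)]
bins = [tuple(round(gain, 0) for gain in individual) for individual in population]
print(shannon_wiener_index(bins))  # higher values indicate a more diverse population
```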
A Novel Particle Swarm Optimizer with Multi-Stage Transformation and Genetic Operation for VLSI Routing
Genggeng Liu, Zhen Zhuang, Wenzhong Guo, Naixue Xiong, and Guolong Chen (November 2018).
Abstract: As the basic model for very large scale integration (VLSI) routing, the Steiner minimal tree (SMT) can be used in various practical problems, such as wire length optimization, congestion, and time delay estimation. In this paper, a novel particle swarm optimization (PSO) algorithm based on multi-stage transformation and genetic operation is presented to construct two types of SMT, including non-Manhattan SMT and Manhattan SMT. Firstly, in order to handle both types of SMT problems at the same time, an effective edge-vertex encoding strategy is proposed. Secondly, a multi-stage transformation strategy is proposed to both expand the algorithm's search space and ensure effective convergence. We have tested three types, from two to four stages, and various combinations under each type to highlight the best combination.
From the introduction: Constructing the non-Manhattan Steiner tree is an NP-hard problem [17]. On one hand, the above research on the non-Manhattan Steiner tree [9-16] is based on exact algorithms and traditional heuristic algorithms. However, the time complexity of the exact algorithms increases exponentially with the scale of the problem. Most of the traditional heuristic algorithms are based on greedy strategies and easily fall into local minima. Both types of methods, in the construction of the Steiner tree, did not make full use of the geometric properties of the non-Manhattan architecture and cannot guarantee the quality of the Steiner tree. Those methods provided less suitable …
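For readers unfamiliar with PSO itself, the canonical continuous velocity/position update is sketched below. This is the textbook form of the algorithm, not the paper's discrete, edge-vertex-encoded variant for Steiner trees; the objective function and the coefficient values are assumed for illustration.

```python
import random

def fitness(position):
    # Stand-in objective (sphere); the paper's objective would be a Steiner-tree cost.
    return sum(x ** 2 for x in position)

dim, n_particles, iterations = 4, 15, 100
w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients (typical values)

positions = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
velocities = [[0.0] * dim for _ in range(n_particles)]
personal_best = [p[:] for p in positions]
global_best = min(personal_best, key=fitness)

for _ in range(iterations):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (personal_best[i][d] - positions[i][d])
                                + c2 * r2 * (global_best[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
        if fitness(positions[i]) < fitness(personal_best[i]):
            personal_best[i] = positions[i][:]
    global_best = min(personal_best, key=fitness)

print(global_best, fitness(global_best))
```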
Genetic Algorithms for Applied Path Planning: A Thesis Presented in Partial Fulfillment of the Honors Program, Vincent R. Ragusa
Vincent R. Ragusa.
Abstract: Path planning is the computational task of choosing a path through an environment. As a task humans do hundreds of times a day, it may seem that path planning is an easy task, and perhaps naturally suited for a computer to solve. This is not the case, however. There are many ways in which NP-hard problems like path planning can be made easier for computers to solve, but the most significant of these is the use of approximation algorithms. One such approximation algorithm is called a genetic algorithm. Genetic algorithms belong to an area of computer science called evolutionary computation. The techniques used in evolutionary computation algorithms are modeled after the principles of Darwinian evolution by natural selection. Solutions to the problem are literally bred for their problem-solving ability through many generations of selective breeding. The goal of the research presented is to examine the viability of genetic algorithms as a practical solution to the path planning problem. Various modifications to a well-known genetic algorithm (NSGA-II) were implemented and tested experimentally to determine if the modification had an effect on the operational efficiency of the algorithm. Two new forms of crossover were implemented with positive results. The notion of mass extinction driving evolution was tested with inconclusive results. A path correction algorithm called make valid was created which has proven to be extremely powerful. Finally, several additional objective functions were tested, including a path smoothness measure and an obstacle intrusion measure, the latter showing an enormous positive result.
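To make the flavor of GA-based path planning concrete, here is a minimal, generic sketch of one-point crossover on grid paths plus a naive repair step. The grid representation, the repair heuristic, and the fitness are illustrative assumptions; they are not the thesis's NSGA-II variants or its make valid algorithm.

```python
import random

def crossover(path_a, path_b):
    # One-point crossover: splice the tail of one parent onto the head of the other.
    cut = random.randint(1, min(len(path_a), len(path_b)) - 1)
    return path_a[:cut] + path_b[cut:]

def repair(path):
    # Naive repair (illustrative only): bridge any gap between consecutive waypoints
    # with intermediate steps so every move is at most one cell per axis.
    repaired = [path[0]]
    for x, y in path[1:]:
        px, py = repaired[-1]
        while (px, py) != (x, y):
            px += (x > px) - (x < px)
            py += (y > py) - (y < py)
            repaired.append((px, py))
    return repaired

def path_length(path):
    # Fitness: total number of moves (shorter is better).
    return len(path) - 1

parent_a = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
parent_b = [(0, 0), (0, 1), (1, 2), (2, 3), (4, 4)]
child = repair(crossover(parent_a, parent_b))
print(child, path_length(child))
```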
The Roles of Evolutionary Computation, Fitness Landscape, Constructive Methods and Local Searches in the Development of Adaptive Systems for Infrastructure Planning
Mehrdad Amirghasemi and Reza Zamani. International Symposium for Next Generation Infrastructure, October 1-4, 2013, Wollongong, Australia.
Abstract: Modelling and systems simulation for improving city planning and traffic equilibrium involve the development of adaptive systems that can learn and respond to the environment intelligently. By employing sophisticated techniques which strongly support infrastructure planning and design, evolutionary computation can play a key role in the development of such systems. The key to presenting solution strategies for these systems is the fitness landscape, which makes some problems hard and some problems easy to tackle. Moreover, constructive methods and local searches can assist evolutionary searches to improve their performance. In this paper, in the context of infrastructure in general, and city planning and traffic equilibrium in particular, the integration of these four concepts is briefly discussed.
Keywords: Evolutionary computation; Local search; Infrastructure; City planning; Traffic equilibrium.
From the introduction: Infrastructure is a term used to indicate both the organizational and physical structures a society or an enterprise needs to function. In effect, all facilities and services needed for the operation of an economy are gathered under the umbrella term "infrastructure". Each infrastructure includes connected elements which are structurally related, and each element can affect the other elements. It is through this interconnection that these elements collectively provide a basis for the structural development of a society or an enterprise. Viewed at an operative level, infrastructure makes the production of goods and services required by society more straightforward and the distribution of finished products to market easier.
Designing Controllers for Computer Generated Forces with Evolutionary Computing: Experiments in a Simple Synthetic Environment
A. Taylor, Defence Research and Development Canada – Ottawa, Technical Memorandum DRDC Ottawa TM 2012-162, October 2013. © Her Majesty the Queen in Right of Canada as represented by the Minister of National Defence, 2013.
Abstract: Military exercises and experiments are increasingly relying on synthetic environments to reduce costs and enable realistic and dynamic scenarios. However, current simulation technology is limited by the simplicity of the controllers for the entities in the environment. Realistic simulations thus require human operators to play the roles of red and white forces, increasing their cost and complexity. Applied research project 13oc aims to reduce that human workload by improving artificial intelligence in synthetic environments. One approach identified early in the project was to use learning in artificial intelligence (AI). Further work identified evolutionary computing as a method of simplifying the design of AI. Described herein are experiments using evolutionary computing to design controllers in a simple synthetic environment. Controllers with a simple framework are evolved with genetic algorithms and compared to a hand-crafted finite-state-machine controller. Given careful parameter choices, the evolved controller is able to match the performance of the hand-crafted one.
Sustainable Growth and Synchronization in Protocell Models
Roberto Serra (1,2,3) and Marco Villani (1,2,*). Life, article. Received: 19 July 2019; Accepted: 14 August 2019; Published: 21 August 2019.
1 Department of Physics, Informatics and Mathematics, Modena and Reggio Emilia University, Via Campi 213/A, 41125 Modena, Italy. 2 European Centre for Living Technology, Ca' Bottacin, Dorsoduro 3911, Calle Crosera, 30123 Venice, Italy. 3 Institute for Advanced Study, University of Amsterdam, Oude Turfmarkt 147, 1012 GC Amsterdam, The Netherlands. * Corresponding author.
Abstract: The growth of a population of protocells requires that the two key processes of replication of the protogenetic material and reproduction of the whole protocell take place at the same rate. While in many ODE-based models such synchronization spontaneously develops, this does not happen in the important case of quadratic growth terms. Here we show that spontaneous synchronization can be recovered (i) by requiring that the transmembrane diffusion of precursors takes place at a finite rate, or (ii) by introducing a finite lifetime of the molecular complexes. We then consider reaction networks that grow by the addition of newly synthesized chemicals in a binary polymer model, and analyze their behaviors in growing and dividing protocells, thereby confirming the importance of (i) and (ii) for synchronization. We describe some interesting phenomena (like long-term oscillations of duplication times) and show that the presence of food-generated autocatalytic cycles is not sufficient to guarantee synchronization: in the case of cycles with a complex structure, it is often observed that only some subcycles survive and synchronize, while others die out.
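A minimal illustration of what synchronization means in such models, under assumed first-order (linear) replication kinetics rather than the quadratic case the paper analyzes: the interval between successive protocell divisions settles to a constant value. All rate constants, the coupling of container growth to replicator amount, and the halving-at-division rule below are illustrative assumptions, not the authors' model equations.

```python
# Toy protocell model (illustrative assumptions): replicator X drives container
# growth C, the cell divides when C doubles, and both C and X are halved.
eta, alpha, dt = 1.0, 0.5, 1e-3       # replication rate, coupling, Euler step
C, X, C0 = 1.0, 0.2, 1.0
t, last_division, division_intervals = 0.0, 0.0, []

while len(division_intervals) < 10:
    X += eta * X * dt                  # linear (first-order) replication kinetics
    C += alpha * X * dt                # container grows with the amount of replicator
    t += dt
    if C >= 2 * C0:                    # division: halve container and replicators
        C, X = C / 2, X / 2
        division_intervals.append(t - last_division)
        last_division = t

# With linear kinetics the intervals converge (synchronization); with quadratic
# kinetics (X += eta * X * X * dt) they would not, which is the case the paper addresses.
print([round(interval, 3) for interval in division_intervals])
```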
The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities (arXiv:1803.03453v4 [cs.NE], 21 Nov 2019)
Joel Lehman, Jeff Clune, and Dusan Misevic (organizing lead authors), with Christoph Adami, Lee Altenberg, Julie Beaulieu, Peter J. Bentley, Samuel Bernard, Guillaume Beslon, David M. Bryson, Patryk Chrabaszcz, Nick Cheney, Antoine Cully, Stephane Doncieux, Fred C. Dyer, Kai Olav Ellefsen, Robert Feldt, Stephan Fischer, Stephanie Forrest, Antoine Frénoy, Christian Gagné, Leni Le Goff, Laura M. Grabowski, Babak Hodjat, Frank Hutter, Laurent Keller, Carole Knibbe, Peter Krcah, Richard E. Lenski, Hod Lipson, Robert MacCurdy, Carlos Maestre, Risto Miikkulainen, Sara Mitri, David E. Moriarty, Jean-Baptiste Mouret, Anh Nguyen, Charles Ofria, Marc Parizeau, David Parsons, Robert T. Pennock, William F. Punch, Thomas S. Ray, Marc Schoenauer, Eric Schulte, Karl Sims, Kenneth O. Stanley, François Taddei, Danesh Tarapore, Simon Thibault, Westley Weimer, Richard Watson, and Jason Yosinski. Affiliations include Uber AI Labs (San Francisco), the University of Wyoming, the Center for Research and Interdisciplinarity (Paris), Michigan State University, INRIA, and numerous other institutions.
A Note on Evolutionary Algorithms and Its Applications
Shifali Bhargava, Dept. of Mathematics, B.S.A. College, Mathura (U.P.), India. Adults Learning Mathematics: An International Journal, 8(1), 31-45, 2013.
Abstract: This paper introduces evolutionary algorithms and their applications in multi-objective optimization. Elitist and non-elitist multi-objective evolutionary algorithms are discussed with their advantages and disadvantages. We also discuss constrained multi-objective evolutionary algorithms and their applications in various areas.
Keywords: evolutionary algorithms, multi-objective optimization, Pareto optimality, elitist.
From the introduction: The term evolutionary algorithm (EA) stands for a class of stochastic optimization methods that simulate the process of natural evolution. The origins of EAs can be traced back to the late 1950s, and since the 1970s several evolutionary methodologies have been proposed, mainly genetic algorithms, evolutionary programming, and evolution strategies. All of these approaches operate on a set of candidate solutions. Using strong simplifications, this set is subsequently modified by the two basic principles of evolution: selection and variation. Selection represents the competition for resources among living beings: some are better than others and more likely to survive and to reproduce their genetic information. In evolutionary algorithms, natural selection is simulated by a stochastic selection process. Each solution is given a chance to reproduce a certain number of times, depending on its quality. Quality is assessed by evaluating the individuals and assigning them scalar fitness values. The other principle, variation, imitates nature's capability of creating "new" living beings by means of recombination and mutation.
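Since the note centers on multi-objective optimization and Pareto optimality, a small helper makes the key concept concrete: a solution is Pareto optimal if no other solution dominates it, i.e. is at least as good in every objective and strictly better in at least one. Minimization is assumed below, and the example objective vectors are made up for illustration.

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, strictly better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(objective_vectors):
    # Keep only the non-dominated solutions.
    return [a for a in objective_vectors
            if not any(dominates(b, a) for b in objective_vectors if b != a)]

# Hypothetical bi-objective values, e.g. (cost, delay) for candidate designs.
candidates = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (2.5, 3.5)]
print(pareto_front(candidates))  # -> [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```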
An Evolutionary Framework for Automatic and Guided Discovery of Algorithms
Ruchira Sasanka and Konstantinos Krommydas, Intel Corporation.
Abstract: This paper presents Automatic Algorithm Discoverer (AAD), an evolutionary framework for synthesizing programs of high complexity. To guide evolution, prior evolutionary algorithms have depended on fitness (objective) functions, which are challenging to design. To make evolutionary progress, AAD instead employs Problem Guided Evolution (PGE), which requires the introduction of a group of problems together. With PGE, solutions discovered for simpler problems are used to solve more complex problems in the same group. PGE also enables several new evolutionary strategies, and naturally yields to High-Performance Computing (HPC) techniques. We find that PGE and related evolutionary strategies enable AAD to discover algorithms of similar or higher complexity relative to the state-of-the-art.
From the introduction: … evolution without a fitness function, by adding several potentially related problems together into a group. We call this Problem Guided Evolution (PGE), and it is analogous to the way we teach children to solve complex problems. For instance, to help discover an algorithm for finding the area of a polygon, we may ask a student to find a way to calculate the area of a triangle. That is, simpler problems guide solutions to more complex ones. Notably, PGE does not require knowing the exact algorithm nor the exact constituents of a solution, but rather a few potentially related problems. In AAD, PGE allows more complex solutions to be derived through (i) composition (combining simpler ones), and through (ii) mutation of already discovered ones. Grouping related problems for PGE, like designing a fit…
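The triangle-to-polygon example hints at how composition can turn a solution for a simpler problem into a solution for a harder one. The sketch below only illustrates that idea by hand; it is not AAD's evolutionary machinery, and the fan-triangulation approach assumes a convex polygon.

```python
# Solution "discovered" for the simpler problem: area of a triangle from its vertices.
def triangle_area(a, b, c):
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

# Composition: the harder problem (area of a convex polygon) reuses the simpler
# solution by fan-triangulating from the first vertex.
def polygon_area(vertices):
    anchor = vertices[0]
    return sum(triangle_area(anchor, vertices[i], vertices[i + 1])
               for i in range(1, len(vertices) - 1))

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(polygon_area(unit_square))  # -> 1.0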
Why We Do Not Evolve Software? Analysis of Evolutionary Algorithms
Roman V. Yampolskiy, Computer Engineering and Computer Science, University of Louisville.
Abstract: In this paper, we review the state-of-the-art results in evolutionary computation and observe that we don't evolve non-trivial software from scratch and with no human intervention. A number of possible explanations are considered, but we conclude that the computational complexity of the problem prevents it from being solved as currently attempted. A detailed analysis of necessary and available computational resources is provided to support our findings.
Keywords: Darwinian Algorithm, Genetic Algorithm, Genetic Programming, Optimization.
"We point out a curious philosophical implication of the algorithmic perspective: if the origin of life is identified with the transition from trivial to non-trivial information processing – e.g. from something akin to a Turing machine capable of a single (or limited set of) computation(s) to a universal Turing machine capable of constructing any computable object (within a universality class) – then a precise point of transition from non-life to life may actually be undecidable in the logical sense. This would likely have very important philosophical implications, particularly in our interpretation of life as a predictable outcome of physical law." [1]
From the introduction: On April 1, 2016, Dr. Yampolskiy posted the following to his social media accounts: "Google just announced major layoffs of programmers. Future software development and updates will be done mostly via recursive self-improvement by evolving deep neural networks." The joke got a number of "likes" but also, interestingly, a few requests from journalists for interviews on this "developing story".