Neuroevolution

NEUROEVOLUTION: The Hitchhiker's Guide to Neuroevolution in Erlang. Jeroen Soeters (slide 1)

JEROEN… HOW DO YOU PRONOUNCE THAT? Well, it goes like this: YUH-ROON (slide 2)

ARTIFICIAL NEURAL NETWORKS (slide 3)

BIOLOGICAL NEURAL NETWORKS: dendrites, synapse, soma, axon (slide 4)

A COMPUTER MODEL FOR A NEURON: the biological parts map directly onto the model. The dendrites carry the input signals x1 … xn, the synapses become the weights w1 … wn, the soma is the summation-and-activation unit, and the axon carries the output signal Y. (slide 5)

HOW DOES THE NEURON DETERMINE ITS OUTPUT?

    Y = sign( Σ_{i=1}^{n} xi wi - θ )

(slide 6)

ACTIVATION FUNCTION: [graph of the hard-limit activation, output Y against net input X] (slide 7)
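The deck's examples are in Erlang, so here is a minimal sketch of this neuron model; the module and function names are mine, not the talk's actual code. It computes the weighted sum of the inputs and hard-limits it at the threshold, matching the 0/1 outputs used in the tables later in the deck.

```erlang
%% Minimal neuron sketch (illustrative names, not the talk's codebase).
-module(neuron).
-export([output/3]).

%% Inputs and Weights are equal-length number lists; Theta is the threshold.
%% Computes Y = sign(sum(Xi * Wi) - Theta), hard-limited to 1 or 0.
output(Inputs, Weights, Theta) ->
    WeightedSum = lists:sum([X * W || {X, W} <- lists:zip(Inputs, Weights)]),
    activate(WeightedSum - Theta).

%% Hard limiter: fire (1) when the net input reaches the threshold.
activate(Net) when Net >= 0 -> 1;
activate(_Net) -> 0.
```

For example, neuron:output([1, 0], [0.3, -0.1], 0.2) evaluates to 1, the actual output in the third row of the AND table on slide 12.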
MEET FRANK ROSENBLATT (slide 8)

PERCEPTRON LEARNING RULE:

    e(p) = Yd(p) - Y(p)
    wi(p + 1) = wi(p) + α · xi(p) · e(p)

(slide 9)

PERCEPTRON TRAINING ALGORITHM (slide 10):
1. Set the weights and threshold to random start values in [-0.5, 0.5].
2. Activate the perceptron.
3. Train the weights.
4. Weights converged? If no, go back to step 2; if yes, stop.
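The learning rule translates into a single function. The sketch below (again with assumed names) reuses the neuron module from above, presents one training example, and returns the error together with the updated weights.

```erlang
%% One application of the perceptron learning rule (assumed names):
%% e(p) = Yd(p) - Y(p);  wi(p+1) = wi(p) + Alpha * xi(p) * e(p).
-module(perceptron).
-export([train_example/5]).

%% Returns {Error, UpdatedWeights}; the caller can use Error to decide
%% whether the weights have converged (step 4 of the training algorithm).
train_example(Inputs, Weights, Theta, Desired, Alpha) ->
    Actual = neuron:output(Inputs, Weights, Theta),
    Error = Desired - Actual,
    NewWeights = [W + Alpha * X * Error || {X, W} <- lists:zip(Inputs, Weights)],
    {Error, NewWeights}.
```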
LOGIC GATES (slide 11):

    x1  x2 | x1 AND x2 | x1 OR x2 | x1 XOR x2
     0   0 |     0     |    0     |     0
     0   1 |     0     |    1     |     1
     1   0 |     0     |    1     |     1
     1   1 |     1     |    1     |     0

TRAINING A PERCEPTRON TO PERFORM THE AND OPERATION (slide 12). Threshold: θ = 0.2; learning rate: α = 0.1. Epoch 1:

    inputs      desired   initial weights   actual   error   final weights
    x1  x2        Yd         w1     w2        Y        e        w1     w2
     0   0         0         0.3   -0.1       0        0        0.3   -0.1
     0   1         0         0.3   -0.1       0        0        0.3   -0.1
     1   0         0         0.3   -0.1       1       -1        0.2   -0.1
     1   1         1         0.2   -0.1       0        1        0.3    0.0
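Putting the pieces together, below is a sketch of the full training loop for the AND gate, built on the two assumed modules above. Slide 10 initializes the weights randomly in [-0.5, 0.5]; the fixed start weights from slide 12 are used here so the first epoch reproduces the table (later epochs can drift slightly from a hand calculation because of floating-point rounding).

```erlang
%% Illustrative driver (assumed helper, not from the talk): train until an
%% epoch produces no errors, i.e. the weights have converged.
-module(train).
-export([and_gate/0]).

and_gate() ->
    %% The four (inputs, desired output) pairs from the slide-11 truth table.
    Examples = [{[0, 0], 0}, {[0, 1], 0}, {[1, 0], 0}, {[1, 1], 1}],
    %% w1 = 0.3, w2 = -0.1, Theta = 0.2, Alpha = 0.1, as on slide 12.
    train_epochs(Examples, [0.3, -0.1], 0.2, 0.1).

%% One epoch presents every example once, updating the weights as it goes.
train_epochs(Examples, Weights, Theta, Alpha) ->
    {NewWeights, TotalError} =
        lists:foldl(
            fun({Inputs, Desired}, {Ws, Errs}) ->
                {Error, Ws1} =
                    perceptron:train_example(Inputs, Ws, Theta, Desired, Alpha),
                {Ws1, Errs + abs(Error)}
            end,
            {Weights, 0},
            Examples),
    case TotalError of
        0 -> NewWeights;    % converged: stop and return the final weights
        _ -> train_epochs(Examples, NewWeights, Theta, Alpha)
    end.
```

Calling train:and_gate() runs epoch 1 exactly as in the table (errors 0, 0, -1, 1, ending at w1 = 0.3, w2 = 0.0) and keeps iterating until a full pass over the four examples is error-free.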