XIVth International Conference on Parallel Problem Solving from Nature

Conference Booklet Overview: Workshops and Tutorials

Saturday 17 Sept Prestonfield Holyrood Salisbury Duddingston

0930-1030 Registration

1030-1100 Coffee in foyer

1100-1230
  Holyrood: Mike Preuss, Michael G. Epitropakis: Advances on Multi-modal Optimization
  Salisbury: Luigi Malagò: A Bridge between Optimization over Manifolds and Evolutionary Computation
  Duddingston: Julian Miller, Patricia Ryser-Welch: Graph-based and Cartesian Genetic Programming

1230-1400 lunch

1400-1530
  Nadarajen Veerapen, Gabriela Ochoa: Landscape-Aware Heuristic Search (workshop)
  Holyrood: Dimo Brockhoff, Tobias Wagner: Evolutionary Multiobjective Optimization
  Duddingston: JJ Merelo: Implementing evolutionary algorithms in the cloud

1530-1545 coffee in foyer

1545-1715
  Holyrood: Giovanni Squillero, Alberto Tonda: Promoting Diversity in Evolutionary Optimization: Why and How
  Duddingston: Stjepan Picek: Evolutionary Computation in Cryptography

Sunday 18 September Prestonfield Holyrood Salisbury Duddingston

0930-1045
  Holyrood: Darrell Whitley: Gray Box Optimization in Theory and Practice
  Duddingston: Carlos Fonseca, Andreia Guerreiro: The Attainment Function Approach to Performance Evaluation in EMO

0930-1230 (workshops)
  Ahmed Kheiri, Rhyd Lewis, Ender Ozcan: Natural Computing in Scheduling and Timetabling
  Mike Preuss, Michael G. Epitropakis, Xiaodong Li: Advances in Multi-modal Optimization

1045-1100 coffee in foyer

1100-1230
  Holyrood: Enrique Alba: Intelligent Systems for Smart Cities
  Duddingston: Benjamin Doerr: Theory of evolutionary computation

1230-1400 lunch

1400-1530
  Prestonfield: Jacqueline Heinerman, Gusz Eiben, Evert Haasdijk: Evolutionary robotics – a practical guide to experiment with real hardware
  Salisbury: Nelishia Pillay: Evolutionary Algorithms and Hyper-Heuristics
  Duddingston: Per Kristian Lehre, Pietro Oliveto: Runtime Analysis of Evolutionary Algorithms: Basic Introduction
  Neil Urquhart: Intelligent Transportation Workshop

1530-1545 coffee in foyer

1545-1715
  Boris Naujoks, Jörg Stork, Martin Zaefferer, Thomas Bartz-Beielstein: Meta-Model Assisted (Evolutionary) Optimization
  Dirk Sudholt: Theory of Parallel Evolutionary Algorithms

Overview: Monday-Wednesday

Tuesday 20 September Pentland Room Prestonfield Room

0900-0930 coffee in foyer

0930-0945 Announcements

0945-1100 Keynote Katie Bentley

1100-1130 coffee in foyer

1130-1300 Poster session 4

1300-1400 lunch

1400-1530 Poster session 5

1530-1545 coffee in foyer

1545-1715 Poster session 6

1800-1930 Edinburgh open-top coach tour

2000-2250 Conference dinner

Wednesday 21 September Pentland Room Prestonfield Room

0900-0930 coffee in foyer

0930-1045 Keynote Josh Bongard

1045-1115 coffee in foyer

1115-1245 Poster session 7

1245-1330 Conference closing and awards

1330-1430 lunch

Contents

PPSN XIV ...... 4
    Welcome ...... 4
    Conference Committee ...... 5
    Steering Committee ...... 5

Conference Information ...... 6
    Venue ...... 6
    Lunches and Coffee Breaks ...... 7
    Room Plan ...... 7
    Instructions for poster presenters ...... 7
    Wifi ...... 7

Social Events ...... 8
    Opening Reception ...... 8
    Guided Bus Tour ...... 8
    Conference Dinner ...... 8

Tutorials ...... 9
    Saturday 17th September ...... 9
    Sunday 18th September ...... 12

Workshops ...... 16
    Saturday 17th September ...... 16
    Sunday 18th September ...... 17

Keynotes ...... 19
    Susan Stepney ...... 19
    Katie Bentley ...... 19
    Josh Bongard ...... 20

Poster Sessions ...... 21
    Monday 19th September ...... 21
        Session 1 ...... 21
        Session 2 ...... 21
        Session 3 ...... 22
    Tuesday 20th September ...... 23
        Session 4 ...... 23
        Session 6 ...... 24
    Wednesday 21st September ...... 25
        Session 7 ...... 25

Abstracts ...... 27
    Session 1 ...... 27
    Session 2 ...... 29
    Session 3 ...... 33
    Session 4 ...... 36
    Session 5 ...... 39
    Session 6 ...... 43
    Session 7 ...... 46

PPSN XIV

Welcome

Welcome to PPSN 2016, the 14th International Conference on Parallel Problem Solving from Nature (PPSN XIV). This biennial event is one of the most important and highly regarded international conferences in nature-inspired computation, ranging from evolutionary computation and robotics to artificial life and metaheuristics. Continuing a tradition that started in Dortmund in 1990, PPSN XIV was held during September 17-21, 2016, in Edinburgh, Scotland, UK, organised by Edinburgh Napier University.

PPSN XIV received 224 submissions from 50 countries, an increase in both figures from the previous conference, demonstrating the continued and widening interest in the field. After an extensive peer-review process in which most papers were evaluated by at least four reviewers, the Program Committee Chairs examined all of the reports and ranked the papers. Where there was disagreement amongst the reviewers, the Chairs evaluated the papers themselves in order to ensure fair and accurate decisions. The top 93 manuscripts were selected for inclusion in this LNCS volume and for presentation at the conference. This represents an acceptance rate of 41.5%, which ensures that PPSN will continue to be one of the most respected conferences for researchers working in nature-inspired computation around the world.

The meeting begins with four workshops bringing together work in specialized areas, and sixteen free tutorials. These take place over the weekend and allow researchers with similar interests to discuss and explore ideas in an informal and friendly setting. The main conference begins on Monday 19th, and is enhanced by three distinguished keynote speakers representing facets of the field's future and its interfaces with other disciplines: Susan Stepney (University of York, UK), Josh Bongard (University of Vermont, USA) and Katie Bentley (Harvard Medical School, USA). The 93 accepted papers will be presented in seven poster sessions over the course of the conference.

We wish to express our gratitude first to the Program Chairs (Julia Handl, Manuel López-Ibáñez and Gabriela Ochoa), who have worked extremely hard in overseeing this high-quality and exciting program. We would also like to thank the Program Committee members and external reviewers who provided thorough evaluations of all 224 submissions, and the members of the Organizing Committee and the local organizers for their outstanding efforts in preparing for and running the conference. Thanks to all the keynote, workshop and tutorial speakers for their participation, which greatly enhanced the quality of the conference, and particularly to the Session Chairs for their efforts in reading papers beforehand in order to present the introductory talk for each session. Finally, we express our gratitude to the sponsoring institutions, including Edinburgh Napier University for their financial support, Springer for providing sponsorship which enabled us to provide 12 students with accommodation bursaries, and the conference partners for participating in the organization of this event.

September 2016
Emma Hart and Ben Paechter
General Chairs

Conference Committee

Conference Chairs
Emma Hart, Edinburgh Napier University, UK
Ben Paechter, Edinburgh Napier University, UK

Honorary Chair
Hans-Paul Schwefel, Dortmund University of Technology, Germany

Program Chairs
Julia Handl, University of Manchester, UK
Manuel López-Ibáñez, University of Manchester, UK
Gabriela Ochoa, University of Stirling, UK

Tutorial Chairs
Carola Doerr, Université Pierre et Marie Curie, France
Nicolas Bredeche, Université Pierre et Marie Curie, France

Workshop Chairs
Christian Blum, University of the Basque Country, Spain
Christine Zarges, University of Birmingham, UK

Publications Chair
Peter Lewis, Aston University, UK

Local Organisation Chair
Neil Urquhart, Edinburgh Napier University, UK

Local Organising Committee
Kevin Sim, Edinburgh Napier University, UK
Christopher Stone, Edinburgh Napier University, UK
Andreas Steyven, Edinburgh Napier University, UK

Steering Committee

Carlos Cotta Universidad de M´alaga,Spain David W. Corne Heriot-Watt University Edinburgh, UK Kenneth De Jong George Mason University, USA Agoston E. Eiben VU University Amsterdam, The Netherlands Juan Juli´anMerelo Guerv´os Universidad de Granada, Spain Gunter Rudolph Dortmund University of Technology Thomas P. Runarsson University of Iceland, Iceland Robert Schaefer University of Krakow, Poland Marc Schoenauer Inria, France Xin Yao University of Birmingham, UK

Conference Information

Venue

The conference takes place at the John McIntyre Conference Centre, which is located within the Pollock Halls complex in a spectacular setting at the foot of Arthur's Seat, a 350-million-year-old extinct volcano. Pollock Halls is a short walk from the city's vibrant South Side, which has plenty of bars, restaurants, guest houses and hotels. A short bus ride, or a pleasant walk, will take you into the city centre. Lothian Buses provide a number of services which stop within close proximity to the Pollock Halls campus where the conference is located; these include the 2, 3, 5, 7, 8, 14, 29, 30, 31, 33, 37, 47 and 49. The easiest way to find out which bus to take, the bus stop location and how long your journey will take is to use the Traveline Scotland journey planner: by entering where you are travelling from and to, you will be provided with a detailed plan of how to make your journey. The postcode for Pollock Halls is EH16 5AY.

[Campus map: Pollock Halls and the surrounding area, showing the John McIntyre Conference Centre (JMCC), South Hall (the Conference Dinner venue), on-site accommodation, and the entrance from Holyrood Park Road]
John McIntyre Conference Centre
Pollock Halls
18 Holyrood Park Road
Edinburgh

Lunches and Coffee Breaks

All lunches and coffee breaks are included in the conference fee.

Room Plan

[Floor plan: John McIntyre Conference Centre first-floor space, showing the Pentland (East and West), Prestonfield, Holyrood, Salisbury and Duddingston rooms, two boardrooms, the concourse, Centro Bar and open-air terraces]

Instructions for Poster Presenters

All posters will be presented in the Prestonfield room. The poster presentation format is A0 portrait (118.9 cm high and 84.1 cm wide, or 46.8 in x 33.1 in); note that the actual poster board is 120 cm x 90 cm, so do not exceed these dimensions and make sure your poster is in the correct orientation!

Please set up your posters at least 15 minutes before the session begins. Poster presenters are expected to stand next to their posters and be prepared to discuss their work and answer specific questions. Each poster session will start with a short plenary introduction of the presented papers in the Pentland room; the session chair will summarise the main contributions of each paper.

Wifi Access

Free wifi is available throughout the conference venue.

Social Events

Opening Reception

The Opening Reception will take place on Monday 19th September from 5.30pm inside the conference venue (John McIntyre Centre).

Guided Bus Tour

An open-top guided bus tour of Edinburgh is included in your registration fee. The tour will leave at 6pm (sharp!) from the conference, and take approximately 90 minutes. The tour will return directly to the South Hall where the conference dinner takes place.

Conference Dinner

The Conference Dinner takes place in the South Hall, which is on the same campus as the John McIntyre Centre. If you are taking part in the bus tour, you will be dropped directly here. If you are not taking part in the tour, please make your own way to the South Hall. There will be a drinks reception from 7.30pm, and dinner starts at 8pm.

Tutorials

Saturday 17th September: 11-12.30

• Holyrood Advances on Multi-modal optimization

• Salisbury A Bridge between Optimization over Manifolds and Evolutionary Computation

• Duddingston Graph-based and Cartesian Genetic Programming

Advances on Multi-modal Optimization
Mike Preuss, University of Dortmund (Germany), and Michael G. Epitropakis, Lancaster University (UK)
Multimodal optimization is becoming established as a research direction that collects approaches from various domains of operational research and evolutionary computation that strive to deliver multiple very good solutions at once. We discuss several scenarios and list currently employed and potentially available performance measures. Furthermore, many state-of-the-art as well as older methods are compared and put into a rough taxonomy. We also discuss recent relevant competitions and their results, and outline possible future developments in this area.

A Bridge between Optimization over Manifolds and Evolutionary Computation
Luigi Malagò, Shinshu University (Japan)
The aim of this tutorial is to explore the promising connection between the well-consolidated field of optimization over manifolds and evolutionary computation. In mathematics, optimization over manifolds deals with the design and analysis of algorithms for optimization over search spaces which admit a non-Euclidean geometry. One of the simplest examples is the sphere, where the shortest path between two points is given by a curve, not a straight line.

Manifolds may appear in evolutionary computation in at least two contexts. The simplest is the case where an evolutionary algorithm is employed to optimize a fitness function defined over a manifold, such as the sphere, the cone of positive-definite matrices, the set of rotation matrices, and many others. The second is more subtle, and is related to the stochastic relaxation of a fitness function. A common approach in model-based evolutionary computation is to search for the optimum of a function by sampling populations from a sequence of probability distributions. For instance, this is the case for evolution strategies, probabilistic model-building genetic algorithms, estimation of distribution algorithms and similar techniques, both in the continuous and in the discrete domain. A strictly related paradigm which can be used to describe the behavior of model-based search algorithms is that of stochastic relaxation, i.e., the optimization of the expected value of the original fitness function with respect to a probability distribution in a statistical model. From this perspective, a model-based algorithm solves a problem which is strictly related to the optimization of the stochastic relaxation over a statistical model. Notably, statistical models are well-known examples of manifolds, where the Fisher information plays the role of metric tensor.

For this reason, it is of great interest to compare the standard techniques in the field of optimization over manifolds with the mechanisms implemented by model-based algorithms in evolutionary computation. The tutorial will consist of two parts. In the first, a unifying framework for the description of model-based algorithms will be introduced, and some standard well-known algorithms will be presented from the perspective of optimization over manifolds. Particular attention will be devoted to first-order methods based on the Riemannian gradient over a manifold, which in the case of a statistical model is known as the natural gradient. In the second part, we will discuss how evolutionary algorithms can be adapted to solve optimization problems defined over manifolds, which constitutes a novel and promising area of research in evolutionary computation.
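The stochastic relaxation and natural gradient mentioned above can be stated compactly; the following is a sketch in our own notation, not taken from the tutorial. For a fitness f and a parametric statistical model p_theta, model-based search optimizes the expected fitness, and natural-gradient ascent preconditions the ordinary gradient with the inverse Fisher information:

```latex
% Stochastic relaxation: replace f by its expectation under p_\theta
\max_{\theta \in \Theta} \; F(\theta) = \mathbb{E}_{p_\theta}\left[ f(X) \right]

% Natural-gradient ascent: the Fisher information I(\theta) acts as metric tensor
\theta_{t+1} = \theta_t + \eta \, I(\theta_t)^{-1} \, \nabla_\theta F(\theta_t)
```

Here eta is a step size; maximizing F over the model recovers the optimum of f when the model contains distributions concentrated on the optimizer.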

Graph-based and Cartesian Genetic Programming
Julian Miller and Patricia Ryser-Welch, University of York (UK)
Genetic Programming is often associated with a tree representation for encoding expressions and algorithms. However, graphs are also very useful and flexible program representations which can be applied to many domains (e.g. electronic circuits, neural networks, algorithms). Over the years a variety of graph representations have been explored, such as Parallel Distributed Genetic Programming (PDGP), Linear-Graph Genetic Programming, Enzyme Genetic Programming, Graph Structured Program Evolution (GRAPE) and Cartesian Genetic Programming (CGP).

CGP is probably the best-known form of graph-based Genetic Programming. It was developed by Julian Miller in 1999-2000. In its classic form, it uses a very simple integer address-based genetic representation of a program in the form of a directed graph. CGP has been adopted by a large number of researchers in many domains. In a number of studies, CGP has been shown to be comparable in efficiency to other GP techniques. It is also very simple to program. Since its original formulation, the classical form of CGP has undergone a number of developments which have made it more useful, efficient and flexible in various ways. These include the addition of automatically defined functions (modular CGP), self-modification operators (self-modifying CGP), the encoding of artificial neural networks (CGPANNs) and evolving iterative programs (iterative CGP).

Saturday 17th September: 14-15.30

• Holyrood Evolutionary Multiobjective Optimization

• Duddingston Implementing Evolutionary Algorithms in the Cloud

Evolutionary Multiobjective Optimization

Dimo Brockhoff, INRIA Lille (France)
Many optimization problems are multiobjective, i.e., multiple conflicting criteria need to be considered simultaneously. Due to conflicts between the objectives, usually no single optimal solution exists. Instead, there emerges a set of so-called Pareto-optimal solutions, for which no other solution has better function values in all objectives.

In practice, Evolutionary Multiobjective Optimization (EMO) algorithms are widely used for solving multiobjective optimization problems. As stochastic blackbox optimizers, EMO approaches cope with nonlinear, nondifferentiable, or noisy objective functions. By inherently working on sets of solutions, they allow the Pareto-optimal set to be approximated in one algorithm run, as opposed to classical techniques for multicriteria decision making (MCDM), which aim for single solutions. Defining problems in a multiobjective way has two further advantages:

• The set of Pareto-optimal solutions may reveal shared design principles (innovization).

• Singleobjective problems may become easier to solve if auxiliary objectives are added (multiobjectivization).

Within this tutorial, we comprehensively introduce the field of EMO and present selected research results in more detail. More specifically, we explain the basic principles of EMO algorithms in comparison to classical approaches, show a few practical examples motivating the use of EMO, and present a general overview of state-of-the-art algorithms and selected recent research results.
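To make the Pareto-dominance relation underlying EMO concrete, here is a minimal sketch (our own illustration, not tutorial material; function names are ours, and all objectives are assumed to be minimised):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Five solutions in a two-objective space; three are mutually non-dominated.
pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = non_dominated(pts)   # the Pareto-front approximation
```

Here `front` contains (1, 5), (2, 3) and (4, 1): no other point is at least as good in both objectives and strictly better in one.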

Implementing Evolutionary Algorithms in the Cloud
JJ Merelo, University of Granada (Spain)
Creating experiments that can be easily reproduced and converted in a straightforward way into a report involves knowing a series of techniques that are in widespread use in the open source and commercial software communities. This tutorial will introduce these techniques, including an introduction to cloud computing and DevOps for evolutionary algorithm practitioners, with reference to the tools and platforms that can make the development of new algorithms and problem solutions fast and reproducible.

Saturday 17th September: 15:45-17.15

• Holyrood Promoting Diversity in Evolutionary Optimization: Why and How

• Duddingston Evolutionary Computation in Cryptography

Promoting Diversity in Evolutionary Optimization: Why and How
Giovanni Squillero, Politecnico di Torino (Italy), and Alberto Tonda, INRA (France)
Divergence of character is a cornerstone of natural evolution. On the contrary, evolutionary optimization processes are plagued by an endemic lack of diversity: all candidate solutions eventually crowd the very same areas of the search space. Such a lack of speciation was pointed out in the seminal work of Holland in 1975, and is nowadays well known among scholars. It has different effects on different search algorithms, but almost all are quite deleterious. The problem is usually labeled with the oxymoron "premature convergence", that is, the tendency of an algorithm to converge toward a point where it was not supposed to converge in the first place. The scientific literature contains several efficient diversity-preservation methodologies, ranging from general techniques to problem-dependent heuristics. However, the fragmentation of the field and differences in terminology have led to a general dispersion of this important corpus of knowledge across many small, hard-to-track research lines.

Upon completion of this tutorial, attendees will understand the root causes and dangers of premature convergence. They will know the main research lines in the area of diversity promotion, and will be able to choose an effective solution from the literature, or design a new one tailored to their specific needs.

Evolutionary Computation in Cryptography
Stjepan Picek, KU Leuven (Belgium) and University of Zagreb (Croatia)
Evolutionary Computation (EC) has been used with great success on various real-world problems. One domain abundant with numerous difficult problems is cryptology. Cryptology can be divided into cryptography and cryptanalysis; although not always in an obvious way, EC can be applied to problems from both domains. This tutorial will first give a brief introduction to cryptology intended for a general audience. Afterwards, we concentrate on several topics from cryptography that have so far been successfully tackled with EC, and discuss why those topics are suitable for EC. However, care must be taken, since there exist a number of problems that seem to be impossible to solve with EC, and one needs to realize the limitations of such heuristics. We will discuss the choice of appropriate EC techniques (GA, GP, CGP, ES, multi-objective optimization) for various problems and evaluate the importance of that choice. Furthermore, we will discuss the gap between the cryptographic community and the EC community and what that means for the results. In doing so, we place special emphasis on the perspective that cryptography presents a source of benchmark problems for the EC community. This tutorial will also present some live demos of EC in action when dealing with cryptographic problems.

Sunday 18th September: 09:30-10.45

• Holyrood Gray Box Optimization in Theory and Practice

• Duddingston The Attainment Function Approach to Performance Evaluation in Evolutionary Multiobjective Optimization

Gray Box Optimization in Theory and Practice
Darrell Whitley, Colorado State University (USA)
This tutorial will cover Gray Box Complexity and Gray Box Optimization for k-bounded pseudo-Boolean optimization. These problems can also be referred to as Mk Landscapes, and include problems such as MAX-kSAT, spin glass problems and NK Landscapes. Mk Landscape problems are a linear combination of M subfunctions, where each subfunction accepts at most k variables. Under Gray Box optimization, the optimizer is given access to the set of M subfunctions. If the set of subfunctions is k-bounded and separable, the Gray Box optimizer is guaranteed to return the global optimum with one evaluation. If a problem is not deceptive, the Gray Box optimizer also returns the global optimum after one evaluation. This means that simple test problems from ONEMAX to Trap Functions are solved in one evaluation in O(n) time under Gray Box Optimization. If a tree decomposition exists with a fixed bounded tree width, then the problem can be solved using dynamic programming in O(n) time. If the tree decomposition is bounded by lg(n), then the problem can be solved by dynamic programming in O(n^2) time.

Even for those problems that are not trivially solved, Gray Box optimization makes it possible to exactly compute Hamming distance 1 improving moves in constant time. Thus, neither mutation nor enumeration of the Hamming neighborhood is necessary. Under many conditions it is possible to calculate the location of improving moves in a Hamming distance radius r neighborhood, thus selecting improving moves several moves ahead. This can also be done in constant time.

There also exist deterministic forms of recombination that provably return the best possible offspring from a reachable set of offspring. Partition Crossover relies on localized problem decomposition, and is invariant to the order of the bits in the representation. The method identifies partitions of nonlinear interaction between variables. Variables within a partition must be inherited together; however, bits in different partitions can be linearly recombined. Given p partitions, recombination can be done in O(n) time such that crossover returns the best solution out of 2^p offspring. The offspring can also be proven to be locally optimal in the largest hyperplane subspace in which the two parents reside. Thus, Partition Crossover is capable of directly moving from known local optima to new, high-quality local optima in O(n) time. These innovations will fundamentally change both Local Search and Evolutionary Algorithms. Empirical results show that combining smart local search with Partition Crossover results in search algorithms that are capable of finding globally optimal solutions to nonlinear problems with a million variables in less than one minute.
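The Mk Landscape structure described above can be sketched in a few lines (our own illustration under stated assumptions, not the tutorial's code): a problem is a list of M subfunctions, each reading at most k bit positions via a lookup table. Under gray-box access the optimizer sees this list, so the effect of flipping one bit can be computed from only the subfunctions that touch that bit, rather than re-evaluating the whole string.

```python
# An Mk Landscape: f(x) = sum of M subfunctions, each over at most k variables.
# Each subfunction is a pair (positions, table), where table maps the
# sub-bitstring at those positions to a real value.

def evaluate(x, subfunctions):
    """Full (black-box style) evaluation: sum all subfunction values."""
    return sum(table[tuple(x[i] for i in pos)] for pos, table in subfunctions)

def flip_delta(x, bit, subfunctions):
    """Gray-box change in f from flipping x[bit]: only the subfunctions
    containing `bit` are touched, so cost is independent of n."""
    delta = 0.0
    for pos, table in subfunctions:
        if bit in pos:
            before = tuple(x[i] for i in pos)
            after = tuple(1 - x[i] if i == bit else x[i] for i in pos)
            delta += table[after] - table[before]
    return delta

# Toy instance: two subfunctions over four bits (M = 2, k = 2).
subs = [
    ((0, 1), {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}),
    ((1, 2), {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}),
]
x = [0, 0, 0, 0]
```

With a bookkeeping structure mapping each bit to the subfunctions it appears in, `flip_delta` is the constant-time Hamming distance 1 improving-move check the abstract refers to.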

The Attainment Function Approach to Performance Evaluation in Evolutionary Multiobjective Optimization
Carlos M. Fonseca and Andreia P. Guerreiro, University of Coimbra (Portugal)
The development of improved optimization algorithms and their adoption by end users depend on the ability to evaluate their performance on the problem classes of interest. In the absence of theoretical guarantees, performance must be evaluated experimentally while taking into account both the experimental conditions and the nature of the data collected. Evolutionary approaches to multiobjective optimization typically produce discrete Pareto-optimal front approximations in the form of sets of mutually non-dominated points in objective space. Since evolutionary algorithms are stochastic, such non-dominated point sets are random, and vary according to some probability distribution. In contrast to quality indicators, which map non-dominated point sets to real values, and side-step the set nature of the data, the attainment-function approach addresses the non-dominated point set distribution

directly. Distributional aspects such as location, variability, and dependence can be estimated from the raw non-dominated point set data. This tutorial will focus on the attainment function as a tool for the evaluation of the performance of evolutionary multiobjective optimization (EMO) algorithms. In addition to the theoretical foundations of the methodology, computational and visualization issues will be discussed. The application of the methodology will be demonstrated by interactively exploring example data sets with freely available software tools. To conclude, a selection of open questions and directions for further work will be identified.

Sunday 18th September: 11:00-12.30

• Holyrood Intelligent Systems for Smart Cities

• Duddingston Theory of Evolutionary Computation

Intelligent Systems for Smart Cities
Enrique Alba, University of Málaga (Spain)
The concept of Smart Cities can be understood as a holistic approach to improving the level of development and management of a city across a broad range of services by using information and communication technologies. It is common to recognize six axes of work: i) Smart Economy, ii) Smart People, iii) Smart Governance, iv) Smart Mobility, v) Smart Environment, and vi) Smart Living. In this tutorial we first focus on a capital issue: smart mobility. European citizens and economic actors need a transport system which provides them with seamless, high-quality door-to-door mobility. At the same time, the adverse effects of transport on the climate, the environment and human health need to be reduced. We will show many new systems based on the use of bio-inspired techniques to ease road traffic flow in the city, as well as allowing a customized smooth experience for travellers (private and public transport).

This tutorial will then discuss potential applications of intelligent systems for energy (like adaptive lighting in streets), environmental applications (like mobile sensors for air pollution), smart building (intelligent design), and several other applications linked to smart living, tourism, and smart municipal governance.

Theory of Evolutionary Computation
Benjamin Doerr, École Polytechnique (France)
Theoretical research has always accompanied the development and analysis of evolutionary algorithms, both by explaining observed phenomena in a rigorous manner and by creating new ideas. Since the methodology of theory research is very different from experimental or applied research, non-theory researchers occasionally find it hard to understand and profit from theoretical research. Overcoming this gap in our research field is the target of this tutorial. Independent of particular theoretical subdisciplines or methods such as runtime analysis or landscape theory, we aim at making theory accessible to researchers with little prior exposure to theory research. In particular, we describe what theory research in EC is and what it aims at, and showcase some of the key findings of the last 15 years; we discuss the particular strengths and limitations of theory research; and we show how to read, understand, interpret, and profit from theory results.

Sunday 18th September: 14:00-15:30

• Prestonfield Evolutionary robotics – a practical guide to experiment with real hardware
• Salisbury Evolutionary Algorithms and Hyper-Heuristics
• Duddingston Runtime Analysis of Evolutionary Algorithms: Basic Introduction

Evolutionary robotics – a practical guide to experiment with real hardware Jacqueline Heinerman, Gusz Eiben, Evert Haasdijk and Julien Hubert, VU University Amsterdam (Netherlands) Evolutionary robotics aims to evolve the controllers, the morphologies, or both, for real and/or simulated autonomous robots. Most research in evolutionary robotics is partly or completely carried out in simulation. Although simulation has advantages, e.g., it is cheaper and can be faster, it suffers from the notorious reality gap. Recently, affordable and reliable robots have become commercially available, so setting up a population of real robots is within reach for many research groups today. This tutorial focuses on the know-how required to utilise such a population for running evolutionary experiments. To this end we use Thymio II robots with Raspberry Pi extensions (including a camera). The tutorial explains and demonstrates the workflow from beginning to end, going through a case study of a group of Thymio II robots evolving their neural network controllers on-the-fly to learn to collect objects. Besides the methodology and lessons learned, we spend time on how to code.

Evolutionary Algorithms and Hyper-Heuristics

Nelishia Pillay, University of KwaZulu-Natal (South Africa) Hyper-heuristics is a rapidly developing domain which has proven effective at providing generalized solutions to problems both within and across problem domains. Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics, especially generation hyper-heuristics. Evolutionary algorithm hyper-heuristics have been successfully applied to problems in various domains, including packing problems, educational timetabling, vehicle routing, permutation flowshop and financial forecasting, amongst others. The aim of the tutorial is firstly to provide an introduction to evolutionary algorithm hyper-heuristics for researchers interested in working in this domain. An overview of hyper-heuristics will be provided. The tutorial will examine each of the four categories of hyper-heuristics, namely selection constructive, selection perturbative, generation constructive and generation perturbative, showing how evolutionary algorithms can be used for each type. A case study will be presented for each type of hyper-heuristic to give researchers a foundation to start their own research in this area. Challenges in the implementation of evolutionary algorithm hyper-heuristics will be highlighted. An emerging research direction is the use of hyper-heuristics for the automated design of computational intelligence techniques. The tutorial will look at the synergistic relationship between evolutionary algorithms and hyper-heuristics in this area: the use of hyper-heuristics for the automated design of evolutionary algorithms will be examined, as well as the application of evolutionary algorithm hyper-heuristics to the design of computational intelligence techniques. The tutorial will end with a discussion session on future directions in evolutionary algorithms and hyper-heuristics.
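To make the selection perturbative category concrete, here is a minimal illustrative sketch (not taken from the tutorial itself): scores for each low-level move operator are adapted online, the next operator is chosen in proportion to its score, and improving moves are accepted. All names and parameters are assumptions made for illustration.

```python
import random

def selection_hyper_heuristic(init, heuristics, objective, iters=2000, seed=3):
    """Toy selection perturbative hyper-heuristic: adaptive operator scores,
    proportional choice of the next low-level heuristic, accept improvements."""
    rng = random.Random(seed)
    scores = [1.0] * len(heuristics)
    sol, val = init, objective(init)
    for _ in range(iters):
        i = rng.choices(range(len(heuristics)), weights=scores)[0]
        cand = heuristics[i](sol, rng)
        cval = objective(cand)
        if cval < val:                    # accept and reward the chosen heuristic
            sol, val = cand, cval
            scores[i] += 1.0
        else:                             # slowly decay unhelpful heuristics
            scores[i] = max(0.1, scores[i] * 0.99)
    return sol, val

# Toy domain: minimise the number of ones in a bit list, with two move operators.
def flip_one(s, rng):
    j = rng.randrange(len(s))
    return [b ^ (k == j) for k, b in enumerate(s)]

def flip_two(s, rng):
    return flip_one(flip_one(s, rng), rng)

sol, val = selection_hyper_heuristic([1] * 20, [flip_one, flip_two], objective=sum)
```

The same skeleton works for any problem domain: only the low-level heuristics and the objective change, which is the generality the tutorial emphasises.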

Runtime Analysis of Evolutionary Algorithms: Basic Introduction Per Kristian Lehre, University of Nottingham (UK), Pietro S. Oliveto, University of Sheffield (UK) Evolutionary algorithm theory has studied the time complexity of evolutionary algorithms for more than 20 years. This tutorial presents the foundations of this field. We introduce the most important notions and

definitions used in the field and consider different evolutionary algorithms on a number of well-known and important example problems, giving a careful and thorough introduction to important analytical tools and methods, including fitness-level and drift analysis and the method of typical events and runs. By the end of the tutorial the attendees will be able to apply these techniques to derive relevant runtime results for non-trivial evolutionary algorithms. In addition to custom-tailored methods for the analysis of evolutionary algorithms, we also introduce the relevant tools and notions from probability theory in an accessible form. This makes the tutorial appropriate for everyone with an interest in the theory of evolutionary algorithms, without the need for prior knowledge of probability theory or the analysis of randomised algorithms. Variants of this tutorial have been presented at GECCO 2013-2015, attracting well over 50 participants each time. The tutorial will be based on the "Theoretical analysis of stochastic search heuristics" chapter of the forthcoming Springer Handbook of Heuristics.
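As a taste of the fitness-level method the tutorial covers, here is the standard textbook bound (not taken from the tutorial materials) for the (1+1) EA maximising OneMax: partition the search space into levels A_i = {x : OneMax(x) = i}; from level i, flipping exactly one of the n - i zero-bits and nothing else improves the level, so

```latex
s_i \;\ge\; (n-i)\cdot\frac{1}{n}\Bigl(1-\frac{1}{n}\Bigr)^{n-1} \;\ge\; \frac{n-i}{en},
\qquad
\mathbb{E}[T] \;\le\; \sum_{i=0}^{n-1}\frac{1}{s_i}
\;\le\; \sum_{i=0}^{n-1}\frac{en}{n-i}
\;=\; en\sum_{j=1}^{n}\frac{1}{j} \;=\; O(n\log n),
```

where s_i lower-bounds the probability of leaving level i in one generation, and E[T] is the expected number of generations until the optimum is found.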

Sunday 18th September: 15:45-17:15

• Salisbury Meta-Model Assisted (Evolutionary) Optimization
• Duddingston Theory of Parallel Evolutionary Algorithms

Meta-Model Assisted (Evolutionary) Optimization Boris Naujoks, Jörg Stork, Martin Zaefferer, and Thomas Bartz-Beielstein, TH Köln (Germany) Meta-model assisted optimization is a well-recognized research area. When the evaluation of an objective function is expensive, meta-model assisted optimization yields huge improvements in optimization time or cost in a large number of different scenarios. Hence, it is extremely useful for numerous real-world applications. These include, but are not limited to, the optimization of designs like airfoils or ship propulsion systems, chemical processes, biogas plants, composite structures, and electromagnetic circuit design. This tutorial is largely focused on evolutionary optimization assisted by meta-models and has the following aims. Firstly, we will provide a detailed understanding of the established concepts and distinguished methods in meta-model assisted optimization, presenting an overview of current research and open issues in this field. Moreover, we aim for a practical approach: the tutorial should enable the participants to apply up-to-date meta-modelling approaches to actual problems at hand. Afterwards, we will discuss typical problems and their solutions with the participants. Finally, the tutorial offers new perspectives by taking a look into areas where links to meta-modelling concepts have been established more recently, e.g., the application of meta-models in multi-objective optimization or in combinatorial search spaces.
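The core loop of meta-model assisted optimisation, fitting a cheap model to all expensive evaluations so far and letting it pre-screen candidates, can be sketched in a few lines. The 1-nearest-neighbour surrogate and every parameter below are illustrative assumptions, not methods prescribed by the tutorial.

```python
import random

def surrogate_assisted_es(expensive_f, dim=5, budget=200, candidates=20, seed=2):
    """Toy surrogate-assisted (1+1)-style search: a 1-NN surrogate built from
    the archive of true evaluations screens a batch of mutants, and only the
    most promising mutant costs one true evaluation per iteration."""
    rng = random.Random(seed)

    def surrogate(x, archive):
        # Predict f(x) as the value of the nearest archived (point, value) pair.
        return min(archive,
                   key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = expensive_f(x)
    archive = [(x, fx)]
    for _ in range(budget - 1):
        batch = [[xi + rng.gauss(0, 0.5) for xi in x] for _ in range(candidates)]
        cand = min(batch, key=lambda c: surrogate(c, archive))  # cheap screening
        fc = expensive_f(cand)                                  # one true evaluation
        archive.append((cand, fc))
        if fc < fx:
            x, fx = cand, fc
    return fx

best = surrogate_assisted_es(lambda x: sum(xi ** 2 for xi in x))  # sphere test function
```

In practice the 1-NN model would be replaced by Kriging, random forests, or another regression model, but the budget-saving structure, many cheap model queries per expensive evaluation, is the same.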

Theory of Parallel Evolutionary Algorithms Dirk Sudholt, University of Sheffield (UK) Evolutionary algorithms (EAs) have given rise to many parallel variants, fuelled by the rapidly increasing number of CPU cores and the ready availability of computation power through GPUs and cloud computing. A very popular approach is to parallelize evolution in island models, or coarse-grained EAs, by evolving different populations on different processors. These populations run independently most of the time, but they periodically communicate genetic information to coordinate search. Many applications have shown that island models can speed up computation time significantly, and that parallel populations can further increase solution diversity. However, there is little understanding of when and why island models perform well, and what impact fundamental parameters have on performance. This tutorial will give an overview of recent theoretical results on the runtime of parallel evolutionary algorithms. These results give insight into the fundamental working principles of parallel EAs, assess the impact of parameters and design choices on performance, and contribute to the design of more effective parallel EAs.
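The island-model mechanism described above, independent populations with periodic exchange of genetic information, can be sketched compactly. Everything here (ring topology, migration interval, population sizes) is an illustrative toy, not code from the tutorial.

```python
import random

def island_model(fitness, n_bits=32, n_islands=4, pop_size=10,
                 generations=200, migration_interval=25, seed=0):
    """Toy island model: each island runs an independent (mu+1)-style EA on bit
    strings; every migration_interval generations the best individual of each
    island replaces the worst individual of the next island on a ring."""
    rng = random.Random(seed)
    islands = [[[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
               for _ in range(n_islands)]
    for gen in range(1, generations + 1):
        for pop in islands:
            parent = max(pop, key=fitness)
            child = [b ^ (rng.random() < 1.0 / n_bits) for b in parent]  # bit-flip mutation
            worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
            if fitness(child) >= fitness(pop[worst]):
                pop[worst] = child
        if gen % migration_interval == 0:          # periodic migration along the ring
            migrants = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                worst = min(range(pop_size), key=lambda j: fitness(pop[j]))
                pop[worst] = list(migrants[(i - 1) % n_islands])
    return max((ind for pop in islands for ind in pop), key=fitness)

best = island_model(sum)  # OneMax: fitness = number of ones
```

The migration interval is exactly the kind of parameter whose impact the runtime results surveyed in the tutorial make precise: migrate too often and the islands behave like one panmictic population; too rarely and good genetic material spreads slowly.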

Workshops

Saturday 17th September

Landscape-Aware Heuristic Search (Prestonfield) 11:00-17:15

Organisers: Nadarajen Veerapen, Gabriela Ochoa

11:00-12:30 1st Session
• The Global Structure of TSP Fitness Landscapes Gabriela Ochoa, Nadarajen Veerapen
• Mk Landscapes Problem Structure Darrell Whitley, Francisco Chicano, and Brian Goldman

• Fitness Landscape Analysis of a Class of NP-Complete Binary Packing Problems Khulood Alyahya and Jonathan E. Rowe
• A Triple Interpretation of Combinatorial Search Spaces Valentino Santucci and Alfredo Milani

12:30-14:00 Lunch

14:00-15:30 2nd Session
• Fitness Landscape of the Triangle Program William B. Langdon and Mark Harman
• Climbing Fitness Landscapes with the Maximum Expansion Pivoting Rule Sara Tari, Matthieu Basseur, and Adrien Goëffon

• Dynamic Selection of Evolutionary Operators Based on Online Learning and Fitness Landscape Analysis Pietro A. Consoli, Yi Mei, Leandro L. Minku, and Xin Yao
• Fitness Landscape Analysis, Problem Features and Performance Prediction for Multi-objective Optimization Fabio Daolio, Arnaud Liefooghe, Sébastien Verel, Hernan Aguirre, and Kiyoshi Tanaka

15:30-15:45 Coffee break

15:45-17:15 3rd Session
• Towards Algorithm Portfolio Based on Local Optima Network Features Sébastien Verel, Fabio Daolio, Gabriela Ochoa, and Marco Tomassini
• Exploratory Landscape Analysis By Using the R-Package flacco Pascal Kerschke and Heike Trautmann

• DISCUSSION

Sunday 18th September

Natural Computing in Scheduling and Timetabling (Prestonfield): 09:30-12:30

Organisers: Ahmed Kheiri, Rhyd Lewis, Ender Ozcan

09:45-10:45: Invited Talk
• Evolutionary Design of Production Scheduling Heuristics Juergen Branke, University of Warwick

10:45-11:00: Coffee Break

11:00-12:15 Accepted Papers
• 11:00-11:25 The Role of Generation Constructive Hyper-Heuristics in Educational Timetabling Nelishia Pillay and Ender Ozcan
• 11:25-11:50 Multi-objective Scheduling in Unreliable Distributed Computing Environment Jakub Gasior and Franciszek Seredynski
• 11:50-12:15 Heuristic-based Method for Scheduling Surgical Procedures Ahmed Kheiri, Rhyd Lewis, Jonathan Thompson and Paul Harper

12:15-12:30 Open discussion

Advances in Multi-modal Optimization (Salisbury): 09:30-12:30

Organisers: Mike Preuss, Michael G. Epitropakis, Xiaodong Li

Session 1: 9:30–10:45
• 9:30–9:35 Introduction to the Workshop WMMO: Recent developments in Multi-modal Optimization Michael Epitropakis, Mike Preuss, Xiaodong Li
• 9:35–10:00 Towards theoretical analysis in Multi-modal Optimisation Christine Zarges
• 10:00–10:25 Assessing Basin Identification Methods for Locating Multiple Optima Simon Wessing, Günter Rudolph and Mike Preuss

• 10:25–10:45 Multi-modality in Continuous Multi-Objective Optimization Pascal Kerschke and Christian Grimme

10:45–11:00 Coffee break

Session 2: 11:00–12:35
• 11:00–11:25 Quality Diversity: Treating Diversity as the Primary Goal Justin K. Pugh and Kenneth O. Stanley

• 11:25–11:50 Feature-Based Diversity Optimization for Problem Instance Classification Wanru Gao, Samadhi Nallaperuma, and Frank Neumann

• 11:50–12:10 Improved species conserving genetic algorithm for solving combinatorial optimisation problems Kirils Bibiks, Jian-Ping Li, and Yim Fun Hu
• 12:10–12:30 Multi-concept Optimization vs. Multi-modal Optimization Amiram Moshaiov
• 12:30–12:35 Wrap-up Michael Epitropakis, Mike Preuss, Xiaodong Li

Intelligent Transportation Workshop (Holyrood) 14:00-17:15

Organiser: Neil Urquhart
• 14:00-14:30: tba. Jamal Ouennich, University of Edinburgh

• 14:30-15:00: Nowcasting traffic JJ Merelo, University of Granada
• 15:00-15:30: Combining Human Expertise and Machine Learning for Intelligent Transportation Resource Management Aniko Ekart, Aston University

15:30-15:45 Coffee Break
• 15:45-16:15 Using multiple real world objectives in mobile workforce problems Neil Urquhart, Edinburgh Napier University
• 16:15-16:45 Evaluating Fitness Functions within the Artificial Ecosystem Algorithm and their Application to Bicycle Redistribution Manal Adham, University College London

• 16:45-17:15 A Combined Generative and Selective Hyper-heuristic for the Vehicle Routing Problem Kevin Sim, Edinburgh Napier University

Keynotes

Monday 19th September 09:45-11:00

Open-Endedness in Simulation: a definition and its consequences Susan Stepney, University of York, UK Open-ended behaviour in simulated systems is one goal of artificial life, yet the term "open-ended" is rarely defined. Here I discuss a recent definition in terms of models and meta-models, and its consequences for discovering multi-scale open-endedness in computer simulations.

Susan Stepney was originally a theoretical astrophysicist. She moved to industry, where she used formal methods to prove the security of smart card applications. In 2002 she returned to academia as Professor of Computer Science at the University of York, UK, where she leads the Non-Standard Computation research group. Since 2012 she has been Director of the York Centre for Complex Systems Analysis. She is on the board of directors of the International Society for Artificial Life, and is a member of EPSRC's ICT Strategic Advisory Team. Her current research interests include unconventional models of computation, complex systems, artificial chemistries, emergence, open-ended evolution, and bio-inspired computing.

Tuesday 20th September 09:45-11:00

Temporal regulation of collective behavior to generate different structures: learning from the vasculature Katie Bentley, Harvard Medical School, USA Through integrated agent-based computational modeling with in vitro and in vivo experimentation, we have recently uncovered that the speed and specific dynamic properties of collective decision-making among endothelial cells ultimately determine the morphology of the blood vascular network that is generated. Lateral inhibition via Notch-Dll4 signaling of neighboring cells is required to select migratory cells to lead new blood vessel sprouts. Our computational model predicted that the speed of this collective decision-making process, which selects the cells' migratory or inhibited states over time, is affected by changes to the tissue environment, leading to drastic changes in branch spacing and vessel diameter. In experimental studies we have now validated that these predictions are correct, indicating an important new temporal mechanism for the switch to abnormal vessel growth in cancer and potentially many other diseases. Indeed, we have also found first evidence that other pathways within the cells themselves can modulate collective decision deliberation times, leading again to the morphogenesis of different vascular tree structures. This work has important ramifications for the field of vascular biology and for therapeutic interventions targeting abnormal vascular growth in diseases such as cancer and retinopathy. It also represents a potentially interesting new temporal mechanism to exploit in bio-inspired collective systems.

Katie Bentley earned a PhD in Computer Science from UCL in 2006 after completing an MSc in Evolutionary and Adaptive Systems at the University of Sussex in 2002, with a focus on morphogenesis in natural and artificial systems. She was awarded a Cancer Research UK postdoctoral fellowship to develop agent-based models, integrated with in vitro and in vivo experiments, of blood vessel growth at the London Research Institute in 2006. She was then funded by a Leducq Fondation transatlantic network grant to travel between five highly distinguished vascular biology labs based at Yale, UCLA, KU Leuven and CR UK and develop predictive models tested in vivo. Dr. Bentley was appointed Assistant Professor of Pathology, Harvard Medical School, and group leader of the Computational Biology Laboratory at the Center for Vascular Biology Research, Beth Israel Deaconess Medical Center, Boston (2013). She has also recently been appointed Associate Professor at the Rudbeck Laboratories, University of Uppsala, Sweden, to lead a second vascular modeling lab integrated within their vascular biology department. Dr. Bentley is on the Board of Directors of the International Society for Artificial Life.

Wednesday 21st September 09:45-11:00

Parallel problem solving through crowds and machines Josh Bongard, University of Vermont, USA Cloud robotics and the internet of things are enabling ever-larger combinations of people and machines to solve increasingly challenging problems. In this talk I will outline some of our work to recruit large numbers of non-experts to solve various problems in robotics by drawing on the twin human instincts to build and to teach. I will first describe the DotBot project, in which participants build the body plans of simulated robots while search methods improve the controllers for them. I will then explain how features were learned from the human-designed robots to empower a subsequent, fully automated system in which computers optimized robot bodies and brains. I will conclude by introducing the Twitch Plays Robotics project, in which participants teach robots how to ground the symbols of human languages in action and social prediction.

Josh Bongard obtained his Bachelor's degree in Computer Science from McMaster University, his Master's degree from the University of Sussex, United Kingdom, and his PhD before serving as a postdoctoral associate. In 2006 he was named a Microsoft New Faculty Fellow, as well as one of the top 35 innovators under the age of 35 by MIT's Technology Review magazine. In 2011 he received a Presidential Early Career Award for Scientists and Engineers (PECASE) from Barack Obama at the White House. Josh is currently the Veinott Professor of Computer Science at the University of Vermont. His research foci include evolutionary robotics, crowdsourcing, and machine science.

Poster Sessions

Monday 19th September

Session 1: 11:30-13:00

Session Chair: Ken De Jong (best paper session)
• Online Model Selection for Restricted Covariance Matrix Adaptation Youhei Akimoto and Nikolaus Hansen

• Evolution under Strong Noise: A Self-Adaptive Evolution Strategy Can Reach the Lower Performance Bound - the pcCMSA-ES Michael Hellwig and Hans-Georg Beyer
• Cooperative Coevolution of Control for a Real Multirobot System Jorge Gomes, Miguel Duarte, Pedro Mariano and Anders Lyhne Christensen

• Feature Extraction for Surrogate Models in Genetic Programming Martin Pilát and Roman Neruda
• REMEDA: Random Embedding EDA for optimising functions with intrinsic dimension Momodou Sanyang and Ata Kaban
• Emergence of Diversity and its Benefits for Crossover in Genetic Algorithms Duc-Cuong Dang, Per Kristian Lehre, Tobias Friedrich, Timo Kötzing, Martin S. Krejca, Pietro S. Oliveto, Dirk Sudholt and Andrew M. Sutton
• Towards Analyzing Multimodality of Multiobjective Landscapes Pascal Kerschke, Hao Wang, Mike Preuss, Christian Grimme, André Deutz, Heike Trautmann and Michael Emmerich
• Towards automatic testing of reference point-based interactive methods Vesa Ojalehto, Dmitry Podkopaev and Kaisa Miettinen
• Evolving Spatially Aggregated Features From Satellite Imagery for Regional Modeling Sam Kriegman, Marcin Szubert, Josh Bongard and Christian Skalka
• Hypervolume Sharpe Ratio Indicator: Formalization and First Theoretical Results Andreia Guerreiro and Carlos Fonseca

Session 2: 14:00-15:30

Session Chair: Christine Zarges
• Optimising Quantisation Noise in Energy Measurement W. Langdon, J. Petke, B. Bruce
• TADE: Tight Adaptive Differential Evolution Weijie Zheng, Haohuan Fu and Guangwen Yang
• Evolution of active categorical image classification via saccadic eye movement Randal Olson, Jason Moore and Christoph Adami
• Population Diversity Measures based on Variable-order Markov Models for the Traveling Salesman Problem Yuichi Nagata
• iMOACOR: A New Indicator-Based Multi-Objective Ant Colony Optimization Algorithm for Continuous Search Spaces Jesús Guillermo Falcón-Cardona and Carlos A. Coello Coello

• Genotype Regulation by Self-Modifying Instruction-Based Development on Cellular Automata Stefano Nichele, Tom Eivind Glover and Gunnar Tufte
• Artificially Inducing Environmental Changes in Evolutionary Dynamic Optimization Renato Tinos and Shengxiang Yang
• Parameterized Analysis of Multi-objective Evolutionary Algorithms and the Weighted Vertex Cover Problem Mojgan Pourhassan, Feng Shi and Frank Neumann
• RK-EDA: A Novel Random Key Based Estimation of Distribution Algorithm Mayowa Ayodele, John McCall and Olivier Regnier-Coudert
• An Asynchronous and Steady State Update Strategy for the Particle Swarm Optimization Algorithm Carlos Fernandes, JJ Merelo and Agostinho Da Rosa
• Using Scaffolding with Partial Call-Trees to Improve Search Brad Alexander, George Lorenzetti, Constantina Pyromallis and Brad Zacher
• Analyzing Inter-objective Relationships: A Case Study of Software Upgradability Zhilei Ren, He Jiang, Jifeng Xuan and Yan Hu
• On constraint handling in surrogate-assisted evolutionary many-objective optimization Tinkle Chugh, Karthik Sindhya, Kaisa Miettinen, Jussi Hakanen and Yaochu Jin
• Searching for Quality Diversity when Diversity is Unaligned with Quality Justin Pugh, Lisa Soros and Kenneth Stanley

Session 3: 15:45-17:15

Session Chair: Carlos Coello Coello
• Syntactical Similarity Learning by means of Grammatical Evolution Alberto Bartoli, Andrea De Lorenzo, Eric Medvet and Fabiano Tarlao
• Feature-Based Diversity Optimization for Problem Instance Classification Wanru Gao, Samadhi Nallaperuma and Frank Neumann
• How Far Are We From an Optimal, Adaptive DE? Ryoji Tanabe and Alex Fukunaga
• Selection Hyper-heuristics Can Provably be Helpful in Evolutionary Multi-objective Optimization Chao Qian, Ke Tang and Zhi-Hua Zhou
• Augmented Lagrangian Constraint Handling for CMA-ES – Case of a Single Linear Constraint Asma Atamna, Anne Auger and Nikolaus Hansen
• Replicating the Stroop Effect using A Developmental Spatial Neuroevolution System Amit Benbassat and Avishai Henik
• Efficient Sampling when Searching for Robust Solutions Juergen Branke and Xin Fei
• Kin Selection with Twin Genetic Programming W. Langdon
• Multicriteria Building Spatial Design with Mixed Integer Evolutionary Algorithms Koen van der Blom, Sjonnie Boonstra, Hèrm Hofmeyer and Michael T. M. Emmerich
• Comparing Asynchronous and Synchronous Parallelization of the SMS-EMOA Simon Wessing, Günter Rudolph and Dino Menges
• Rapid Phenotypic Landscape Exploration through Hierarchical Spatial Partitioning Davy Smith, Laurissa Tokarchuk and Geraint Wiggins
• Community structure detection for the functional connectivity networks of the brain Rodica Ioana Lung, Mihai Suciu, Regina Meszlényi, Krisztian Buza and Noémi Gaskó

• A Fitness Cloud Model for Adaptive Metaheuristic Selection Methods Christopher Jankee, Sébastien Verel, Bilel Derbel and Cyril Fonlupt

• Data Classification using Carbon-Nanotubes and Evolutionary Algorithms E. Vissol-Gaudin, A. Kotsialos, M. K. Massey, D. A. Zeze, C. Pearson, C. Groves and M. C. Petty

Tuesday 20th September

Session 4 11:30-13:00

Session Chair: Bogdan Filipic
• Hierarchical Knowledge in Self-Improving Grammar Based Genetic Programming Pak-Kan Wong, Man-Leung Wong and Kwong-Sak Leung
• Speciated Evolutionary Algorithm for Dynamic Constrained Optimisation Xiaofen Lu, Ke Tang and Xin Yao
• A Decomposition-based Approach for Solving Large Scale Multi-objective Problems Luis Miguel Antonio and Carlos A. Coello Coello

• Feature Based Algorithm Configuration: a Case Study with Differential Evolution Nacim Belkhir, Johann Dréo, Pierre Savéant and Marc Schoenauer
• Fixed-Parameter Single Objective Search Heuristics for Minimum Vertex Cover Wanru Gao, Tobias Friedrich and Frank Neumann

• What does the Evolution Path Approximate in CMA-ES? Zhenhua Li and Qingfu Zhang
• Improving Efficiency of Bi-level Worst Case Optimization Ke Lu, Juergen Branke and Tapabrata Ray
• A General-Purpose Framework for Genetic Improvement Francesco Marino, Giovanni Squillero and Alberto Tonda

• The Multiple Insertion Pyramid: A Fast Parameter-less Population Scheme Willem den Besten, Dirk Thierens and Peter A.N. Bosman
• An Evolutionary Framework for Replicating Neurophysiological Data with Spiking Neural Networks Emily L. Rounds, Eric O. Scott, Andrew S. Alexander, Kenneth A. De Jong, Douglas A. Nitz and Jeffrey L. Krichmar

• A Cross-Platform Assessment of Energy Consumption in Evolutionary Algorithms. Towards Energy-Aware Bioinspired Algorithms Francisco Chavez de La O, Francisco Fernandez de Vega, Josefa Diaz Alvarez, Juan Angel Garcia, Pedro Angel Castillo Valdivieso, Juan Julian Merelo Guervos and Carlos Cotta Porras

• Coarse-Grained Barrier Trees of Fitness Landscapes Sebastian Herrmann, Gabriela Ochoa and Franz Rothlauf
• Understanding Environmental Influence in an Open-Ended Evolutionary Algorithm Andreas Steyven, Emma Hart and Ben Paechter
• A Hybrid Autoencoder and Density Estimation Model for Anomaly Detection Van Loi Cao, Miguel Nicolau and James McDermott

Session 5: 14:00-15:30

Session Chair: Elizabeth Wanner
• Parallel Hierarchical Evolution of String Library Functions Jacob Soderlund, Darwin Vickers and Alan Blair
• Variable Interaction in Multi-Objective Optimization Problems Ke Li, Mohammad Omidvar, Kalyanmoy Deb and Xin Yao
• Graceful Scaling on Uniform versus Steep-Tailed Noise Tobias Friedrich, Timo Kötzing, Martin S. Krejca and Andrew M. Sutton
• An Active Set Evolution Strategy for Optimization with Known Constraints Dirk Arnold

• An Extension of Algebraic Differential Evolution for the Linear Ordering Problem with Cumulative Costs Marco Baioletti, Alfredo Milani and Valentino Santucci
• Tunnelling Crossover Networks for the Asymmetric TSP Nadarajen Veerapen, Gabriela Ochoa, Renato Tinos and Darrell Whitley
• Exploring Uncertainty and Movement in Categorical Perception using Robots Nathaniel Powell and Josh Bongard
• On the Use of Semantics in Multi-Objective Genetic Programming Edgar Galvan-Lopez, Efrén Mezura-Montes, Ouassim Ait Elhara and Marc Schoenauer
• WS Network Design Problem with Nonlinear Pricing Solved by Hybrid Algorithm Dušan Hrabec, Pavel Popela and Jan Roupec
• A Study of the Performance of Self-* Memetic Algorithms on Heterogeneous Ephemeral Environments Rafael Nogueras and Carlos Cotta
• evoVision3D: A Multiscale Visualization of Evolutionary Histories Justin Kelly and Christian Jacob

• Example Landscapes to Support Analysis of Multimodal Optimisation Thomas Jansen and Christine Zarges
• A Parallel Version of SMS-EMOA for Many-Objective Optimization Problems Raquel Hernández, Carlos A. Coello and Enrique Alba

• Self-adaptation of Mutation Rates in Non-elitist Populations Duc-Cuong Dang and Per Kristian Lehre

Session 6: 15:45-17:15

Session Chair: Boris Naujoks
• Tournament Selection based on Statistical Test in Genetic Programming Thi Huong Chu, Quang Uy Nguyen and Michael O'Neill
• Multi-objective Local Search based on Decomposition Bilel Derbel, Arnaud Liefooghe, Qingfu Zhang, Hernan Aguirre and Kiyoshi Tanaka
• Provably Optimal Self-Adjusting Step Sizes for Multi-Valued Decision Variables Benjamin Doerr, Carola Doerr and Timo Kötzing
• Evolving Cryptographic Pseudorandom Number Generators Stjepan Picek, Dominik Sisejkovic, Vladimir Rozic, Bohan Yang, Nele Mentens and Domagoj Jakobovic
• Efficient Global Optimization with Indefinite Kernels Martin Zaefferer and Thomas Bartz-Beielstein

• Towards Many-objective Optimisation with Hyper-heuristics: Identifying Good Heuristics with Indicators David Walker and Ed Keedwell

• Simple Random Sampling Estimation of the Number of Local Optima Khulood Alyahya and Jonathan Rowe
• Reducing Dimensionality to Improve Search in Semantic Genetic Programming Luiz Otavio V. B. Oliveira, Luis Fernando Miranda, Gisele L. Pappa, Fernando E. B. Otero and Ricardo H. C. Takahashi
• A Novel Efficient Mutation for Evolutionary Design of Combinational Logic Circuits Francisco Manfrini, Heder Bernardino and Helio Barbosa
• Lyapunov design of a simple step-size adaptation strategy based on success Claudia Correa, Elizabeth Wanner and Carlos Fonseca
• Fast and Effective Multi-Objective Optimisation of Submerged Wave Energy Converters Didac Rodriguez Arbones, Nataliia Y. Sergiienko, Boyin Ding and Markus Wagner
• Analysing the performance of migrating birds optimisation approaches for large scale continuous problems Eduardo Lalla-Ruiz, Eduardo Segredo, Stefan Voß, Emma Hart and Ben Paechter
• A Parallel Multi-Objective Memetic Algorithm based on the IGD+ Indicator Edgar Manoatl Lopez and Carlos Coello Coello

• Pareto inspired multi-objective rule fitness for adaptive rule-based machine learning Ryan Urbanowicz, Jason Moore and Randal Olson

Wednesday 21st September

Session 7 11:15-12:45

Session Chair: Tea Tusar
• On the Non-uniform Redundancy in Grammatical Evolution Ann Thorhauer
• An Evolutionary Hyper-heuristic for the Software Project Scheduling Problem Xiuli Wu, Pietro Consoli, Leandro Minku, Gabriela Ochoa and Xin Yao
• Convergence Versus Diversity in Multiobjective Optimization Shouyong Jiang and Shengxiang Yang

• Multi-Objective Selection of Algorithm Portfolios: Experimental Validation Daniel Horn, Karin Schork and Tobias Wagner
• On the Robustness of Evolving Populations Tobias Friedrich, Timo Kötzing and Andrew Sutton
• The Competing Travelling Salespersons Problem under Multi-criteria Erella Matalon-Eisenstadt, Amiram Moshaiov and Gideon Avigad

• Doubly Trained Evolution Control for the Surrogate CMA-ES Zbyněk Pitra, Lukáš Bajer and Martin Holena
• Semantic Forward Propagation for Symbolic Regression Marcin Szubert, Anuradha Kodali, Sangram Ganguly, Kamalika Das and Joshua Bongard

• Evolution of spiking neural networks robust to noise and damage for control of simple animats Borys Wrobel

• Anomaly Detection with the Voronoi Diagram Evolutionary Algorithm Luis Martí, Arsène Fansi, Laurent Navarro and Marc Schoenauer

• Use of Piecewise Linear and Nonlinear Scalarizing Functions in MOEA/D Hisao Ishibuchi, Ken Doi and Yusuke Nojima
• k-Bit Mutation with Self-Adjusting k Outperforms Standard Bit Mutation Benjamin Doerr, Carola Doerr and Jing Yang
• Landscape Features for Computationally Expensive Evaluation Functions: Revisiting the Problem of Noise Eric O. Scott and Kenneth A. De Jong

Abstracts

Session 1

Online Model Selection for Restricted Covariance Matrix Adaptation Youhei Akimoto and Nikolaus Hansen We focus on a variant of covariance matrix adaptation evolution strategy (CMA-ES) with a restricted covariance matrix model, namely VkD-CMA, which is aimed at reducing the internal time complexity and the adaptation time in terms of function evaluations. We tackle the main shortcoming of the VkD-CMA: the model of the restricted covariance matrices needs to be selected beforehand. We propose a novel mechanism to adapt the model online in the VkD-CMA. It eliminates the need for advance model selection and leads to a performance competitive with, or even better than, the algorithm with a nearly optimal but fixed model.

Evolution under Strong Noise: A Self-Adaptive Evolution Strategy Can Reach the Lower Performance Bound - the pcCMSA-ES Michael Hellwig and Hans-Georg Beyer According to a theorem by Astete-Morales, Cauwet, and Teytaud, simple Evolution Strategies (ES) that optimize quadratic functions disturbed by additive Gaussian noise of constant variance can only reach a simple regret log-log convergence slope ≥ 1/2 (lower bound). In this paper a population size controlled ES is presented that is able to perform better than the 1/2 limit. It is shown experimentally that the pcCMSA-ES is able to reach a slope of 1, the theoretical lower bound of all comparison-based direct search algorithms.

Cooperative Coevolution of Control for a Real Multirobot System Jorge Gomes, Miguel Duarte, Pedro Mariano and Anders Lyhne Christensen The potential of cooperative coevolutionary algorithms (CCEAs) as a tool for evolving control for heterogeneous multirobot teams has been shown in several previous works. The vast majority of these works have, however, been confined to simulation-based experiments. In this paper, we present one of the first demonstrations of a real multirobot system, operating outside laboratory conditions, with controllers synthesised by CCEAs. We evolve control for an aquatic multirobot system that has to perform a cooperative predator-prey pursuit task. The evolved controllers are transferred to real hardware, and their performance is assessed in a realistic non-controlled environment. Two approaches are used to evolve control: a standard fitness-driven CCEA, and novelty-driven coevolution. We find that both approaches are able to evolve teams that transfer successfully to the real robots. Novelty-driven coevolution is able to evolve a broad range of successful team behaviours, which we test on the real multirobot system.

Feature Extraction for Surrogate Models in Genetic Programming Martin Pilát and Roman Neruda We discuss the use of surrogate models in the field of genetic programming. We describe a set of features extracted from each tree and use it to train a model of the fitness function. The results indicate that such a model can be used to predict the fitness of new individuals without the need to evaluate them. In a series of experiments, we show how surrogate modeling is able to reduce the number of fitness evaluations needed in genetic programming, and we discuss how the use of surrogate models affects the exploration and convergence of genetic programming algorithms.

REMEDA: Random Embedding EDA for optimising functions with intrinsic dimension Momodou Sanyang and Ata Kabán It has been observed that in many real-world large scale problems only few variables have a major impact on the function value: while there are many inputs to the function, there are just a few degrees of freedom. We refer to such functions as having a low intrinsic dimension. In this paper we devise an Estimation of Distribution Algorithm (EDA) for continuous optimisation that exploits intrinsic dimension without knowing the influential subspace of the input space, or its dimension, by employing the idea of random embedding. While the idea is applicable to any optimiser, EDA is known to be remarkably successful in low dimensional problems but prone to the curse of dimensionality in larger problems because its model building step requires large population sizes. Our method, Random Embedding in Estimation of Distribution Algorithm (REMEDA), remedies this weakness and is able to optimise very large dimensional problems as long as their intrinsic dimension is low.
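The random-embedding idea behind REMEDA can be illustrated with a toy sketch: candidate points are sampled in a low-dimensional space and mapped into the full space through a fixed random matrix, so only the intrinsic degrees of freedom are searched. The helper below is hypothetical and substitutes plain random sampling for REMEDA's EDA model building:

```python
import numpy as np

def random_embedding_optimise(f, D, d, evaluations=2000, seed=0):
    """Minimise f over R^D by searching a low-dimensional space R^d and
    mapping candidates into R^D via a fixed random matrix A.  Plain
    random search stands in for the EDA's sampling step (sketch only,
    not the authors' REMEDA implementation)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(D, d))            # fixed random embedding
    best_val = float("inf")
    for _ in range(evaluations):
        y = rng.normal(size=d)             # sample in the embedded space
        best_val = min(best_val, f(A @ y)) # evaluate in the full space
    return best_val

# Sphere function of intrinsic dimension 2: its value depends only on
# the first 2 of 100 inputs.
f = lambda x: x[0]**2 + x[1]**2
print(random_embedding_optimise(f, D=100, d=2) < 1.0)
```

Because the embedding preserves the two influential directions with probability one, searching the 2-dimensional embedded space suffices even though the ambient space has 100 dimensions.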

Emergence of Diversity and its Benefits for Crossover in Genetic Algorithms Duc-Cuong Dang, Per Kristian Lehre, Tobias Friedrich, Timo Kötzing, Martin S. Krejca, Pietro S. Oliveto, Dirk Sudholt and Andrew M. Sutton Population diversity is essential for avoiding premature convergence in Genetic Algorithms (GAs) and for the effective use of crossover. Yet the dynamics of how diversity emerges in populations are not well understood. We use rigorous runtime analysis to gain insight into population dynamics and GA performance for a standard (µ+1) GA and the jump_k test function. By studying the stochastic process underlying the size of the largest collection of identical genotypes we show that the interplay of crossover followed by mutation may serve as a catalyst leading to a sudden burst of diversity. This leads to improvements of the expected optimisation time of order Ω(n/ log n) compared to mutation-only algorithms like the (1+1) EA.

Towards Analyzing Multimodality of Multiobjective Landscapes Pascal Kerschke, Hao Wang, Mike Preuss, Christian Grimme, André Deutz, Heike Trautmann and Michael Emmerich This paper formally defines multimodality in multiobjective optimization (MO). We introduce a test-bed in which multimodal MO problems with known properties can be constructed, together with numerical characteristics of the resulting landscape. Gradient-based and local-search-based strategies are compared on exemplary problems together with specific performance indicators in the multimodal MO setting. By this means the foundation for Exploratory Landscape Analysis in MO is provided.

Towards automatic testing of reference point-based interactive methods Vesa Ojalehto, Dmitry Podkopaev and Kaisa Miettinen In order to understand the strengths and weaknesses of optimization algorithms, it is important to have access to different types of test problems, well-defined performance indicators and analysis tools. Such tools are widely available for testing evolutionary multiobjective optimization algorithms. To our knowledge, there do not exist tools for analyzing the performance of interactive multiobjective optimization methods based on the reference point approach to communicating preference information. The main barrier to such tools is the involvement of human decision makers in interactive solution processes, which makes the performance of interactive methods dependent on the performance of the humans using them. In this research, we aim towards a testing framework where the human decision maker is replaced with an artificial one. This framework allows interactive methods to be tested repeatedly in a controlled environment.

Evolving Spatially Aggregated Features From Satellite Imagery for Regional Modeling Sam Kriegman, Marcin Szubert, Josh Bongard and Christian Skalka Satellite imagery and remote sensing provide explanatory variables at relatively high resolutions for modeling geospatial phenomena, yet regional summaries are often desirable for analysis and actionable insight. In this paper, we propose a novel method of inducing spatial aggregations as a component of the machine learning process, yielding regional model features whose construction is driven by model prediction performance rather than prior assumptions. Our results demonstrate that Genetic Programming is particularly well suited to this type of feature construction because it can automatically synthesize appropriate aggregations, as well as better incorporate them into regression models compared to other regression models we tested. In our experiments we consider a specific problem instance and real-world dataset relevant to predicting snow properties in high-mountain Asia.

Hypervolume Sharpe Ratio Indicator: Formalization and First Theoretical Results Andreia Guerreiro and Carlos Fonseca Set-indicators have been used in Evolutionary Multiobjective Algorithms (EMOAs) to guide the search process. A new class of set-indicators, the Sharpe Ratio Indicator, combining the selection of solutions with fitness assignment has been recently proposed. This class is based on a formulation of fitness assignment as a Portfolio Selection Problem which sees solutions as assets that have expected return and variance, and fitness as the investment in such assets/solutions. An instance of this class based on the Hypervolume Indicator has shown promising results when integrated in an EMOA called POSEA. The aim of this paper is to formalize the class of Sharpe Ratio Indicators and to demonstrate the properties of a particular instance of this class based on the Hypervolume Indicator, namely monotonicity, scalar independence, and parameter independence with respect to one of its parameters.

Session 2

Optimising Quantisation Noise in Energy Measurement W. Langdon, J. Petke, B. Bruce A simple model of distributed genetic improvement running in parallel across a local area network, in which start/stop commands are sent to measuring devices, calculates a minimum usable software mutation effect based on the analogue-to-digital converter (ADC)'s resolution. With modern low-cost power monitors, high-speed Ethernet LANs' latency and jitter have little effect. Optimal test duration is inversely proportional to minimum mutation effect size, typically well under a second.
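The resolution argument can be made concrete with one line of arithmetic: the smallest power change a single ADC count can register is the quantisation step of the converter. The function and its 250 W, 12-bit example values below are illustrative assumptions, not figures from the paper:

```python
def min_mutation_effect(full_scale_watts, adc_bits):
    """Quantisation step of an ADC: the smallest power change that one
    count of an adc_bits-bit converter with the given full-scale range
    can resolve.  Illustrative arithmetic only; the paper's model also
    factors in test duration and network timing."""
    return full_scale_watts / (2 ** adc_bits)

# hypothetical 250 W power meter with a 12-bit ADC: one count is ~0.061 W
print(min_mutation_effect(250.0, 12))  # → 0.06103515625
```

A software mutation whose energy effect falls below this step cannot be distinguished from quantisation noise, which is what motivates the minimum usable effect size in the abstract.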

TADE: Tight Adaptive Differential Evolution Weijie Zheng, Haohuan Fu and Guangwen Yang Differential Evolution (DE) is a simple and effective evolutionary algorithm for solving optimization problems. Existing DE variants have always maintained or increased the randomness of the differential vector when considering the trade-off between randomness and certainty among the three components of the mutation operator. This paper considers the possibility of achieving a better trade-off, and a more accurate result, by reducing the randomness of the differential vector, and designs a tight adaptive DE variant called TADE. In TADE, the population is divided into a major subpopulation adopting the general "current-to-pbest" strategy and a minor subpopulation utilizing our proposed strategy of sharing the same base vector while reducing the randomness in the differential vector. Based on success-history parameter adaptation, TADE designs a simple information exchange scheme to avoid the homogeneity of parameters. Extensive experiments on the CEC2014 suite show that TADE achieves better or equivalent performance on at least 76.7% of the functions.
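For reference, the "current-to-pbest" strategy that the abstract says the major subpopulation adopts is the standard JADE-style DE mutation; a minimal sketch (not the authors' TADE code) is:

```python
import numpy as np

def current_to_pbest_mutation(pop, fitness, i, F=0.5, p=0.1, rng=None):
    """Standard 'current-to-pbest/1' DE mutation (JADE style):
        v_i = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2),
    where x_pbest is drawn from the best 100p% of the population
    (minimisation assumed).  Sketch only, not the authors' TADE code."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(pop)
    top = np.argsort(fitness)[:max(1, int(p * n))]       # indices of the best
    pbest = pop[rng.choice(top)]
    # two distinct random donors, neither equal to i
    r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
    return pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - pop[r2])
```

TADE's minor subpopulation, as described above, keeps the same base vector but reduces the randomness of the differential vector; that variant is not reproduced here.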

Evolution of active categorical image classification via saccadic eye movement Randal Olson, Jason Moore and Christoph Adami Pattern recognition and classification is a central concern for modern information processing systems. In particular, one key challenge for image and video classification has been that the computational cost of image processing scales linearly with the number of pixels in the image or video. Here we present an intelligent machine (the "active categorical classifier," or ACC) that is inspired by the saccadic movements of the eye, and is capable of classifying images by selectively scanning only a portion of the image. We harness evolutionary computation to optimize the ACC on the MNIST hand-written digit classification task, and provide a proof-of-concept that the ACC works on noisy multi-class data. We further analyze the ACC and demonstrate its ability to classify images after viewing only a fraction of the pixels, and provide insight on future research paths to further improve upon the ACC presented here.

Population Diversity Measures based on Variable-order Markov Models for the Traveling Salesman Problem Yuichi Nagata This paper presents entropy-based population diversity measures that take into account dependencies between the variables in order to maintain genetic diversity in a GA for the traveling salesman problem. The first one is formulated as the entropy rate of a variable-order Markov process, where the probability of occurrence of each vertex is assumed to depend on a variable-length sequence of preceding vertices in the population. Compared to the use of a fixed-order Markov model, the variable-order model has the advantage of avoiding the lack of sufficient statistics for the estimation of the exponentially increasing number of conditional probability components as the order of the Markov process increases. Moreover, we develop a more elaborate population diversity measure by further reducing the problem of the lack of statistics.
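A fixed-order version of such an entropy measure is easy to sketch: count the directed edges appearing in the population's tours and take the entropy of their distribution. The paper's contribution is the variable-order generalisation, which conditions each vertex on preceding contexts of variable length; the snippet below only illustrates the underlying principle:

```python
import math
from collections import Counter

def edge_entropy(tours):
    """First-order, entropy-style diversity of a TSP population: the
    entropy (in bits) of the distribution of directed edges appearing
    in the tours.  A population of identical tours gives the minimum
    value log2(n) for tours of length n; more varied edge usage gives
    higher values.  Illustrative sketch, not the paper's measure."""
    edges = Counter()
    for t in tours:
        for a, b in zip(t, t[1:] + t[:1]):   # each tour is a cycle
            edges[(a, b)] += 1
    total = sum(edges.values())
    return -sum(c / total * math.log2(c / total) for c in edges.values())

print(edge_entropy([[0, 1, 2, 3]] * 3))  # → 2.0
```

With three identical 4-city tours, the four edges (0,1), (1,2), (2,3), (3,0) each occur with probability 1/4, so the entropy is exactly 2 bits; diverging tours raise it.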

iMOACOR: A New Indicator-Based Multi-Objective Ant Colony Optimization Algorithm for Continuous Search Spaces Jesús Guillermo Falcón Cardona and Carlos A. Coello Coello Ant colony optimization (ACO) is a metaheuristic which was originally designed to solve combinatorial optimization problems. In recent years, ACO has been extended to tackle continuous single-objective optimization problems, with ACOR being one of the most remarkable approaches of this sort. However, there exist just a few ACO-based algorithms designed to solve continuous multi-objective optimization problems (MOPs) and none of them has been tested on many-objective problems (i.e., multi-objective problems having four or more objectives). In this paper, we propose a novel multi-objective ant colony optimizer (called iMOACOR) for continuous search spaces, which is based on ACOR and the R2 performance indicator. Our proposed approach is the first specifically designed to tackle many-objective optimization problems. Moreover, we present a comparative study of our proposal with respect to NSGA-III, MOEA/D, MOACOR and SMS-EMOA using standard test problems and performance indicators adopted in the specialized literature. Our preliminary results indicate that iMOACOR is very competitive with respect to state-of-the-art multi-objective evolutionary algorithms and is also able to outperform MOACOR.

Genotype Regulation by Self-Modifying Instruction-Based Development on Cellular Automata Stefano Nichele, Tom Eivind Glover and Gunnar Tufte In this paper a novel method for regulation of gene expression for artificial cellular systems, e.g., cellular automata, is introduced. It is based on an instruction-based representation which allows self-modification of genotype programs, so as to be able to control the expression of different genes at different stages of development, e.g., to adapt to different environmental conditions. Coding and non-coding genome analogies can be drawn in our cellular automata system, where coding genes are in the form of developmental actions whereas non-coding genes are represented as modifying instructions that can change one or more of the other genes. This novel technique was tested successfully on the morphogenesis of cellular structures from a seed, self-replication of structures, growth and replication combined, as well as reuse of an evolved genotype for development or replication of different structures than those initially targeted by evolution.

Artificially Inducing Environmental Changes in Evolutionary Dynamic Optimization Renato Tinós and Shengxiang Yang Biological and artificial evolution can be sped up by environmental changes. From the evolutionary computation perspective, environmental changes during the optimization process generate dynamic optimization problems (DOPs). However, only DOPs caused by intrinsic changes have been investigated in the area of evolutionary dynamic optimization (EDO). This paper is devoted to investigating artificially induced DOPs. A framework to generate artificially induced DOPs from any pseudo-Boolean problem is proposed. We use this framework to induce six different types of changes in a 0-1 knapsack problem and test which one results in higher speed-up. Two strategies based on immigrants, which are used in EDO, are adapted to the artificially induced DOPs investigated here. Some types of changes did not result in better performance, while some types led to higher speed-up. The algorithm with memory-based immigrants presented very good performance.

Parameterized Analysis of Multi-objective Evolutionary Algorithms and the Weighted Vertex Cover Problem Mojgan Pourhassan, Feng Shi and Frank Neumann A rigorous runtime analysis of evolutionary multi-objective optimization for the classical vertex cover problem in the context of parameterized complexity analysis has been presented by Kratsch and Neumann. In this paper, we extend the analysis to the weighted vertex cover problem and provide a fixed-parameter evolutionary algorithm with respect to OPT, the cost of the optimal solution for the problem. Moreover, using a diversity mechanism, we present a multi-objective evolutionary algorithm that finds a 2-approximation in expected polynomial time.

RK-EDA: A Novel Random Key Based Estimation of Distribution Algorithm Mayowa Ayodele, John McCall and Olivier Regnier-Coudert The challenges of solving problems naturally represented as permutations by Estimation of Distribution Algorithms (EDAs) have been a recent focus of interest in the evolutionary computation community. One of the most common representations for permutation based problems is the Random Key (RK). EDAs based on RKs are however considered the poorest as they are not able to properly benefit from problem regularities. This has led to the development of superior algorithms using specially adapted representations. In this paper, we present RK-EDA, a novel RK based EDA which balances the exploration and exploitation of the search space by controlling variance of the probabilistic model with a temperature parameter. Different from the general performance of RK based EDAs, RK-EDA is actually competitive with the best algorithms on common permutation test problems: Travelling Salesman, Quadratic Assignment, Linear Ordering and Flow Shop Scheduling Problems.
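The random-key representation mentioned above decodes a real vector into a permutation by sorting: positions are visited in increasing order of their key values. A minimal sketch of this standard decoding follows (RK-EDA's variance-controlled probabilistic model over the keys is not shown):

```python
import numpy as np

def decode_random_keys(keys):
    """Standard random-key (RK) decoding: the permutation visits the
    positions in increasing order of their key values.  RK-EDA samples
    such key vectors from a Gaussian model whose variance is controlled
    by a temperature parameter (not reproduced here)."""
    return np.argsort(keys)

print(decode_random_keys([0.42, 0.91, 0.05, 0.67]).tolist())  # → [2, 0, 3, 1]
```

Because any real vector decodes to a valid permutation, crossover and probabilistic modelling can operate on unconstrained real values, which is exactly what makes RKs convenient for EDAs, at the cost of the redundancy the abstract alludes to.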

An Asynchronous and Steady State Update Strategy for the Particle Swarm Optimization Algorithm Carlos Fernandes, JJ Merelo and Agostinho Da Rosa This paper proposes an asynchronous and steady state update strategy for the Particle Swarm Optimization (PSO) inspired by the Bak-Sneppen model of co-evolution. The model consists of a set of fitness values (representing species) arranged in a network. By iteratively replacing the least fit species and its neighbors with random values (simulating extinction), the average fitness of the population tends to grow while the system is driven to a critical state. Based on these rules, we implement a PSO in which only the worst particle and its neighbors are updated and evaluated at each time-step. The other particles remain steady during one or more iterations, until they eventually meet the update criterion. The steady state PSO (SS-PSO) was tested on a set of benchmark functions, with three different population structures: lbest ring, and lattices with von Neumann and Moore neighborhoods. The experiments demonstrate that the strategy significantly improves the quality of results and convergence speed with the Moore neighborhood. Further tests show that the major factor of enhancement is the selective pressure on the worst, since replacing the best or a random particle (and neighbors) yields inferior results.
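The Bak-Sneppen-inspired selection rule is simple to sketch: at each step, only the worst particle and its neighbours are chosen for update and re-evaluation. The snippet below shows that selection step alone, for a hypothetical lbest ring; velocity and position updates are omitted:

```python
import numpy as np

def steady_state_step(swarm_fit, neighbours):
    """One steady-state selection step in the spirit of SS-PSO: mark
    only the worst particle and its neighbours for update/evaluation
    this iteration; every other particle stays unchanged.  Sketch of
    the selection rule only (minimisation: worst = highest fitness)."""
    worst = int(np.argmax(swarm_fit))
    return sorted({worst, *neighbours[worst]})

# hypothetical lbest ring of 5 particles: neighbours of i are i-1, i+1 (mod 5)
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(steady_state_step([3.0, 9.5, 1.2, 4.4, 0.7], ring))  # → [0, 1, 2]
```

Only the returned indices would be evaluated in that iteration, which is what makes the scheme both asynchronous and steady-state compared to the classical full-swarm update.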

Using Scaffolding with Partial Call-Trees to Improve Search Brad Alexander, George Lorenzetti, Constantina Pyromallis and Brad Zacher Recursive functions are an attractive target for genetic programming because they can express complex computation compactly. However, the need to simultaneously discover correct recursive and base cases in these functions is a major obstacle in the evolutionary search process. To overcome these obstacles two recent remedies have been proposed. The first is Scaffolding, which permits the recursive case of a function to be evaluated independently of the base case. The second is Call-Tree-Guided Genetic Programming (CTGGP), which uses a partial call tree, supplied by the user, to separately evolve the parameter expressions for recursive calls. Used in isolation, both of these approaches have been shown to offer significant advantages in terms of search performance. In this work we investigate the impact of different combinations of these approaches. We find that, on our benchmarks, CTGGP significantly outperforms Scaffolding and that a combination of CTGGP and Scaffolding appears to produce further improvements in worst-case performance.

Analyzing Inter-objective Relationships: A Case Study of Software Upgradability Zhilei Ren, He Jiang, Jifeng Xuan and Yan Hu In the process of solving real-world multi-objective problems, many existing studies only consider aggregate formulations of the problem, leaving the relationships between different objectives less visited. In this study, taking the software upgradability problem as a case study, we intend to gain insights into the inter-objective relationships of multi-objective problems. First, we obtain the Pareto schemes by uniformly sampling a set of solutions within the Pareto front. Second, we analyze the characteristics of the Pareto schemes, which reveal the relationships between different objectives. Third, to estimate the inter-objective relationships for new upgrade requests, we build a predictive model with a set of problem-specific features. Finally, we propose a reference-based indicator to assess the risk of applying single-objective algorithms to solve the multi-objective software upgradability problem. Extensive experimental results demonstrate that the predictive models built with problem-specific features are able to properly predict both the algorithm-independent inter-objective relationships and the algorithm-performance-specific indicator. These findings would be useful for deeper problem understanding.

On constraint handling in surrogate-assisted evolutionary many-objective optimization Tinkle Chugh, Karthik Sindhya, Kaisa Miettinen, Jussi Hakanen and Yaochu Jin Surrogate-assisted evolutionary multiobjective optimization algorithms are often used to solve computationally expensive problems. But their efficacy in handling constrained optimization problems having more than three objectives has not been widely studied. In particular, the issue of how feasible and infeasible solutions are handled in generating a data set for training a surrogate has not received much attention. In this paper, we use a recently proposed Kriging-assisted evolutionary algorithm for many-objective optimization and investigate the effect of infeasible solutions on the performance of the surrogates. We consider different ways of handling feasible and infeasible solutions to generate the training data set for the surrogate and examine them on different benchmark problems. Results of the comparison with a reference vector guided evolutionary algorithm show that it is vital for the success of the surrogate to properly deal with infeasible solutions.

Searching for Quality Diversity when Diversity is Unaligned with Quality Justin Pugh, Lisa Soros and Kenneth Stanley Inspired by natural evolution’s affinity for discovering a wide variety of successful organisms, a new evolution- ary search paradigm has emerged wherein the goal is not to find the single best solution but rather to collect a diversity of unique phenotypes where each variant is as good as it can be. These quality diversity (QD) algorithms therefore must explore multiple promising niches simultaneously. A QD algorithm’s diversity component, formalized by specifying a behavior characterization (BC), not only generates diversity but also promotes quality by helping to overcome deception in the fitness landscape. However, some BCs (particularly those that are unaligned with the notion of quality) do not adequately mitigate deception, rendering QD algorithms unable to discover the best-performing solutions on difficult problems. This paper introduces a solution that enables QD algorithms to pursue arbitrary notions of diversity without compromising their ability to solve hard problems: driving search with multiple BCs simultaneously.

Session 3

Syntactical Similarity Learning by means of Grammatical Evolution Alberto Bartoli, Andrea De Lorenzo, Eric Medvet and Fabiano Tarlao Several research efforts have shown that a similarity function synthesized from examples may capture an application-specific similarity criterion in a way that fits the application needs more effectively than a generic distance definition. In this work, we propose a similarity learning algorithm tailored to problems of syntax-based entity extraction from unstructured text streams. The algorithm takes as input pairs of strings along with an indication of whether or not they adhere to the same syntactic pattern. Our approach is based on Grammatical Evolution and systematically explores a similarity definition space including all functions that may be expressed with a specialized, simple language that we have defined for this purpose. We assessed our proposal on patterns representative of practical applications. The results suggest that the proposed approach is indeed feasible and that the learned similarity function is more effective than the Levenshtein distance and the Jaccard similarity index.

Feature-Based Diversity Optimization for Problem Instance Classification Wanru Gao, Samadhi Nallaperuma and Frank Neumann Understanding the behaviour of heuristic search methods is a challenge. This holds even for simple local search methods such as 2-OPT for the Traveling Salesperson Problem. In this paper, we present a general framework that is able to construct a diverse set of instances that are hard or easy for a given search heuristic. Such a diverse set is obtained by using an evolutionary algorithm for constructing hard or easy instances that are diverse with respect to different features of the underlying problem. Examining the constructed instance sets, we show that many combinations of two or three features give a good classification of the TSP instances in terms of whether they are hard for 2-OPT to solve.

How Far Are We From an Optimal, Adaptive DE? Ryoji Tanabe and Alex Fukunaga We consider how an (almost) optimal parameter adaptation process for an adaptive DE might behave, and compare the behavior and performance of this approximately optimal process to that of existing, adaptive mechanisms for DE. We first define an optimal parameter adaptation process, and propose GAO, a simulation process which can be used in order to greedily approximate the behavior of such an optimal process. We propose GAODE, which applies the proposed GAO framework to DE and simulates an approximately optimal parameter adaptation process for a specific adaptive DE framework. We compare the behavior of GAODE to typical adaptive DEs on six benchmark functions and the BBOB benchmarks, and show that GAO can be used to (1) explore how much room for improvement there is in the performance of the adaptive DEs, and (2) obtain hints for developing future, effective parameter adaptation methods for adaptive DEs.

Selection Hyper-heuristics Can Provably be Helpful in Evolutionary Multi-objective Optimization Chao Qian, Ke Tang and Zhi-Hua Zhou Selection hyper-heuristics are automated methodologies for selecting existing low-level heuristics to solve hard computational problems. They have proven useful for evolutionary algorithms when solving both single- and multi-objective real-world optimization problems. Previous work mainly focuses on empirical study, while theoretical study, particularly in multi-objective optimization, is largely insufficient. In this paper, we use the three main components of multi-objective evolutionary algorithms (selection mechanisms, mutation operators, acceptance strategies) as low-level heuristics, respectively, and prove that using heuristic selection (i.e., mixing low-level heuristics) can be exponentially faster than using only one low-level heuristic. Our result provides theoretical support for multi-objective selection hyper-heuristics, and might be helpful for designing efficient heuristic selection methods in practice.

Augmented Lagrangian Constraint Handling for CMA-ES: Case of a Single Linear Constraint Asma Atamna, Anne Auger and Nikolaus Hansen We consider the problem of minimizing a function f subject to a single inequality constraint g(x) ≤ 0, in a black-box scenario. We present a covariance matrix adaptation evolution strategy using an adaptive augmented Lagrangian method to handle the constraint. We show that our algorithm is an instance of a general framework that allows one to build an adaptive constraint handling algorithm from a general randomized adaptive algorithm for unconstrained optimization. We assess the performance of our algorithm on a set of linearly constrained functions, including convex quadratic and ill-conditioned functions, and observe linear convergence.
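For a single inequality constraint g(x) ≤ 0, a common form of the augmented Lagrangian is f(x) + γg(x) + (ω/2)g(x)² when γ + ωg(x) ≥ 0, and f(x) − γ²/(2ω) otherwise. This sketches the general technique the abstract names; the paper's exact formulation and its adaptive updates of γ and ω may differ:

```python
def augmented_lagrangian(f, g, gamma, omega):
    """Build an augmented-Lagrangian fitness h for minimising f subject
    to a single inequality constraint g(x) <= 0, with multiplier gamma
    and penalty factor omega.  Textbook form only; in the paper both
    parameters are adapted online alongside CMA-ES."""
    def h(x):
        gx = g(x)
        if gamma + omega * gx >= 0:          # constraint active or violated
            return f(x) + gamma * gx + 0.5 * omega * gx * gx
        return f(x) - gamma * gamma / (2.0 * omega)  # deep inside feasible region
    return h

# minimise x^2 subject to x <= 1 (i.e. g(x) = x - 1 <= 0), gamma = omega = 1
h = augmented_lagrangian(lambda x: x * x, lambda x: x - 1.0, gamma=1.0, omega=1.0)
print(h(2.0), h(-5.0))  # infeasible point penalised; interior point barely touched
```

The unconstrained optimiser (here, CMA-ES in the paper) then minimises h instead of f, so constraint handling reduces to adapting γ and ω.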

Replicating the Stroop Effect Using a Developmental Spatial Neuroevolution System Amit Benbassat and Avishai Henik We present a novel approach to the study of cognitive phenomena by using evolutionary computation. To this end we use a spatial, developmental, neuroevolution system. We use our system to evolve ANNs to perform simple abstractions of the cognitive tasks of color perception and color reading. We define these tasks to explore the nature of the Stroop effect. We show that we can evolve networks to perform a variety of cognitive tasks, and that evolved networks exhibit complex interference behavior when dealing with multiple tasks and incongruent data. We also show that this interference behavior can be manipulated by changing the learning parameters, a method that we successfully use to create a Stroop-like interference pattern.

Efficient Sampling when Searching for Robust Solutions Juergen Branke and Xin Fei

Kin Selection with Twin Genetic Programming W. Langdon

Multicriteria Building Spatial Design with Mixed Integer Evolutionary Algorithms Koen van der Blom, Sjonnie Boonstra, Hèrm Hofmeyer and Michael T. M. Emmerich This paper proposes a first step towards multidisciplinary design of building spatial designs. Two criteria, total surface area (i.e. energy performance) and compliance (i.e. structural performance), are combined in a multicriteria optimisation framework. A new way of representing building spatial designs in a mixed integer parameter space is used within this framework. Two state-of-the-art algorithms, namely NSGA-II and SMS-EMOA, are used and compared to compute Pareto front approximations for problems of different size. Moreover, the paper discusses domain-specific search operators, which are compared to generic operators, and techniques to handle constraints within the mutation. The results give first insights into the trade-off between energy and structural performance and the scalability of the approach.

Comparing Asynchronous and Synchronous Parallelization of the SMS-EMOA Simon Wessing, Günter Rudolph and Dino Menges We experimentally compare synchronous and asynchronous parallelization of the SMS-EMOA. We find that asynchronous parallelization usually obtains a better speed-up and is more robust to fluctuations in the evaluation time of the objective functions. At the same time, the solution quality of both methods degrades only slightly compared to the sequential variant. We even consider it possible for the parallelization to improve the quality of the solution set on some multimodal problems.

Rapid Phenotypic Landscape Exploration through Hierarchical Spatial Partitioning Davy Smith, Laurissa Tokarchuk and Geraint Wiggins Exploration of the search space through the optimisation of phenotypic diversity is of increasing interest within the field of evolutionary robotics. Novelty search and the more recent MAP-Elites are two state-of-the-art evolutionary algorithms which diversify low dimensional phenotypic traits for divergent exploration. In this paper we introduce a novel alternative for rapid divergent search of the feature space. Unlike previous phenotypic search procedures, our proposed Spatial, Hierarchical, Illuminated Neuro-Evolution (SHINE) algorithm utilises a tree structure for the maintenance and selection of potential candidates. SHINE penalises previous solutions in more crowded areas of the landscape. Our experimental results show that SHINE significantly outperforms novelty search and MAP-Elites in both performance and exploration. We conclude that the SHINE algorithm is a viable method for rapid divergent search of low dimensional, phenotypic landscapes.

Community structure detection for the functional connectivity networks of the brain Rodica Ioana Lung, Mihai Suciu, Regina Meszlényi, Krisztian Buza and Noémi Gaskó The community structure detection problem in weighted networks is tackled with a new approach based on game theory and extremal optimization, called Weighted Nash Extremal Optimization. This method approximates the Nash equilibria of a game in which nodes, as players, choose their community by maximizing their payoffs. After performing numerical experiments on synthetic networks, the new method is used to analyze functional connectivity networks of the brain in order to reveal possible connections between different brain regions. Results show that the proposed approach may be used to find biomedically relevant knowledge about brain functionality.

A Fitness Cloud Model for Adaptive Metaheuristic Selection Methods Christopher Jankee, Sébastien Verel, Bilel Derbel and Cyril Fonlupt Designing portfolio adaptive selection strategies is a promising approach to gain generality when tackling a given optimization problem. However, we still lack much understanding of what makes a strategy effective, even if different benchmarks have already been designed for this issue. In this paper, we propose a new model based on the fitness cloud that allows us to provide theoretical and empirical insights on when an on-line adaptive strategy can be beneficial to the search. In particular, we investigate the relative performance and behavior of two representative and commonly used selection strategies with respect to static (off-line) and purely random approaches, in a simple yet realistic setting of the proposed model.

Data Classification using Carbon-Nanotubes and Evolutionary Algorithms E. Vissol-Gaudin, A. Kotsialos, M. K. Massey, D. A. Zeze, C. Pearson, C. Groves and M. C. Petty The potential of Evolution in Materio (EiM) for machine learning problems is explored here. This technique makes use of evolutionary algorithms (EAs) to influence the processing abilities of an unconfigured physically rich medium, via exploitation of its physical properties. For the first time, results of EiM using particle swarm optimisation (PSO) and differential evolution (DE) to exploit the complex voltage/current relationship of a mixture of single-walled carbon nanotubes (SWCNTs) and liquid crystals (LCs) are reported. The computational problem considered is simple binary data classification. Results presented are consistent and

reproducible. The evolution process based on EAs has the capacity to evolve the material to a state where data classification can be performed. Finally, it appears that through a sinusoidal exploration of the search space, PSO produces good classifiers out of the SWCNT/LC substrate.

Session 4

Hierarchical Knowledge in Self-Improving Grammar Based Genetic Programming Pak-Kan Wong, Man-Leung Wong and Kwong-Sak Leung The structure of a grammar can influence how well a Grammar-Based Genetic Programming system solves a given problem, but it is not obvious how to design the structure of a grammar, especially when the problem is large. In this paper, our proposed Bayesian Grammar-Based Genetic Programming with Hierarchical Learning (BGBGP-HL) examines the grammar and builds new rules on the existing grammar structure during evolution. Once our system successfully finds good solution(s), the adapted grammar will provide a grammar-based probabilistic model for the generation process of optimal solution(s). Moreover, our system can automatically discover new hierarchical knowledge (i.e. how the rules are structurally combined) which is composed of multiple production rules in the original grammar. Based on our case study using the deceptive royal tree problem, our evaluation shows that BGBGP-HL achieves the best performance among the competitors while being capable of composing hierarchical knowledge. Compared to other algorithms, the search performance of BGBGP-HL is shown to be more robust against deceptiveness and complexity of the problem.

Speciated Evolutionary Algorithm for Dynamic Constrained Optimisation Xiaofen Lu, Ke Tang and Xin Yao Dynamic constrained optimisation problems (DCOPs) have specific characteristics that do not exist in dynamic optimisation problems without constraints or with bounded constraints. This poses difficulties for some existing dynamic optimisation strategies. The maintaining/introducing diversity approaches might become less effective due to the presence of infeasible areas, and thus might not handle well the switching of global optima between disconnected feasible regions. This paper proposes a speciation-based approach to overcome this, which utilizes deterministic crowding to maintain diversity, assortative mating and local search to promote exploitation, as well as feasibility rules to deal with constraints. The experimental studies demonstrate that the new method is overall superior to existing algorithms on a benchmark set of DCOPs.

A Decomposition-based Approach for Solving Large Scale Multi-objective Problems Luis Miguel Antonio and Carlos A. Coello Coello Decomposition is a well-established mathematical programming technique for dealing with multi-objective optimization problems (MOPs), which has been found to be efficient and effective when coupled with evolutionary algorithms, as evidenced by MOEA/D. MOEA/D decomposes a MOP into several single-objective subproblems by means of well-defined scalarizing functions. It has been shown that MOEA/D is able to generate a set of evenly distributed solutions for different multi-objective benchmark functions with a relatively low number of decision variables (usually no more than 30). In this work, we study the effect of scalability in MOEA/D and show how its efficacy decreases as the number of decision variables of the MOP increases. We also investigate the improvements that MOEA/D can achieve when combined with coevolutionary techniques, giving rise to a novel MOEA which decomposes the MOP in both objective and decision variable space. This new algorithm is capable of optimizing large scale MOPs and outperforms MOEA/D and GDE3 when solving problems with a large number of decision variables (from 200 up to 1200).

Feature Based Algorithm Configuration: a Case Study with Differential Evolution Nacim Belkhir, Johann Dréo, Pierre Savéant and Marc Schoenauer Algorithm configuration remains an intricate problem, especially in the continuous black-box optimization domain. This paper empirically investigates the relationship between continuous problem features (measuring different problem characteristics) and the best parameter configuration of a given stochastic algorithm over a bench of test functions, here the original version of Differential Evolution on the BBOB test bench. This is achieved by learning an empirical performance model from the problem features and the algorithm parameters. This performance model can then be used to compute an empirically optimal parameter configuration from feature values. The results show that reasonable performance models can indeed be learned, resulting in better parameter configurations than a static parameter setting optimized for robustness over the test bench.

Fixed-Parameter Single Objective Search Heuristics for Minimum Vertex Cover Wanru Gao, Tobias Friedrich and Frank Neumann We consider how well-known branching approaches for the classical minimum vertex cover problem can be turned into randomized initialization strategies with provable performance guarantees, and investigate them experimentally. Furthermore, we show how these techniques can be built into local search components and analyze a basic local search variant that is similar to a state-of-the-art approach called NuMVC. Our experimental results for the two local search approaches show that making use of more complex branching strategies in the local search component can lead to better results on various benchmark graphs.
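The classical branching scheme behind such initializations can be sketched as follows (a hedged illustration; the function name and the coin-flip policy are ours, not necessarily the paper's exact strategy): at each step pick a maximum-degree vertex v and randomly add either v itself or all of v's neighbours to the cover.

```python
import random

def branching_cover(adj, rng):
    """Randomized initialization in the spirit of vertex-cover branching:
    repeatedly pick a maximum-degree vertex v and take either v or N(v)."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    cover = set()
    while any(adj.values()):                      # while some edge remains
        v = max(adj, key=lambda u: len(adj[u]))
        chosen = {v} if rng.random() < 0.5 else set(adj[v])
        cover |= chosen
        for u in chosen:                          # delete chosen vertices' edges
            for w in adj[u]:
                adj[w].discard(u)
            adj[u] = set()
    return cover

# 5-cycle: every edge must end up covered, whatever the coin flips
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
adj = {v: set() for v in range(5)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
cover = branching_cover(adj, random.Random(1))
assert all(a in cover or b in cover for a, b in edges)
```

Either branch removes all edges incident to the chosen vertices, so the loop always terminates with a valid cover.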

What does the Evolution Path Approximate in CMA-ES? Zhenhua Li and Qingfu Zhang The covariance matrix adaptation evolution strategy (CMA-ES) evolves a multivariate Gaussian distribution for continuous optimization. The evolution path, which accumulates historical search directions over successive generations, plays a crucial role in the adaptation of the covariance matrix. In this paper, we investigate what the evolution path approximates in the optimization procedure. We show that the evolution path accumulates the natural gradient with respect to the distribution mean, and works as a momentum term under stationary conditions. The experimental results suggest that it approximates a scale direction expanded by singular values along corresponding eigenvectors of the inverse Hessian. Further, we show that the outer product of the evolution path serves as a rank-1 momentum term for the covariance matrix.
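The accumulation the abstract refers to is the standard CMA-ES evolution-path update, which in textbook form (our sketch; the constants c_c and mu_eff below are illustrative) reads p ← (1 − c_c)·p + sqrt(c_c·(2 − c_c)·mu_eff)·(m_new − m_old)/sigma, with the outer product p·pᵀ entering the covariance as a rank-1 term:

```python
import numpy as np

def update_path(p_c, m_new, m_old, sigma, c_c, mu_eff):
    """Exponentially fading accumulation of normalized mean shifts."""
    y = (m_new - m_old) / sigma
    return (1 - c_c) * p_c + np.sqrt(c_c * (2 - c_c) * mu_eff) * y

def rank_one_update(C, p_c, c_1):
    """The outer product of the path acts as a rank-1 momentum term on C."""
    return (1 - c_1) * C + c_1 * np.outer(p_c, p_c)

p = update_path(np.zeros(2), np.array([1.0, 0.0]), np.zeros(2),
                sigma=1.0, c_c=0.5, mu_eff=1.0)
# first step from p = 0: p = sqrt(c_c * (2 - c_c) * mu_eff) * y
```

Repeated updates with a consistent mean shift let the path grow, so the rank-1 term stretches the covariance along the accumulated direction.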

Improving Efficiency of Bi-level Worst Case Optimization Ke Lu, Juergen Branke and Tapabrata Ray We consider the problem of identifying the trade-off between tolerance level and worst case performance, for a problem where the decision variables may be disturbed within a set tolerance level. This is a special case of a bi-level optimization problem. In general, bi-level optimization problems are computationally very expensive, because a lower level optimizer is called to evaluate each solution on the upper level. In this paper, we propose and compare several strategies to reduce the number of fitness evaluations without substantially compromising the final solution quality.

A General-Purpose Framework for Genetic Improvement Francesco Marino, Giovanni Squillero and Alberto Tonda Genetic Improvement is an evolutionary-based technique. Despite its relatively recent introduction, several successful applications have already been reported in the scientific literature: it has been demonstrated able to modify the code of complex programs without altering their intended behavior, and to increase performance with regard to speed, energy consumption or memory use. Some results suggest that it could also be used to correct bugs, restoring the software’s intended functionality. Given the novelty of the technique, however, instances of Genetic Improvement so far rely upon ad-hoc, language-specific implementations. In this paper, we propose a general framework, based on the software-engineering idea of mutation testing coupled with Genetic Programming, that can be easily adapted to different programming languages and

objectives. In a preliminary evaluation, the framework efficiently optimizes the code of the md5 hash function in C, Java, and Python.

The Multiple Insertion Pyramid: A Fast Parameter-less Population Scheme Willem den Besten, Dirk Thierens and Peter A.N. Bosman The Parameter-less Population Pyramid (P3) uses a novel population scheme, called the population pyramid. This population scheme does not require a user to set a fixed population size; instead, it keeps adding new solutions to an ever-growing set of layered populations. P3 is very efficient in terms of the number of fitness function evaluations, but its run-time is significantly higher than that of the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA), which uses the same method of exploration. This higher run-time is caused by the need to rebuild the linkage tree every time a single new solution is added to the population pyramid. We propose a new population scheme, called the multiple insertion pyramid, that results in a faster variant of P3 by inserting multiple solutions at the same time and operating on populations instead of single solutions.

An Evolutionary Framework for Replicating Neurophysiological Data with Spiking Neural Networks Emily L. Rounds, Eric O. Scott, Andrew S. Alexander, Kenneth A. De Jong, Douglas A. Nitz and Jeffrey L. Krichmar Here we present a framework for the automatic tuning of spiking neural networks (SNNs) that utilizes an evolutionary algorithm featuring indirect encoding to achieve a drastic reduction in the dimensionality of the parameter space, combined with a GPU-accelerated SNN simulator that results in a considerable decrease in the time needed for fitness evaluation, despite the need for both a training and a testing phase. We tuned the parameters governing a learning rule called spike-timing-dependent plasticity (STDP), which was used to alter the synaptic weights of the network. We validated this framework by applying it to a case study in which synthetic neuronal firing rates were matched to electrophysiologically recorded neuronal firing rates in order to evolve network functionality. Our framework was not only able to match their firing rates, but also captured functional and behavioral aspects of the biological neuronal population, in roughly 50 generations.

A Cross-Platform Assessment of Energy Consumption in Evolutionary Algorithms. Towards Energy-Aware Bioinspired Algorithms Francisco Chavez de La O, Francisco Fernandez de Vega, Josefa Diaz Alvarez, Juan Angel Garcia, Pedro Angel Castillo Valdivieso, Juan Julián Merelo Guervós and Carlos Cotta Porras. Energy consumption is a matter of paramount importance in today's environmentally conscious society. It is also bound to be a crucial issue in light of the emerging computational environments arising from the pervasive use of networked handheld devices and wearables. Evolutionary algorithms (EAs) are ideally suited to this kind of environment due to their intrinsic flexibility and adaptiveness, provided they operate on viable energy terms. In this work we analyze the energy requirements of EAs, and particularly one of their main flavours, genetic programming (GP), on several computational platforms, and study the impact that parametrisation has on these requirements, paving the way for a future generation of energy-aware EAs. As experimentally demonstrated, handheld devices and tiny computer models mainly used for educational purposes may be the most energy-efficient ones when looking for solutions by means of EAs.

Coarse-Grained Barrier Trees of Fitness Landscapes Sebastian Herrmann, Gabriela Ochoa and Franz Rothlauf Recent literature suggests that local optima in fitness landscapes are clustered, which offers an explanation of why perturbation-based metaheuristics often fail to find the global optimum: they become trapped in a sub-optimal cluster. We introduce a method to extract and visualize the global organization of these clusters in the form of a barrier tree. Barrier trees have been used to visualize the barriers between local optima basins in fitness landscapes. Our method computes a more coarse-grained tree to reveal the barriers between clusters of local optima. The core element is a new variant of the flooding algorithm, applicable to local optima networks. We use local optima networks as a compressed representation of fitness landscapes. To identify

the clusters, we apply a community detection algorithm. A sample of 200 NK fitness landscapes suggests that the depth of their coarse-grained barrier tree is related to their search difficulty for perturbation-based metaheuristics.

Understanding Environmental Influence in an Open-Ended Evolutionary Algorithm Andreas Steyven, Emma Hart and Ben Paechter It is well known that in open-ended evolution, the nature of the environment plays a key role in directing evolution. However, in Evolutionary Robotics, it is often unclear exactly how the parameterisation of a given environment might influence the emergence of particular behaviours. We consider environments in which the total amount of energy is parameterised by availability and value, and use surface plots to explore the relationship between those environment parameters and emergent behaviour using a variant of a well-known distributed evolutionary algorithm (mEDEA). Analysis of the resulting landscapes shows that it is crucial for a researcher to select appropriate parameterisations so that the environment provides the right balance between facilitating survival and exerting sufficient pressure for new behaviours to emerge. To the best of our knowledge, this is the first time such an analysis has been undertaken.

A Hybrid Autoencoder and Density Estimation Model for Anomaly Detection Van Loi Cao, Miguel Nicolau and James McDermott A novel one-class learning approach is proposed for network anomaly detection based on an autoencoder and kernel density estimation. An autoencoder attempts to reproduce the input data in the output layer while minimising the reconstruction error. The smaller hidden layer becomes a bottleneck, forming a compressed representation of the data. In a typical approach to one-class learning, the autoencoder is trained on normal data only. After successful training, the reconstruction error on normal data will be low, while the error on anomalous data may be high; reconstruction error above a threshold indicates an anomaly. However, the compressed hidden-layer representation presents an interesting alternative: we propose to take low density in the hidden layer as indicating an anomaly. We study two possibilities: a single Gaussian, and full kernel density estimation. The methods are tested on datasets including some from the domain of network security. Experiments show that the proposed methods outperform the best known results on several datasets.
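The single-Gaussian variant of the hidden-layer density idea can be sketched as follows (a minimal numpy illustration; the synthetic codes stand in for trained autoencoder activations, and the 1%-percentile threshold is our choice, not the paper's):

```python
import numpy as np

def fit_gaussian(codes):
    """Fit a single Gaussian to the hidden-layer codes of normal data."""
    mu = codes.mean(axis=0)
    cov = np.cov(codes, rowvar=False) + 1e-6 * np.eye(codes.shape[1])
    return mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

def log_density(x, mu, inv, logdet):
    """Gaussian log-density; low values indicate a likely anomaly."""
    d = x - mu
    return -0.5 * (d @ inv @ d + logdet + len(mu) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
normal_codes = rng.normal(0.0, 0.3, size=(500, 2))   # codes of normal traffic
model = fit_gaussian(normal_codes)
scores = np.array([log_density(x, *model) for x in normal_codes])
threshold = np.percentile(scores, 1)                 # ~1% false-alarm rate
assert log_density(np.array([5.0, 5.0]), *model) < threshold   # flagged
assert log_density(np.array([0.0, 0.0]), *model) > threshold   # normal
```

The full-KDE variant replaces the single Gaussian by a kernel density estimate over all training codes; the thresholding logic stays the same.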

Session 5

Parallel Hierarchical Evolution of String Library Functions Jacob Soderlund, Darwin Vickers and Alan Blair We introduce a parallel version of hierarchical evolutionary recombination (HERC) and use it to evolve programs for ten standard string processing tasks and a postfix calculator emulation task. Each processor maintains a separate evolutionary niche, with its own ladder of competing agents and codebank of potential mates. Further enhancements include evolution of multi-cell programs and incremental learning with reshuffling of data. We find the success rate is improved by transgenic evolution, where solutions to earlier tasks are recombined to solve later tasks. Sharing of genetic material between niches seems to improve performance for the postfix task, but for some of the string processing tasks it can increase the risk of premature convergence.

Variable Interaction in Multi-Objective Optimization Problems Ke Li, Mohammad Omidvar, Kalyanmoy Deb and Xin Yao Variable interaction is an important aspect of a problem: it reflects the problem's structure and has implications for the design of efficient optimization algorithms. Although variable interaction has been widely studied in the global optimization community, it has rarely been explored in the multi-objective optimization literature. In

this paper, we empirically and analytically study the variable interaction structures of some popular multi-objective benchmark problems. Our study uncovers nontrivial variable interaction structures for the ZDT and DTLZ benchmark problems, which were thought to be either separable or non-separable.

Graceful Scaling on Uniform versus Steep-Tailed Noise Tobias Friedrich, Timo Kötzing, Martin S. Krejca and Andrew M. Sutton Recently, different evolutionary algorithms (EAs) have been analyzed in noisy environments. The most frequently used noise model for this was additive posterior noise (noise added after the fitness evaluation) drawn from a Gaussian distribution. In particular, for this setting it was shown that the (µ + 1)-EA on OneMax does not scale gracefully (higher noise cannot efficiently be compensated for by higher µ). In this paper we want to understand whether there is anything special about the Gaussian distribution that makes the (µ + 1)-EA not scale gracefully. We keep the setting of posterior noise, but look at other distributions. We see that for exponential tails the (µ + 1)-EA on OneMax also does not scale gracefully, for reasons similar to the Gaussian case. On the other hand, for uniform distributions (as well as other, similar distributions) the (µ + 1)-EA on OneMax does scale gracefully, indicating the importance of the noise model.
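The posterior-noise setting can be made concrete with a small sketch (ours, not the authors' experimental code): the exact OneMax value is computed first and a noise term is added afterwards, re-drawn at every comparison; selection in the (µ + 1)-EA then removes the noisily worst individual.

```python
import random

def onemax(x):
    return sum(x)

def mu_plus_one_step(pop, noise, rng):
    """One (mu+1)-EA iteration under posterior noise: noise is added to the
    fitness value after evaluation, then the (noisily) worst is removed."""
    n = len(pop[0])
    parent = rng.choice(pop)
    child = [b ^ (rng.random() < 1.0 / n) for b in parent]  # rate 1/n
    pop = pop + [child]
    pop.sort(key=lambda x: onemax(x) + noise(rng))  # noise re-drawn each step
    return pop[1:]                                  # drop the noisy worst

gaussian  = lambda rng: rng.gauss(0.0, 2.0)      # steep-tailed posterior noise
uniform   = lambda rng: rng.uniform(-2.0, 2.0)   # bounded posterior noise
noiseless = lambda rng: 0.0

rng = random.Random(42)
pop = [[0] * 10 for _ in range(3)]
for _ in range(200):
    pop = mu_plus_one_step(pop, noiseless, rng)
```

Swapping `noiseless` for `gaussian` or `uniform` reproduces the two regimes the abstract contrasts; under noise the sort order, and hence survival, becomes unreliable.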

An Active Set Evolution Strategy for Optimization with Known Constraints Dirk Arnold We propose an evolutionary approach to constrained optimization where the objective function is considered a black box, but the constraint functions are assumed to be known. The approach can be considered a stochastic active set method. It labels constraints as either active or inactive and projects candidate solutions onto the subspace of the feasible region that is implied by the equality and active inequality constraints. We implement the approach in the context of a (1+1)-ES and evaluate its performance using a commonly employed set of test problems.

An Extension of Algebraic Differential Evolution for the Linear Ordering Problem with Cumulative Costs Marco Baioletti, Alfredo Milani and Valentino Santucci In this paper we propose an extension to the algebraic differential evolution approach for permutation-based problems (DEP). In contrast to classical differential evolution, DEP is fully combinatorial, and it is extended here in two directions: new generating sets based on exchange and insertion moves are considered, and the case F > 1 is now allowed for the differential mutation operator. Moreover, the crossover and selection operators of the original DEP have been modified in order to address the linear ordering problem with cumulative costs (LOPCC). The new DEP schemes are compared with the state-of-the-art LOPCC algorithms using a widely adopted benchmark suite. The experimental results show that DEP reaches competitive performance and, most remarkably, finds 21 new best-known solutions on the 50 largest LOPCC instances.

Tunnelling Crossover Networks for the Asymmetric TSP Nadarajen Veerapen, Gabriela Ochoa, Renato Tinós and Darrell Whitley Local optima networks are a compact representation of fitness landscapes that can be used for analysis and visualisation. This paper provides the first analysis of the Asymmetric Travelling Salesman Problem using local optima networks. These are generated by sampling the search space while recording the progress of an existing evolutionary algorithm based on the Generalised Asymmetric Partition Crossover. They are compared to networks sampled through the Chained Lin-Kernighan heuristic across 25 instances. Structural differences and similarities are identified, as well as examples where crossover smooths the landscape.

Exploring Uncertainty and Movement in Categorical Perception using Robots Nathaniel Powell and Josh Bongard Cognitive agents are able to perform categorical perception through physical interaction (active categorical perception; ACP), or passively at a distance (distal categorical perception; DCP). It is possible that the former

scaffolds the learning of the latter. However, it is unclear whether ACP does indeed scaffold DCP in humans and animals, or how a robot could be trained to likewise learn DCP from ACP. Here we demonstrate a method for doing so which involves uncertainty: robots are trained to perform ACP when uncertain and DCP when certain. Furthermore, we demonstrate that robots trained in such a manner are more competent at categorizing novel objects than robots trained to categorize in other ways. This suggests that such a mechanism would also be useful for humans and animals, and that they may be employing some version of it.

On the Use of Semantics in Multi-Objective Genetic Programming Edgar Galván-López, Efrén Mezura-Montes, Ouassim Ait Elhara and Marc Schoenauer Research on semantics in Genetic Programming (GP) has increased dramatically over recent years. Results in this area clearly indicate that its use in GP considerably increases performance. Motivated by these results, this paper investigates its adoption in Multi-Objective GP based on the well-known NSGA-II. To this end, we propose two forms of incorporating semantics into a MOGP system. Results on challenging (highly) unbalanced binary classification tasks indicate that the adoption of semantics in MOGP is beneficial, in particular when a semantic distance is incorporated into the core of the NSGA-II algorithm.

WS Network Design Problem with Nonlinear Pricing Solved by Hybrid Algorithm Dušan Hrabec, Pavel Popela and Jan Roupec The aim of the paper is to introduce a wait-and-see (WS) reformulation of a transportation network design problem with stochastic price-dependent demands. The demand is defined by a hyperbolic dependency whose parameters are modeled by random variables. The WS reformulation of the mixed integer nonlinear program (MINLP) is then proposed. The obtained separable scenario-based model can be repeatedly solved as a finite set of MINLPs by means of integer programming techniques or heuristics. However, the authors combine a traditional optimization algorithm and a suitable genetic algorithm to obtain a hybrid algorithm that is modified for the WS case. Its implementation and test results, illustrated by figures, are also discussed in the paper.

A Study of the Performance of Self-* Memetic Algorithms on Heterogeneous Ephemeral Environments Rafael Nogueras and Carlos Cotta We consider the deployment of island-based memetic algorithms (MAs) endowed with self-* properties on unstable computational environments composed of a collection of computing nodes whose availability fluctuates. In this context, these properties refer to the ability of the MA to work autonomously in order to optimize its performance and to react to the instability of computational resources. The main focus of this work is analyzing the performance of such MAs when the underlying computational substrate is not only volatile but also heterogeneous in terms of the computational power of its constituent nodes. We use for this purpose a simulated environment subject to different volatility rates, whose topology is modeled as scale-free networks and whose computing power is distributed among nodes following different distributions. We observe that, in general, computational homogeneity is preferable in scenarios with low instability; in cases of high instability, MAs without self-scaling and self-healing perform better when the computational power follows a power law, but performance seems to be less sensitive to the distribution when these self-* properties are used.

evoVision3D: A Multiscale Visualization of Evolutionary Histories Justin Kelly and Christian Jacob Evolutionary computation is a field defined by large data sets and complex relationships. Because of this complexity it can be difficult to identify trends and patterns that could help improve future projects and drive experimentation. To address this we present evoVision3D, a multiscale 3D system designed to take data sets from evolutionary design experiments and visualize them in order to assist in their inspection and analysis. Our system is implemented in the Unity 3D game development environment, which we show lends itself to immersive navigation through large data sets, even beyond evolution-based search and interactive data exploration.

Example Landscapes to Support Analysis of Multimodal Optimisation Thomas Jansen and Christine Zarges Theoretical analysis of all kinds of randomised search heuristics has been, and continues to be, supported and facilitated by the use of simple example functions that help us understand the working principles of complicated heuristic algorithms in simple situations thought to be representative of landscapes that these heuristics might encounter in practice. While this has been very successful in the past for optimisation in unimodal landscapes, there is a need for generally accepted, useful, simple example functions for situations where unimodal objective functions are insufficient: multimodal optimisation and the investigation of diversity-preserving mechanisms are examples. A family of example landscapes is defined that comes with a limited number of parameters that allow one to control important features of the landscape while still remaining simple in some sense. Different expressions of these landscapes are presented and fundamental properties are explored.

A Parallel Version of SMS-EMOA for Many-Objective Optimization Problems Raquel Hernández, Carlos A. Coello Coello and Enrique Alba In the last decade, there has been a growing interest in multi-objective evolutionary algorithms that use performance indicators to guide the search. A simple and effective one is the S-Metric Selection Evolutionary Multi-Objective Algorithm (SMS-EMOA), which is based on the hypervolume indicator. Even though the maximization of the hypervolume is equivalent to achieving Pareto optimality, its computational cost increases exponentially with the number of objectives, which severely limits its applicability to many-objective optimization problems. In this paper, we present a parallel version of SMS-EMOA, where execution time is reduced through an asynchronous island model with micro-populations, and diversity is preserved by external archives that are pruned to a fixed size employing a recently created technique based on the Parallel-Coordinates graph. The proposed approach, called S-PAMICRO (PArallel MICRo Optimizer based on the S metric), is compared to the original SMS-EMOA and another state-of-the-art algorithm (HypE) on the WFG test problems using up to 10 objectives. Our experimental results show that S-PAMICRO is a promising alternative that can solve many-objective optimization problems at an affordable computational cost.

Self-adaptation of Mutation Rates in Non-elitist Populations Duc-Cuong Dang and Per Kristian Lehre The runtime of evolutionary algorithms (EAs) depends critically on their parameter settings, which are often problem-specific. Automated schemes for parameter tuning have been developed to alleviate the high costs of manual parameter tuning. Experimental results indicate that self-adaptation, where parameter settings are encoded in the genomes of individuals, can be effective in continuous optimisation. However, results in discrete optimisation have been less conclusive. Furthermore, a rigorous runtime analysis that explains how self-adaptation can lead to asymptotic speedups has been missing. This paper provides the first such analysis for discrete, population-based EAs. We describe how a self-adaptive EA is capable of fine-tuning its mutation rate, leading to exponential speedups over EAs using fixed mutation rates.
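The genome-encoded mutation rate described above can be sketched as follows (our illustration; the update factor A and the clamping interval are assumptions, not the paper's exact scheme): the rate travels with the individual, is itself perturbed first, and the new rate is then used to mutate the bit string.

```python
import random

def self_adaptive_offspring(bits, rate, n, A=2.0, rng=random):
    """Self-adaptation sketch: the mutation rate is part of the genome.
    It is multiplied or divided by A (assumed constant) before being
    applied, so successful rates hitch-hike with successful genomes."""
    rate = rate * A if rng.random() < 0.5 else rate / A
    rate = min(0.5, max(1.0 / n, rate))      # keep the rate in [1/n, 1/2]
    child = [b ^ (rng.random() < rate) for b in bits]
    return child, rate

rng = random.Random(7)
bits, rate = [0] * 20, 1.0 / 20
for _ in range(100):
    child, rate = self_adaptive_offspring(bits, rate, 20, rng=rng)
    assert 1.0 / 20 <= rate <= 0.5
```

In a non-elitist population, selection acting on the fitness of the bit strings indirectly selects among the attached rates, which is the mechanism the runtime analysis makes rigorous.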

Session 6

Tournament Selection based on Statistical Test in Genetic Programming Thi Huong Chu, Quang Uy Nguyen and Michael O’Neill Selection plays a critical role in the performance of evolutionary algorithms. Tournament selection is often considered the most popular among the various selection methods. Standard tournament selection randomly selects several individuals from the population, and the individual with the best fitness value is chosen as the winner. In the context of Genetic Programming, this approach ignores the error values on the individual fitness cases of the problem, emphasising relative fitness quality rather than detailed quantitative comparison. Subsequently, potentially useful information from the error vector may be lost. In this paper, we introduce

the use of a statistical test in selection that utilizes information from the individual’s error vector. Two variants of tournament selection are proposed, and tested in Genetic Programming on symbolic regression problems. On the benchmark problems examined we observe a benefit of the proposed methods in reducing code growth and generalisation error.
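One way to realise a statistical test inside tournament selection is a paired sign test on the two error vectors (a hedged sketch; the paper's actual test and fallback rule may differ): if the per-case differences are significant, the case-wise winner is selected, otherwise selection falls back to aggregate error as in a plain tournament.

```python
from math import comb

def sign_test_p(err_a, err_b):
    """Two-sided paired sign test over per-case errors (ties dropped)."""
    wins = sum(a < b for a, b in zip(err_a, err_b))
    losses = sum(a > b for a, b in zip(err_a, err_b))
    m = wins + losses
    if m == 0:
        return 1.0
    k = min(wins, losses)
    tail = sum(comb(m, i) for i in range(k + 1)) / 2 ** m
    return min(1.0, 2.0 * tail)

def tournament(err_a, err_b, alpha=0.05):
    """Prefer the statistically better error vector, else aggregate error."""
    wins = sum(a < b for a, b in zip(err_a, err_b))
    losses = sum(a > b for a, b in zip(err_a, err_b))
    if sign_test_p(err_a, err_b) < alpha:
        return "a" if wins > losses else "b"
    return "a" if sum(err_a) <= sum(err_b) else "b"

# a beats b on every one of 8 cases: p = 2 * (1/2)**8 = 0.0078125
assert abs(sign_test_p([0.0] * 8, [1.0] * 8) - 0.0078125) < 1e-12
assert tournament([0.0] * 8, [1.0] * 8) == "a"
```

The sign test uses only win/loss information per fitness case, which is exactly the error-vector information that standard tournament selection discards.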

Multi-objective Local Search based on Decomposition Bilel Derbel, Arnaud Liefooghe, Qingfu Zhang, Hernan Aguirre and Kiyoshi Tanaka It is generally believed that local search (LS) should be used as a basic tool in multi-objective evolutionary computation for combinatorial optimization. However, not much effort has been made to investigate how to use LS efficiently in multi-objective evolutionary algorithms. In this paper, we study some issues in the use of cooperative scalarizing local search approaches for decomposition-based multi-objective combinatorial optimization. We propose and study multiple move strategies in the MOEA/D framework. Through extensive experiments on a new set of bi-objective traveling salesman problems with tunable correlated objectives, we analyze these policies with different MOEA/D parameters. Our empirical study sheds some insight on the impact of the LS move strategy on the anytime performance of the algorithm.

Provably Optimal Self-Adjusting Step Sizes for Multi-Valued Decision Variables Benjamin Doerr, Carola Doerr and Timo Kötzing We consider the problem of maximizing a OneMax-like function defined over an alphabet of size r. In previous work [GECCO 2016] we investigated how three different mutation operators influence the performance of Randomized Local Search (RLS) and the (1+1) Evolutionary Algorithm. That work revealed that among these natural mutation operators, none is superior to the other two for any choice of r. We also gave in [GECCO 2016] some indication that the best achievable run time for large r is Θ(n log r (log n + log r)), regardless of how the mutation operator is chosen, as long as it is a static choice (i.e., the distribution used for variation of the current individual does not change over time). In this work we show that we can achieve better performance if we allow for adaptive mutation operators. More precisely, we analyze the performance of RLS using a self-adjusting mutation strength. In this algorithm the size of the steps taken in each iteration depends on the success of previous iterations: the mutation strength is increased after a successful iteration and decreased otherwise. We show that this idea yields an expected optimization time of Θ(n(log n + log r)), which is optimal among all comparison-based search heuristics. This is the first time that self-adjusting parameter choices are shown to outperform static choices on a discrete multi-valued optimization problem.
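The success-based rule can be sketched on an r-valued distance-to-optimum function (our stand-in for the paper's setting; doubling on success and halving on failure is our concrete choice of increase/decrease rule):

```python
import random

def self_adjusting_rls(n, r, budget, rng):
    """RLS sketch with a self-adjusting step size: one coordinate is shifted
    by +-step each iteration; the step size doubles after a success and
    halves after a failure, so it tracks the distance still to cover."""
    x = [rng.randrange(r) for _ in range(n)]
    step, best = 1, sum(x)            # minimise distance to the all-zero point
    for _ in range(budget):
        i = rng.randrange(n)
        v = x[i] + (step if rng.random() < 0.5 else -step)
        v = min(r - 1, max(0, v))     # stay inside the alphabet {0, ..., r-1}
        cand = best - x[i] + v
        if cand < best:               # strict improvement: accept and grow
            x[i], best = v, cand
            step = min(r - 1, 2 * step)
        else:                         # failure: shrink the step size
            step = max(1, step // 2)
    return x, best

rng = random.Random(3)
x, best = self_adjusting_rls(n=5, r=64, budget=2000, rng=rng)
```

Large steps cover long distances early on, while the halving rule guarantees the algorithm can still make unit-size improvements near the optimum, which is the intuition behind the Θ(n(log n + log r)) bound.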

Evolving Cryptographic Pseudorandom Number Generators Stjepan Picek, Dominik Sisejkovic, Vladimir Rozic, Bohan Yang, Nele Mentens and Domagoj Jakobovic Random number generators (RNGs) play an important role in many real-world applications. Besides true hardware RNGs, one important class is that of deterministic random number generators. Such generators do not possess the unpredictability of true RNGs, but still have widespread usage. For a deterministic RNG to be used in cryptography, it needs to fulfill a number of conditions related to speed, security, and ease of implementation. In this paper, we investigate how to evolve deterministic RNGs with Cartesian Genetic Programming. Our results show that such evolved generators easily pass all randomness tests and are extremely fast and small in hardware.

Efficient Global Optimization with Indefinite Kernels Martin Zaefferer and Thomas Bartz-Beielstein Kernel-based surrogate models like Kriging are a popular approach to deal with costly objective function evaluations in optimization. Usually, kernels are required to be positive semi-definite (PSD). Highly customized kernels, or kernels for combinatorial representations, may be indefinite. This study shows how to deal with this issue in the context of Kriging. It is shown that approaches from the field of Support Vector Machines are useful starting points, but require additional modifications to work with Kriging. This study compares a broad selection of methods for dealing with indefinite kernels in Kriging and Kriging-based Efficient Global Optimization, including spectrum transformation, feature embedding and computation of the nearest definite matrix. Model quality as well as optimization performance with a Kriging-supported Genetic Algorithm are tested. The standard approach, without explicit correction, yields reasonable results, but spectrum transformations work best.

Towards Many-objective Optimisation with Hyper-heuristics: Identifying Good Heuristics with Indicators David Walker and Ed Keedwell The use of hyper-heuristics is increasing in the multi-objective optimisation domain, and the next logical advance in such methods is to use them in the solution of many-objective problems. Such problems comprise four or more objectives and are known to present a significant challenge to standard dominance-based evolutionary algorithms. We incorporate three comparison operators as alternatives to dominance and investigate their potential to optimise many-objective problems with a hyper-heuristic from the literature. We discover that the best results are obtained using either the favour relation or hypervolume, but conclude that changing the comparison operator alone will not allow the generation of estimated Pareto fronts that are both close to and fully cover the true Pareto front.

Simple Random Sampling Estimation of the Number of Local Optima Khulood Alyahya and Jonathan Rowe We evaluate the performance of estimating the number of local optima by estimating their proportion in the search space using simple random sampling. The performance of this method is compared against that of the jackknife method. The methods are used to estimate the number of optima in two landscapes of random instances of two combinatorial optimisation problems. Simple random sampling provides a cheap, unbiased and accurate estimate when the proportion is not exceedingly small. We discuss choices of confidence interval in the case of extremely small proportions. In such cases, the method provides an upper bound on the number of optima and can be combined with other methods to obtain a better lower bound. We suggest that simple random sampling should be the first choice for estimating the number of optima when no prior information is known about the landscape under study.
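The sampling estimator described here can be sketched in a few lines: sample points uniformly, record the fraction that are local optima, and scale by the size of the search space. The bit-string landscape below is a toy assumption used only to exercise the estimator:

```python
import random

def estimate_local_optima(sample_point, is_local_optimum, space_size,
                          n_samples=1000):
    """Unbiased count estimate: (fraction of sampled points that are
    local optima) times (total number of points in the search space)."""
    hits = sum(is_local_optimum(sample_point()) for _ in range(n_samples))
    return (hits / n_samples) * space_size

# Toy landscape: bit strings of length n with f(x) = number of ones;
# under 1-bit-flip neighbourhoods the only local optimum is all-ones.
n = 10
f = lambda x: sum(x)

def neighbours(x):
    for i in range(len(x)):
        y = list(x)
        y[i] ^= 1
        yield tuple(y)

is_opt = lambda x: all(f(y) <= f(x) for y in neighbours(x))
sample = lambda: tuple(random.randrange(2) for _ in range(n))
```

On this landscape the true proportion (1/1024) is exactly the "exceedingly small" regime the abstract warns about, where the estimate degrades into an upper bound.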

Reducing Dimensionality to Improve Search in Semantic Genetic Programming Luiz Otavio V. B. Oliveira, Luis Fernando Miranda, Gisele L. Pappa, Fernando E. B. Otero and Ricardo H. C. Takahashi Genetic programming approaches are moving from analysing the syntax of individual solutions to looking into their semantics. One of the common definitions of the semantic space in the context of symbolic regression is an n-dimensional space, where n corresponds to the number of training examples. In problems where this number is high, the search process can become harder as the number of dimensions increases. Geometric semantic genetic programming (GSGP) explores the semantic space by performing geometric semantic operations; the fitness landscape seen by GSGP is guaranteed to be conic by construction. Intuitively, a lower number of dimensions can make search more feasible in this scenario, decreasing the chances of data overfitting and reducing the number of evaluations required to find a suitable solution. This paper proposes two approaches for dimensionality reduction in GSGP: (i) applying current instance selection methods as a preprocessing step before the training points are given to GSGP; (ii) incorporating instance selection into the evolution of GSGP. Experiments on 15 datasets show that GSGP performance is improved by using instance reduction during the evolution.

A Novel Efficient Mutation for Evolutionary Design of Combinational Logic Circuits Francisco Manfrini, Heder Bernardino and Helio Barbosa In this paper we investigate evolutionary mechanisms and propose a new mutation operator for the evolutionary design of Combinational Logic Circuits (CLCs). Understanding the root causes of evolutionary success is critical to improving existing techniques. Our focus is two-fold: to analyze beneficial mutations in Cartesian Genetic Programming, and to create an efficient mutation operator for digital CLC design. In the experiments performed, the proposed mutation is better than or equivalent to the traditional mutation.

Lyapunov design of a simple step-size adaptation strategy based on success Claudia Correa, Elizabeth Wanner and Carlos Fonseca A simple success-based step-size adaptation rule for single-parent Evolution Strategies is formulated, and the setting of the corresponding parameters is considered. Theoretical convergence on the class of strictly unimodal functions of one variable that are symmetric around the optimum is investigated using a stochastic Lyapunov function method developed by Semenov and Terkel (2003) in the context of martingale theory. General expressions for the conditional expectations of the next values of step size and distance to the optimum under (1+,λ)-selection are analytically derived, and an appropriate Lyapunov function is constructed. Convergence rate upper bounds, as well as adaptation parameter values, are obtained through numerical optimization for growing values of λ. By selecting the number of offspring that minimizes the bound on the convergence rate with respect to the number of function evaluations, all strategy parameter values result from the analysis.
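A success-based step-size rule of the kind analyzed here can be sketched for a single-parent ES. The multiplicative constants and the elitist acceptance below are illustrative simplifications, not the tuned parameter values derived in the paper:

```python
import random

def success_based_es(f, x0, sigma0=1.0, lam=10, up=2.0, down=0.5,
                     iters=200):
    """Single-parent ES on a 1-D objective with a success-based
    step-size rule: if the best of lam offspring improves on the
    parent, the step size is multiplied by `up`, otherwise by `down`."""
    x, sigma = x0, sigma0
    fx = f(x)
    for _ in range(iters):
        offspring = [x + sigma * random.gauss(0, 1) for _ in range(lam)]
        best = min(offspring, key=f)
        if f(best) < fx:
            x, fx = best, f(best)
            sigma *= up      # success: be bolder
        else:
            sigma *= down    # failure: be more cautious
    return x, fx
```

On a strictly unimodal symmetric function such as a 1-D quadratic, the sequence of accepted points decreases monotonically in objective value, which is the behaviour the Lyapunov analysis bounds.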

Fast and Effective Multi-Objective Optimisation of Submerged Wave Energy Converters Didac Rodriguez Arbones, Nataliia Y. Sergiienko, Boyin Ding and Markus Wagner Despite its considerable potential, wave energy has not yet reached full commercial development. Currently, dozens of wave energy projects are exploring a variety of techniques to produce wave energy efficiently. A common design for a wave energy converter is called a buoy. The buoy typically floats on the surface or just below the surface of the water, and captures energy from the movement of the waves. In this article, we tackle the multi-objective variant of this problem: we take into account the highly complex interactions of the buoys, while optimising the energy yield, the necessary area, and the cable length needed to connect all buoys. We employ caching techniques and problem-specific variation operators to make this problem computationally feasible. This is the first time the interactions between wave energy resource and array configuration are studied in a multi-objective way.

Analysing the performance of migrating birds optimisation approaches for large scale continuous problems Eduardo Lalla-Ruiz, Eduardo Segredo, Stefan Voß, Emma Hart and Ben Paechter We present novel algorithmic schemes for dealing with large scale continuous problems. They are based on the recently proposed population-based meta-heuristics Migrating Birds Optimisation (MBO) and Multi-leader Migrating Birds Optimisation (MMBO), which have been shown to be effective for solving combinatorial problems. The main objective of the current paper is twofold. First, we introduce a novel neighbour generating operator based on Differential Evolution (DE) that produces new individuals in the continuous decision space starting from those belonging to the current population. Second, we evaluate the performance of MBO and MMBO by incorporating our novel operator into them. Hence, MBO and MMBO are enabled to solve continuous problems. Comparisons are carried out by applying both aforementioned schemes to a set of well-known large scale functions.

A Parallel Multi-Objective Memetic Algorithm based on the IGD+ Indicator Edgar Manoatl Lopez and Carlos Coello Coello The success of local search techniques in the solution of combinatorial optimization problems has motivated their incorporation into multi-objective evolutionary algorithms, giving rise to the so-called multi-objective memetic algorithms (MOMAs). The main advantage of adopting this sort of hybridization is to speed up convergence to the Pareto front. However, the use of MOMAs introduces new issues, such as how to select the solutions to which the local search will be applied and how long to run the local search engine (the use of such a local search engine has an extra computational cost). Here, we propose a new MOMA which switches between a hypervolume-based global optimizer and an IGD+-based local search engine. Our proposed local search engine adopts a novel clustering technique based on the IGD+ indicator for splitting the objective space into sub-regions. Since both computing the hypervolume and applying a local search engine are very costly procedures, we propose a GPU-based parallelization of our algorithm. Our preliminary results indicate that our MOMA is able to converge faster than SMS-EMOA to the true Pareto front of multi-objective problems having different degrees of difficulty.

Pareto inspired multi-objective rule fitness for adaptive rule-based machine learning Ryan Urbanowicz, Jason Moore and Randal Olson Learning classifier systems (LCSs) are rule-based evolutionary algorithms uniquely suited to classification and data mining in complex, multi-factorial, and heterogeneous problems. LCS rule fitness is commonly based on accuracy, but this metric alone is not ideal for assessing global rule ‘value’ in noisy problem domains, and thus impedes effective knowledge extraction. Multi-objective fitness functions are promising but rely on knowledge of how to weigh objective importance. Such prior knowledge is unavailable in most real-world problems. The Pareto-front concept offers a strategy for multi-objective machine learning that is agnostic to objective importance. We propose a Pareto-inspired multi-objective rule fitness (PIMORF) for LCS, and combine it with a complementary rule-compaction approach (SRC). We implemented these strategies in ExSTraCS, a successful supervised LCS, and evaluated performance over an array of complex simulated noisy and clean problems (i.e., genetic and multiplexer) that each concurrently model pure interaction effects and heterogeneity. While evaluation over multiple performance metrics yielded mixed results, this work represents an important first step towards efficiently learning complex problem spaces without the advantage of prior problem knowledge. Overall, the results suggest that PIMORF paired with SRC improved rule set interpretability, particularly with regard to heterogeneous patterns.

Session 7

On the Non-uniform Redundancy in Grammatical Evolution Ann Thorhauer The right choice of representation is crucial for the success of a heuristic search method. When using an indirect representation, a distinction between geno- and phenotype is made. Representations are redundant if more than one genotype represents the same phenotype. Redundant representations can influence the search process if the optimal solution is under- or overrepresented. This paper examines the redundancy of the representation in grammatical evolution (GE) for binary trees. In our experiments, we analyze the phenotype trees with respect to their characteristics: size, depth and shape. We show that the GE representation is strongly non-uniformly redundant, since there are huge differences in the number of genotypes that encode one particular phenotype. In addition, there are more distinct invalid trees than distinct valid trees.
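The non-uniform redundancy can be observed directly by sampling genotypes and counting how often each phenotype appears. The two-production toy grammar and the wrapping limit below are assumptions for illustration, much simpler than the grammars studied in the paper:

```python
import random
from collections import Counter

# Minimal GE-style genotype-to-phenotype mapping for the toy grammar
#   <t> ::= (<t> <t>) | x
# where each codon (mod 2) selects a production, with genome wrapping.
def map_tree(genome, max_wraps=2):
    pos, wraps = 0, 0
    def expand():
        nonlocal pos, wraps
        if pos >= len(genome):
            if wraps >= max_wraps:
                return None          # codons exhausted: invalid individual
            pos, wraps = 0, wraps + 1
        choice = genome[pos] % 2
        pos += 1
        if choice == 0:
            left, right = expand(), expand()
            if left is None or right is None:
                return None
            return (left, right)
        return 'x'
    return expand()

# Count how many random genotypes encode each phenotype tree.
random.seed(0)
counts = Counter()
for _ in range(5000):
    genome = [random.randrange(256) for _ in range(8)]
    counts[map_tree(genome)] += 1
```

Even in this toy setting the counts are strongly non-uniform: the single-leaf tree `'x'` is reached by roughly half of all genotypes, while larger trees are reached exponentially more rarely.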

An Evolutionary Hyper-heuristic for the Software Project Scheduling Problem Xiuli Wu, Pietro Consoli, Leandro Minku, Gabriela Ochoa and Xin Yao Software project scheduling plays an important role in reducing the cost and duration of software projects. It is an NP-hard combinatorial optimization problem that has been addressed with both single- and multi-objective algorithms. However, such algorithms have always used fixed genetic operators, and it is unclear which operators would be more appropriate across the search process. In this paper, we propose an evolutionary hyper-heuristic to solve the software project scheduling problem. Our novelty is the following: (1) this is the first work to adopt an evolutionary hyper-heuristic for the software project scheduling problem; (2) we design different credit assignment methods for the two types of operators: mutation and crossover; and (3) we use a sliding multi-armed bandit strategy to adaptively choose both crossover and mutation operators. The experimental results show that the proposed algorithm can solve the software project scheduling problem effectively.

Convergence Versus Diversity in Multiobjective Optimization Shouyong Jiang and Shengxiang Yang Convergence and diversity are two main goals in multiobjective optimization. In the literature, most existing multiobjective evolutionary algorithms (MOEAs) adopt a convergence-first-and-diversity-second environmental selection which prefers nondominated solutions to dominated ones, as is the case with the popular nondominated-sorting-based selection method. While convergence-first sorting has continuously shown effectiveness for handling a variety of problems, it faces challenges in maintaining population diversity well due to the overemphasis on convergence. In this paper, we propose a general diversity-first sorting method for multiobjective optimization. Based on the method, a new MOEA, called DBEA, is then introduced. DBEA is compared with the recently developed nondominated sorting genetic algorithm III (NSGA-III) on different problems. Experimental studies show that the diversity-first method has great potential for diversity maintenance and is very competitive for many-objective optimization.

Multi-Objective Selection of Algorithm Portfolios: Experimental Validation Daniel Horn, Karin Schork and Tobias Wagner The selection of algorithms to build portfolios represents a multi-objective problem. From a possibly large pool of algorithm candidates, a portfolio of limited size but good quality over a wide range of problems is desired. Each algorithm is represented by its Pareto front, which has been approximated in an a priori parameter tuning. Our approach for multi-objective selection of algorithm portfolios (MOSAP) is capable of trading off the number of algorithm candidates against the respective quality of the portfolio. The quality of the portfolio is defined as the distance to the joint Pareto front of all algorithm candidates, which is computed using established binary performance indicators. By means of a decision tree, the selection of the right algorithm based on the characteristics of the problem is also possible. In this paper, we propose a validation framework to analyze the performance of our MOSAP approach. This framework is based on a parametrized generator of the algorithm candidates' Pareto front shapes. We discuss how to sample a landscape of multiple Pareto fronts with predefined intersections. The validation is performed by calculating discrete approximations for different landscapes and assessing the effect of the landscape parameters, such as the number and distribution of the fronts, on the MOSAP approach.

On the Robustness of Evolving Populations Tobias Friedrich, Timo Kötzing and Andrew Sutton Most theoretical work that studies the benefit of recombination focuses on the ability of crossover to speed up optimization time on specific search problems. In this paper, we take a slightly different perspective and investigate recombination in the context of evolving solutions that exhibit mutational robustness, i.e., they display insensitivity to small perturbations. Various models in population genetics have demonstrated that increasing the effective recombination rate promotes the evolution of robustness. We show that this result also holds in the context of evolutionary computation by proving that crossover promotes the evolution of robust solutions in the standard (µ+1) GA. Surprisingly, our results show that this effect is still present even when robust solutions are at a selective disadvantage due to lower fitness values.

The Competing Travelling Salespersons Problem under Multi-criteria Erella Matalon-Eisenstadt, Amiram Moshaiov and Gideon Avigad This paper introduces a novel type of problem in which two travelling salespersons compete, where each of them has two conflicting objectives. This problem is categorized as a Multi-Objective Game (MOG). It is solved by a non-utility approach, which has recently been introduced. According to this method, all rationalizable strategies are initially found, to support an a posteriori decision on a strategy. An evolutionary algorithm is proposed to search for the set of rationalizable strategies. The applicability of the suggested algorithm is successfully demonstrated on the presented new type of problem.

Doubly Trained Evolution Control for the Surrogate CMA-ES Zbyněk Pitra, Lukáš Bajer and Martin Holeňa This paper presents a new variant of surrogate-model utilization in expensive continuous evolutionary black-box optimization. This algorithm is based on the surrogate version of the CMA-ES, the Surrogate Covariance Matrix Adaptation Evolution Strategy (S-CMA-ES). Similarly to the original S-CMA-ES, expensive function evaluations are saved through a surrogate model. However, the model is retrained after the points in which its prediction was most uncertain have been evaluated by the true fitness in each generation. We demonstrate that, within a small budget of evaluations, the new variant of S-CMA-ES improves on the original algorithm and outperforms two state-of-the-art surrogate optimizers, except for a few evaluations at the beginning of the optimization process.

Semantic Forward Propagation for Symbolic Regression Marcin Szubert, Anuradha Kodali, Sangram Ganguly, Kamalika Das and Joshua Bongard In recent years, a number of methods have been proposed that attempt to improve the performance of genetic programming by exploiting information about program semantics. One of the most important developments in this area is semantic backpropagation. The key idea of this method is to decompose a program into two parts, a subprogram and a context, and calculate the desired semantics of the subprogram that would make the entire program correct, assuming that the context remains unchanged. In this paper we introduce Forward Propagation Mutation, a novel operator that relies on the opposite assumption: instead of preserving the context, it retains the subprogram and attempts to place it in the semantically right context. We empirically compare the performance of semantic backpropagation and forward propagation operators on a set of symbolic regression benchmarks. The experimental results demonstrate that semantic forward propagation produces smaller programs that achieve significantly higher generalization performance.

Evolution of spiking neural networks robust to noise and damage for control of simple animats Borys Wrobel One of the central questions of biology is how complex biological systems can continue functioning in the presence of perturbations, damage, and mutational insults. This paper investigates the evolution of spiking neural networks consisting of adaptive-exponential neurons. The networks are encoded in linear genomes in a manner inspired by genetic networks. The networks control a simple animat, with two sensors and two actuators, searching for targets in a simple environment. The results show that the presence of noise on the membrane voltage during evolution allows for the evolution of efficient control and robustness to perturbations of the values of the neural parameters.

Anomaly Detection with the Voronoi Diagram Evolutionary Algorithm Luis Martí, Arsene Fansi, Laurent Navarro and Marc Schoenauer This paper presents the Voronoi diagram-based Artificial Immune System (VorAIS). VorAIS models the self/non-self using a Voronoi diagram that determines which areas of the problem domain correspond to self or non-self. The diagram is evolved using a multi-objective bio-inspired approach in order to jointly optimize classification metrics while also being able to represent areas of the data space that are not present in the training dataset. VorAIS is experimentally validated and contrasted with similar approaches.

Use of Piecewise Linear and Nonlinear Scalarizing Functions in MOEA/D Hisao Ishibuchi, Ken Doi and Yusuke Nojima A number of weight-vector-based algorithms have been proposed for many-objective optimization using the framework of MOEA/D (multi-objective evolutionary algorithm based on decomposition). Those algorithms are characterized by the use of uniformly distributed normalized weight vectors, which are also referred to as reference vectors, reference lines and search directions. Their common idea is to minimize the distance to the ideal point (i.e., convergence) and the distance to the reference line (i.e., uniformity). Each algorithm has its own mechanism for striking a convergence-uniformity balance. In the original MOEA/D with the PBI (penalty-based boundary intersection) function, this balance is handled by a penalty parameter. In this paper, we first discuss why an appropriate specification of the penalty parameter is difficult in the PBI function. Next, we suggest a desired shape for the contour lines of a scalarizing function in MOEA/D. Then we propose two ideas for modifying the PBI function. The proposed ideas generate piecewise linear and nonlinear contour lines. Finally, we examine the effectiveness of the proposed ideas on the performance of MOEA/D through computational experiments on many-objective test problems.
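The standard PBI scalarizing function that the penalty parameter controls can be written down compactly: d1 is the distance along the reference line from the ideal point and d2 the perpendicular distance to that line. This is a minimal sketch of the standard formulation, not the modified piecewise versions proposed in the paper:

```python
import math

def pbi(f, z, w, theta=5.0):
    """Penalty-based boundary intersection (PBI) value of objective
    vector f with ideal point z, weight (reference) vector w and
    penalty parameter theta: g = d1 + theta * d2."""
    diff = [fi - zi for fi, zi in zip(f, z)]
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    # d1: length of the projection of (f - z) onto the reference line
    d1 = abs(sum(di * wi for di, wi in zip(diff, w))) / norm_w
    # d2: perpendicular distance from (f - z) to the reference line
    proj = [d1 * wi / norm_w for wi in w]
    d2 = math.sqrt(sum((di - pi) ** 2 for di, pi in zip(diff, proj)))
    return d1 + theta * d2
```

Increasing theta penalizes deviation from the reference line more heavily (favouring uniformity), while theta near zero reduces PBI to a pure convergence measure; picking a single theta that balances both well is exactly the difficulty the abstract discusses.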

k-Bit Mutation with Self-Adjusting k Outperforms Standard Bit Mutation Benjamin Doerr, Carola Doerr and Jing Yang When using the classic standard bit mutation operator, parent and offspring differ in a random number of bits, distributed according to a binomial law. This has the advantage that all Hamming distances occur with some positive probability, hence this operator can be used, in principle, for all fitness landscapes. The downside of this “one-size-fits-all” approach, naturally, is a performance loss caused by the fact that the ideal number of bits is often not flipped. Still, the fear of getting stuck in local optima has made standard bit mutation the preferred mutation operator. In this work we show that a self-adjusting choice of the number of bits to be flipped can both avoid the performance loss of standard bit mutation and avoid the risk of getting stuck in local optima. We propose a simple mechanism to adaptively learn the currently optimal mutation strength from previous iterations. This exploits both the fact that different problems may need different mutation strengths and that, for a fixed problem, different strengths may become optimal in different stages of the optimization process. We experimentally show that our simple hill-climber with this adaptive mutation strength outperforms both the randomized local search heuristic and the (1+1) evolutionary algorithm on the LeadingOnes test function and on the minimum spanning tree problem. We show via mathematical means that our algorithm is able to detect precisely (apart from lower-order terms) the complicated optimal fitness-dependent mutation strength recently discovered for the OneMax test function. With its self-adjusting mutation strength it thus attains the same runtime (apart from o(n) lower-order terms) and the same (asymptotic) 13% fitness-distance improvement over RLS that was recently obtained by manually computing the optimal fitness-dependent mutation strength.
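A hill-climber with self-adjusting k can be sketched as below. The doubling/halving update is a simplified stand-in for the paper's learning mechanism, and the stopping condition assumes the maximum fitness equals n (true for OneMax and LeadingOnes):

```python
import random

def adaptive_k_bit_hill_climber(f, n, budget=50000, F=2.0):
    """(1+1)-style hill climber flipping exactly k distinct bits, with
    k adjusted from the success of previous iterations: grow k after
    an improvement, shrink it toward 1 otherwise."""
    x = [random.randrange(2) for _ in range(n)]
    fx, k = f(x), 1
    for _ in range(budget):
        if fx == n:                        # assumes max fitness is n
            break
        y = x[:]
        for i in random.sample(range(n), k):
            y[i] ^= 1                      # flip exactly k distinct bits
        fy = f(y)
        if fy > fx:
            x, fx = y, fy
            k = min(n, int(k * F))         # success: try larger k
        else:
            k = max(1, int(k / F))         # failure: shrink k toward 1
    return x, fx
```

On LeadingOnes this behaves much like RLS (k hovers near 1), while on landscapes whose optimal mutation strength varies over the run, k can track the currently useful value.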

Landscape Features for Computationally Expensive Evaluation Functions: Revisiting the Problem of Noise Eric O. Scott and Kenneth A. De Jong When combined with machine learning, the black-box analysis of fitness landscapes promises to provide us with easy-to-compute features that can be used to select and configure an algorithm that is well suited to the task at hand. As applications that involve computationally expensive, stochastic simulations become increasingly relevant in practice, however, there is a need for landscape features that are both A) possible to estimate with a very limited budget of fitness evaluations, and B) accurate in the presence of small to moderate amounts of noise. We show, via a small set of relatively inexpensive landscape features based on hill-climbing methods, that these two goals are in tension with each other: cheap features are sometimes extremely sensitive to even very small amounts of noise. We propose that features that use population-based search methods as subroutines, instead of hill climbers, may provide a path forward in developing landscape analysis tools that are both inexpensive and robust to noise.
