Evolving neural fields for problems with large input and output spaces

Benjamin Inden (a,d,*), Yaochu Jin (b), Robert Haschke (c), Helge Ritter (c)

(a) Research Institute for Cognition and Robotics, Bielefeld University, Universitätsstr. 25, 33615 Bielefeld, Germany
(b) Department of Computing, University of Surrey, United Kingdom
(c) Neuroinformatics Group, Bielefeld University, Germany
(d) Artificial Intelligence Group, Bielefeld University, Germany

* Corresponding author. Email addresses: [email protected] (Benjamin Inden), [email protected] (Yaochu Jin), [email protected] (Robert Haschke), [email protected] (Helge Ritter)

Preprint submitted to Elsevier, January 16, 2012

Abstract

We have developed an extension of the NEAT neuroevolution method, called NEATfields, to solve problems with large input and output spaces. The NEATfields method is a multilevel neuroevolution method using externally specified design patterns. Its networks have three levels of architecture. The highest level is a NEAT-like network of neural fields. The intermediate level is a field of identical subnetworks, called field elements, with a two-dimensional topology. The lowest level is a NEAT-like subnetwork of neurons. The topology and connection weights of these networks are evolved with methods derived from the NEAT method. Evolution is provided with further design patterns to enable information flow between field elements, to dehomogenize neural fields, and to enable detection of local features. We show that the NEATfields method can solve a number of high-dimensional pattern recognition and control problems, provide a conceptual and empirical comparison with the state-of-the-art HyperNEAT method, and evaluate the benefits of different design patterns.

Keywords: Neuroevolution; indirect encoding; artificial life; NEAT.

1. Introduction

1.1. Evolving artificial neural networks

Artificial neural networks are computational models of animal nervous systems and have found a wide range of successful applications, such as system control and image processing. Due to their nonlinear nature, it is often difficult to design neural networks manually for a specific task. To this end, evolutionary algorithms have been widely used for the automatic design of neural networks (Yao, 1999; Floreano et al., 2008). An important advantage of designing neural networks with evolutionary algorithms is that both the weights and the topology of the networks can be optimized. However, if the network topology is changed by evolution, a number of problems can arise that have to be addressed.

One problem is how to add new neurons or connections to a neural network without fully disrupting the function that it already performs. Of course, new elements that are added to the network should change its function to some degree, because if there were no change at all, elements without any function could accumulate over the course of evolution, and the networks would become too large. This problem is known as the "bloat" problem in the genetic programming literature (Poli et al., 2008). Another problem is that most evolutionary algorithms use a recombination operator to obtain the benefits that sexual reproduction provides to evolution. Ideally, recombination could combine the good features of two organisms in their offspring. However, it is not obvious what constitutes a "feature" in a neural network with an arbitrary topology, or how corresponding features in two neural networks with different topologies can be found. Similar problems exist for genomes of variable length in general. As discussed in the next section, the well known NEAT method employs techniques that solve these problems satisfactorily. Our NEATfields method makes use of the same techniques.

Another challenge in evolving neural networks is the scalability issue: the evolution of solutions for tasks of large dimension. This problem has not been addressed by the NEAT method, which typically evolves neural networks of, say, 1 to 50 neurons. The problem is particularly serious if, as in NEAT, a direct encoding scheme is used for representing the neural network, because if every connection weight is directly encoded in the genome, the length of the genome grows linearly with the number of connections. However, the performance of evolutionary algorithms degrades with increasing genome size.

In contrast, indirect encoding of neural networks (Yao, 1999; Du and Swamy, 2006), in which the weights and topologies are generated using grammatical rewriting rules, grammar trees, or other methods, can achieve a sublinear relationship between genome size and network size. These methods basically use a domain-specific decompression algorithm to make a large phenotype from a small genotype. Typically, the class of encodable phenotypes is biased towards phenotypes that possess some kind of regularity (Lipson, 2004), i.e., some identically or similarly repeated structures. Indeed, many neural networks, whether occurring in nature or in technical applications, possess repeated elements. For example, the cerebral cortex is organized into columns of similar structure (Mountcastle, 1997). Brain areas concerned with visual processing contain many modules in which similar processing of local features is done in parallel for different regions of the field of view. This occurs in brain regions whose spatial arrangement preserves the topology of the input (Bear et al., 2006).

A particular kind of indirect encoding method applies artificial embryogeny to neuroevolution (Stanley and Miikkulainen, 2003; Harding and Banzhaf, 2008). These methods are mainly inspired by biological mechanisms of morphological and neural development, such as cell growth, cell division, and cell migration under the control of genetic regulatory networks. By using abstractions of these processes, large neural networks can be built from small genomes.

Here we explore whether a very different kind of indirect encoding, i.e., a multilevel neuroevolution method using externally specified design patterns, can solve the scalability problem. The next two subsections discuss the NEAT method, on which our recently introduced NEATfields method (Inden et al., 2010) is based, and present the general approach of NEATfields. In section 2, the technical details of the method are explained. Sections 3 and 4 present experiments on using NEATfields for problems with large input and output spaces. Section 5 compares NEATfields to some other indirect encoding methods used for neuroevolution, and explains why the method is a good choice for many problem domains.

1.2. The NEAT neuroevolution method and derivatives

The NEAT method (Stanley and Miikkulainen, 2002; Stanley, 2004) is a well known and competitive neuroevolution method that introduces a number of ideas to deal successfully with the problems discussed in the previous section. One idea is to give genes an unchanging identity. This is achieved by assigning a globally unique reference number to each gene once it is generated by mutation.

Another idea introduced by NEAT is to protect innovation that may arise during evolution through a speciation technique. For example, if a network with a larger topology arises by mutation, it may initially be unable to compete against networks with a smaller topology that are at a local optimum of the fitness landscape. By using the globally unique reference numbers again, a distance measure between two genomes can be defined and used to partition the population into species. The number of offspring assigned to a species is proportional to its mean fitness. This rather weak selection pressure prevents a slightly superior species from taking over the whole population, and enables innovative yet currently inferior solutions to survive. In contrast, the selection pressure between members of the same species is much stronger in NEAT. Recombination is usually only allowed to occur within a species, such that the parents look rather similar to each other and the offspring looks similar to its parents.

Another feature of NEAT is that evolution starts with the simplest possible network topology and proceeds by complexification, that is, by adding neurons and connections. It makes sense to search for solutions with a small topology first, because the size of the search space for connection weights increases with the network size. There is a mutation operator that adds neurons only between two connected neurons, and adjusts the connection weights such that the properties of these connections change as little as possible. This alleviates the problem of disrupting the existing function of a network.

Due to the success of the NEAT method, quite a few derivatives have been developed. The goal of these has often been to evolve larger networks than those evolved by NEAT, and/or to combine the power of NEAT, which uses a direct encoding of connection weights, with ideas from artificial embryogeny. For example, Reisinger et al. (2004) used NEAT networks as modules and co-evolved them with blueprints. Blueprints are lists of modules, together with specifications of how to map module inputs and outputs onto network inputs and outputs. In the MBEANN method (Ohkura et al., 2007), explicit modularity is introduced into NEAT networks. In the initial network, all neurons are in a first module m0. A new module is created every time a neuron is added that connects to at least one element in m0. New connections are established either within a given module or between a given
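The genome distance used for speciation, as described in section 1.2, can be illustrated with a small sketch. It follows the published compatibility distance of Stanley and Miikkulainen (2002), which combines the fractions of excess genes E and disjoint genes D (identified via the globally unique reference numbers) with the mean weight difference of matching genes. The dictionary-based genome representation and the coefficient values below are illustrative assumptions, not this paper's implementation.

```python
def compatibility_distance(genome_a, genome_b, c1=1.0, c2=1.0, c3=0.4):
    """Distance between two genomes keyed by globally unique gene IDs.

    A genome is modeled here as a dict mapping reference number -> weight
    (an illustrative simplification). Excess genes lie beyond the other
    genome's highest reference number; disjoint genes are the remaining
    non-matching genes; matching genes contribute their mean weight gap.
    """
    ids_a, ids_b = set(genome_a), set(genome_b)
    matching = ids_a & ids_b
    threshold = min(max(ids_a), max(ids_b))
    non_matching = ids_a ^ ids_b
    excess = sum(1 for i in non_matching if i > threshold)
    disjoint = len(non_matching) - excess
    n = max(len(genome_a), len(genome_b))  # normalizing genome size
    w_bar = (sum(abs(genome_a[i] - genome_b[i]) for i in matching)
             / len(matching)) if matching else 0.0
    return c1 * excess / n + c2 * disjoint / n + c3 * w_bar
```

Genomes whose distance exceeds a threshold are placed in different species; identical genomes have distance 0. Note that the original formulation leaves the normalizer N as a parameter (often set to 1 for small genomes).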
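The add-neuron mutation described in section 1.2 can be sketched as follows: the split connection is disabled, the new incoming connection receives weight 1.0, and the new outgoing connection inherits the old weight, so the composed path approximates the original connection and the network's function changes as little as possible. The genome layout, field names, and innovation counter below are illustrative assumptions.

```python
import itertools

_innovation = itertools.count(100)  # global innovation counter (illustrative start value)

def add_neuron(connections, neurons, split_id):
    """Split an existing connection gene by inserting a new neuron.

    `connections` maps innovation id -> dict(src, dst, weight, enabled).
    The old connection is disabled rather than deleted, so its gene keeps
    its identity; the link into the new neuron gets weight 1.0 and the
    link out of it inherits the old weight.
    """
    old = connections[split_id]
    old["enabled"] = False
    new_neuron = max(neurons) + 1
    neurons.append(new_neuron)
    connections[next(_innovation)] = {
        "src": old["src"], "dst": new_neuron, "weight": 1.0, "enabled": True}
    connections[next(_innovation)] = {
        "src": new_neuron, "dst": old["dst"], "weight": old["weight"], "enabled": True}
    return new_neuron
```

With a linear activation at the new neuron the composition would be exact; with a nonlinear activation the behavior is only approximately preserved, which is why the text speaks of changing the connection properties "as little as possible".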