A Novel Generative Encoding for Evolving Modular, Regular and Scalable Networks
Marcin Suchorzewski
Artificial Intelligence Laboratory
West Pomeranian University of Technology
[email protected]

Jeff Clune
Creative Machines Laboratory
Cornell University
[email protected]

ABSTRACT
In this paper we introduce the Developmental Symbolic Encoding (DSE), a new generative encoding for evolving networks (e.g. neural or boolean). DSE combines elements of two powerful generative encodings, Cellular Encoding and HyperNEAT, in order to evolve networks that are modular, regular, scale-free, and scalable. Generating networks with these properties is important because they can enhance performance and evolvability. We test DSE's ability to generate scale-free and modular networks by explicitly rewarding these properties and seeing whether evolution can produce networks that possess them. We compare the networks DSE evolves to those of HyperNEAT. The results show that both encodings can produce scale-free networks, although DSE performs slightly, but significantly, better on this objective. DSE networks are far more modular than HyperNEAT networks. Both encodings produce regular networks. We further demonstrate that individual DSE genomes can scale up a network pattern during development to accommodate different numbers of inputs. We also compare DSE to HyperNEAT on a pattern recognition problem. DSE significantly outperforms HyperNEAT, suggesting that its potential lies not only in the properties of the networks it produces, but also in its ability to compete with leading encodings at solving challenging problems. These preliminary results imply that DSE is an interesting new encoding worthy of additional study. The results also raise questions about which network properties are more likely to be produced by different types of generative encodings.

Categories and Subject Descriptors: I.2.6 [Artificial Intelligence]: Learning

General Terms: Algorithms

Keywords: Generative and Developmental Representations, Neuroevolution, Indirect Encoding, Networks, Modularity, Regularity, Scale-free, Scalability

1. INTRODUCTION
Networks designed by humans and evolution have certain properties that are thought to enhance both their performance and adaptability, such as modularity, regularity, hierarchy, scalability, and being scale-free [12, 11, 1]. In this paper we introduce a new generative representation called the Developmental Symbolic Encoding that evolves networks that possess these properties. We first review the meaning of these terms before discussing them with respect to previous evolutionary algorithms.

Modularity is the encapsulation of function within mostly independently operating units, where the internal workings of the module tend to affect the rest of the structure only through the module's inputs and outputs [12, 11]. For example, a car wheel is a module because it is a separate functional unit and changing its design is unlikely to affect other car systems (e.g. the muffler). This functional independence enables the wheel design to be optimized independently from most other car modules. In networks, modules are clusters of densely connected nodes, where connectivity is high within the cluster and low to nodes outside the cluster [11, 12].

Regularity describes the compressibility of the network description, and often involves symmetries and the repetition of modules [12]. Regularity and modularity are distinct concepts: a unicycle has one wheel module (modularity without regularity), whereas a bicycle's repetition of the wheel module is a form of regularity. Hierarchy is the recursive composition of lower-level modules [12].
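As an illustrative aside (ours, not part of the paper): the network sense of modularity above is commonly quantified with Newman's modularity score Q over a partition of the nodes into clusters. The following minimal Python sketch computes it with the networkx library for a hypothetical graph of two dense clusters joined by a single edge; the graph and numbers are purely illustrative.

    # Illustrative only: quantifying "dense within, sparse between" as
    # Newman's modularity Q, via networkx's community utilities.
    import networkx as nx
    from networkx.algorithms.community import (greedy_modularity_communities,
                                               modularity)

    # Two fully connected 5-node clusters joined by a single bridge edge.
    G = nx.Graph()
    G.add_edges_from(nx.complete_graph(range(0, 5)).edges)
    G.add_edges_from(nx.complete_graph(range(5, 10)).edges)
    G.add_edge(4, 5)  # the only inter-module connection

    communities = greedy_modularity_communities(G)
    print("detected modules:", [sorted(c) for c in communities])
    print("modularity Q = %.3f" % modularity(G, communities))  # ~0.45 here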
Scalability in evolved networks refers to the ability to produce effective solutions at different network sizes. The concept applies both to the idea of being able to evolve large, but fixed-size, networks [17], and to a single genome that can produce networks of different sizes as needed [6, 16, 5]. The latter capability is important for transferring knowledge to related problems of different complexity, and is a main motivation for DSE. Scalability can be enhanced by other properties, such as modularity, regularity and hierarchy [12].

Scale-free networks have a few nodes with many connections and many nodes with few connections, making them efficient at transmitting data and robust to random node failure [1]. Many different types of networks are scale-free, from computer science (e.g. the Internet) and sociology (e.g. social networks) to biology (e.g. protein and ecological networks) [1]. Animal brains, including the human brain, are scale-free, and it is thought that this property is one reason they are so capable [18]. While the scale-free property may not help performance on all problems, it is evidently beneficial on many, so it is valuable for an encoding to be able to produce it when it is useful.

Previous evolutionary algorithms have evolved networks that have some of the aforementioned network properties. Most such algorithms are generative encodings [17], wherein information in a genotype can influence multiple parts of a phenotype, instead of direct encodings, wherein genotypic information specifies separate aspects of phenotypes. Generative encodings tend to produce more regular phenotypes that outperform direct encodings [3, 8, 10] and can produce modular and hierarchical structures [10]. Individual generative genomes can also produce networks of various sizes, as appropriate [16, 9, 7, 5]. We are not aware of research that has investigated whether evolved networks are scale-free.
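Such an investigation could, for instance, estimate the exponent of a network's degree distribution, since scale-free networks satisfy $P(k) \sim k^{-\gamma}$. The sketch below (ours, not a method from the paper) does this crudely with numpy and networkx, using a Barabási-Albert graph as a stand-in for an evolved network; a rigorous study would use a dedicated power-law fitting method.

    # Illustrative only: a crude scale-free check via a log-log fit of the
    # degree distribution. A Barabasi-Albert graph stands in for an
    # evolved network.
    import numpy as np
    import networkx as nx

    G = nx.barabasi_albert_graph(n=2000, m=2, seed=0)  # known scale-free model
    degrees = np.array([d for _, d in G.degree()])

    ks, counts = np.unique(degrees, return_counts=True)
    pk = counts / counts.sum()  # empirical P(k)

    # Slope of log P(k) vs. log k estimates -gamma (binning and cutoffs
    # matter in practice; see e.g. the `powerlaw` package).
    gamma = -np.polyfit(np.log(ks), np.log(pk), 1)[0]
    print("estimated exponent gamma = %.2f" % gamma)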
Two generative encodings that inspired DSE have become well known in part because of their ability to produce important network properties. Cellular Encoding (CE) iteratively applies instructions to manipulate network nodes and links. The encoding allows cells to call functions, which involve an entire sequence of instructions (encoded in a grammar tree), facilitating the production of regular, modular, and hierarchical networks [5]. Intriguingly, because CE allows a repeated symbol to unpack into the same subnetwork, the exhibited regularity, while high, tends to be a formulaic repetition of the same subnetwork, without complex variation.

HyperNEAT generates patterns in neural networks via an abstraction of biological development [16]. Just as in naturally developing organisms, phenotypic attributes (in this case, edge weights in a network) are a function of evolved geometric coordinate frames. HyperNEAT departs from nature, however, by not having an iterative growth process. A single HyperNEAT genome can encode networks of different sizes; one was scaled up from thousands of connections to millions and still performed well [16]. The neural wiring patterns HyperNEAT produces are highly functional, complex, and regular, including symmetries and repetition, with and without variation [3]. However, HyperNEAT struggles to produce modular networks [2] unless it is manually injected with a bias towards modular wiring patterns [20].

Intriguingly, CE excels at producing modularity, hierarchy, and formulaic regularity, but fails to generate complex [...] To perform a more traditional test of the capability of an encoding, we also compare DSE to HyperNEAT on a visual pattern recognition task. On all three challenges DSE outperforms HyperNEAT, suggesting that it is an encoding that deserves additional investigation.

2. DESCRIPTION OF DSE

2.1 Phenotype
The phenotype of the individual is the network:

    $G = (V, E) = ([v_1 \dots v_N], \{e_{ij}\})$,    (1)

where $V$ is a vector of nodes and $E$ is a set of connections. Nodes and connections can have attribute values, which play a role either in development or in computation. Weighted connections have a weight attribute, and plastic connections can have a number of attributes (parameters). The representation of $G$ differs slightly from the conventional notation for directed graphs, in which nodes and connections are contained in sets and the attributes are imposed using functions. This departure follows from the assumed sequential model of computation, in which nodes are evaluated in an ordered manner; it is therefore natural to order them explicitly in a vector.

The vector $V$ is composed of three contiguous subvectors: input $V_u$, hidden $V_h$ and output $V_y$. The input and output subvectors constitute the network interface, and their lengths depend on the problem. The hidden part of the network is free to vary in size.

DSE network phenotypes are executed similarly to artificial neural networks. The network is initialized before an evaluation by zeroing the values of all nodes and setting the weights of any plastic connections to their initial values. Then, for any input vector $u$, the network is evaluated as follows:

1: Assign inputs: $V_u := u$
2: For each $v_j \in [V_h \; V_y]$ do
3:   Evaluate node: $x_j := f_j(\{e_{ij}(x_i)\}_{i=1,\dots,n_i})$
4:   For each connection $e_{ij}$ do
5:     Update $e_{ij}$ (if applicable)
6: Return $V_y$,

where $x_i$ denotes the node's value, $e_{ij}(x_i)$ [...]
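To make the evaluation procedure concrete, here is a minimal Python sketch of one way such a phenotype could be represented and evaluated. It is our illustration, not the authors' implementation: the dictionary-of-weighted-links representation, the tanh node function, and the omission of the plastic-connection updates (steps 4-5) are all simplifying assumptions.

    # Minimal sketch (ours, not the paper's code) of a DSE-style phenotype.
    # Assumes plain weighted connections and f_j = tanh of the weighted sum.
    import math

    class Network:
        def __init__(self, n_in, n_hid, n_out, connections):
            # V = [V_u | V_h | V_y]: one vector of node values, inputs
            # first, then hidden nodes, then outputs.
            self.n_in, self.n_out = n_in, n_out
            self.x = [0.0] * (n_in + n_hid + n_out)
            # E as {target j: [(source i, weight), ...]}.
            self.conns = connections

        def reset(self):
            # Zero all node values before an evaluation (initialization).
            self.x = [0.0] * len(self.x)

        def evaluate(self, u):
            # One ordered pass over [V_h V_y], as in the pseudocode above.
            self.x[:self.n_in] = u                   # 1: V_u := u
            for j in range(self.n_in, len(self.x)):  # 2: for v_j in [V_h V_y]
                s = sum(w * self.x[i]                # 3: x_j := f_j({e_ij(x_i)})
                        for i, w in self.conns.get(j, []))
                self.x[j] = math.tanh(s)
                # 4-5: plastic connections would be updated here (omitted).
            return self.x[-self.n_out:]              # 6: return V_y

    # Usage: 2 inputs, 1 hidden node, 1 output (node indices 0,1 | 2 | 3).
    net = Network(2, 1, 1, {2: [(0, 1.0), (1, 1.0)], 3: [(2, 2.0)]})
    net.reset()
    print(net.evaluate([0.5, -0.25]))  # [tanh(2.0 * tanh(0.25))]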