Problem Decomposition and Adaptation in Cooperative Neuro-evolution

by

Rohitash Chandra

A thesis submitted to the Victoria University of Wellington in fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science.

Victoria University of Wellington
2012


Abstract

One way to train neural networks is to use evolutionary algorithms such as cooperative coevolution, a method that decomposes the network's learnable parameters into subsets, called subcomponents. Cooperative coevolution gains an advantage over other methods by evolving particular subcomponents independently from the rest of the network. Its success depends strongly on how the problem decomposition is carried out. This thesis suggests new forms of problem decomposition, based on a novel and intuitive choice of modularity, and examines in detail at what stage and to what extent the different decomposition methods should be used. The new methods are evaluated by training feedforward networks to solve pattern classification tasks, and by training recurrent networks to solve grammatical inference problems.

Efficient problem decomposition methods group interacting variables into the same subcomponents. We examine the methods from the literature and provide an analysis of the nature of the neural network optimisation problem in terms of interacting variables. We then present a novel problem decomposition method that groups interacting variables and that can be generalised to neural networks with more than a single hidden layer.

We then incorporate local search into cooperative neuro-evolution. We present a memetic cooperative coevolution method that takes into account the cost of employing local search across several sub-populations.

The optimisation process changes during evolution in terms of diversity and interacting variables. To address this, we examine the adaptation of the problem decomposition method during the evolutionary process. The results in this thesis show that the proposed methods improve performance in terms of optimisation time, scalability and robustness.

As a further test, we apply the problem decomposition and adaptive cooperative coevolution methods to train recurrent neural networks on chaotic time series problems. The proposed methods show better performance in terms of accuracy and robustness.


Dedication

I dedicate this thesis to my mom, Prakash Wati. She has been the main inspiration behind this thesis.


Acknowledgements

I wish to acknowledge my supervisors, Dr. Marcus Frean and Prof. Mengjie Zhang, for their guidance during the course of the PhD program. I would like to thank Prof. Christian Omlin of Middle East Technical University and Mohammad Nabi Omidvar of RMIT University. I acknowledge the comments from the anonymous reviewers on the papers published from this thesis. I also acknowledge the examination committee.

My sincere gratitude to Siva Kumar Dorairaj and Rajeswari Nadarajan for their support and motivation. I also thank Prema Ram and Mani Nambayah for their support. I also thank my sisters, Kumari Ranjeeni and Ranjita Kumari, for their support, and my wife, Ronika Kumar, for the inspiration. My sincere gratitude to all my family members and friends for their support.


Publications

Journal Articles

1. R. Chandra, M. Frean, M. Zhang, Crossover-based Local Search in Cooperative Co-evolutionary Feedforward Networks, Applied Soft Computing, Elsevier, Conditionally Accepted, March 2012.
2. R. Chandra, M. Frean, M. Zhang, On the Issue of Separability in Cooperative Co-evolutionary Feedforward Networks, Neurocomputing, Elsevier, In Press, 2012. (http://dx.doi.org/10.1016/j.neucom.2012.02.005)

3. R. Chandra, M. Zhang, Cooperative Coevolution of Elman Recurrent Neural Networks for Chaotic Time Series Prediction, Neurocomputing, Elsevier, In Press, 2012. (http://dx.doi.org/10.1016/j.neucom.2012.01.014)

4. R. Chandra, M. Frean, M. Zhang, Adapting Modularity During Learning in Cooperative Co-evolutionary Recurrent Neural Networks, Soft Computing, In Press, 2012. (DOI: 10.1007/s00500-011-0798-9)

5. R. Chandra, M. Frean, M. Zhang and C. Omlin, Encoding Subcomponents in Cooperative Coevolutionary Recurrent Neural Networks, Neurocomputing, Elsevier, Vol. 74, 2011, pp. 3223-3234.

Refereed Conference Proceedings

1. R. Chandra, M. Frean, M. Zhang, Modularity Adaptation in Cooperative Co-evolutionary Feedforward Neural Networks, Proceedings of the International Joint Conference on Neural Networks, San Jose, USA, 2011, pp. 681-688.

2. R. Chandra, M. Frean, M. Zhang, A Memetic Framework for Cooperative Coevolution of Recurrent Neural Networks, Proceedings of the International Joint Conference on Neural Networks, San Jose, USA, 2011, pp. 673-680.

3. R. Chandra, M. Frean, M. Zhang, An Encoding Scheme for Cooperative Co-evolutionary Neural Networks, Proceedings of the 23rd Australasian Joint Conference on Artificial Intelligence (AI 2010: Advances in Artificial Intelligence), Lecture Notes in Artificial Intelligence, Vol. 6464, Springer, Adelaide, Australia, 2010, pp. 253-262.

4. R. Chandra, M. Frean, L. Rolland, A Hybrid Meta-Heuristic Paradigm for Solving the Forward Kinematics of 6-6 General Parallel Manipulator, Proceedings of the 8th IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA 2009), Daejeon, Korea, 2009, pp. 171-176.

5. R. Chandra, M. Zhang, L. Rolland, Solving the Forward Kinematics of the 3RPR Planar Parallel Manipulator using a Hybrid Meta-Heuristic Paradigm, Proceedings of the 8th IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA 2009), Daejeon, Korea, 2009, pp. 177-182.

Other Related Publications

1. R. Chandra and L. Rolland, On Solving the Forward Kinematics of 3RPR Planar Parallel Manipulator using Hybrid Metaheuristics, Applied Mathematics and Computation, Elsevier, Vol. 217, No. 22, 2011, pp. 8997-9008. (doi: 10.1016/j.amc.2011.03.106)

2. L. Rolland and R. Chandra, On Solving the Forward Kinematics of the 6-6 General Parallel Manipulator with an Efficient Evolutionary Algorithm, in ROMANSY 18 - Robot Design, Dynamics and Control, CISM International Centre for Mechanical Sciences, Vol. 524, Springer, Berlin, 2010, pp. 117-124.


Contents

1 Introduction
  1.1 Premises
  1.2 Motivations
  1.3 Research Goals
  1.4 Major Contributions
  1.5 Thesis Outline

2 Background and Literature Review
  2.1 Machine Learning and Optimisation
  2.2 Neural Networks
    2.2.1 Feedforward Neural Networks
    2.2.2 Learning in Feedforward Networks
    2.2.3 Overview of Recurrent Neural Networks
    2.2.4 First-Order Recurrent Networks
    2.2.5 Learning in Recurrent Neural Networks
    2.2.6 Learning Finite-State Machines with RNNs
  2.3 Evolutionary Computation
    2.3.1 Real Coded Genetic Algorithms
    2.3.2 G3-PCX: Generalised Generation Gap with PCX
  2.4 Hybrid Meta-heuristics and Memetic Algorithms
    2.4.1 Collaborative Hybrid MHs
    2.4.2 Integrative Hybrid MHs: Memetic Algorithms
    2.4.3 MHs with Evolutionary Intensification and Diversification
  2.5 Cooperative Coevolution
    2.5.1 Diversity in Cooperative Coevolution
    2.5.2 Cooperative Coevolution for Non-Separable Problems
  2.6 Neuro-Evolution
    2.6.1 Direct Encoding in Neuro-evolution
    2.6.2 Indirect Encoding in Neuro-evolution
    2.6.3 Hybrid MHs for Neuro-Evolution
    2.6.4 Modularity and Problem Decomposition
    2.6.5 Fitness Evaluation of the Sub-populations
    2.6.6 Evaluation of Cooperative Neuro-evolution
    2.6.7 Chapter Summary

3 Problem Decomposition in Cooperative Neuro-evolution
  3.1 Introduction
    3.1.1 Preliminaries and Motivation
  3.2 Neuron Based Sub-population (NSP)
    3.2.1 Feedforward Neural Networks
    3.2.2 Recurrent Neural Networks
  3.3 Simulations
    3.3.1 Feedforward Neural Networks
    3.3.2 Recurrent Neural Networks
    3.3.3 Discussion
  3.4 Chapter Summary

4 Memetic Cooperative Neuro-evolution
  4.1 Introduction
    4.1.1 Global and Local Search in Evolutionary Algorithms
  4.2 Memetic Cooperative Neuro-evolution
    4.2.1 G3-PCX for Crossover-based Local Search
  4.3 Simulation
    4.3.1 Feedforward Neural Networks
    4.3.2 Recurrent Neural Networks
    4.3.3 Discussion
  4.4 Chapter Summary

5 Adaptive Modularity in Cooperative Neuro-evolution
  5.1 Introduction
  5.2 Adaptive Modularity in Cooperative Neuro-evolution
    5.2.1 Function Evaluation During Initialisation
    5.2.2 Transfer of Valuable Information
    5.2.3 The Heuristic to Change Modularity
  5.3 Simulation and Analysis
    5.3.1 Feedforward Neural Networks