Evolving Autonomous Agent Controllers As Analytical Mathematical Models

Paul Grouchy¹ and Gabriele M.T. D'Eleuterio¹
¹University of Toronto Institute for Aerospace Studies, Toronto, Ontario, Canada M3H 5T6
[email protected]
ALIFE 14: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems

Abstract

A novel Artificial Life paradigm is proposed where autonomous agents are controlled via genetically-encoded Evolvable Mathematical Models (EMMs). Agent/environment inputs are mapped to agent outputs via equation trees which are evolved using Genetic Programming. Equations use only the four basic mathematical operators: addition, subtraction, multiplication and division. Experiments on the discrete Double-T Maze with Homing problem are performed; the source code has been made available. Results demonstrate that autonomous controllers with learning capabilities can be evolved as analytical mathematical models of behavior, and that neuroplasticity and neuromodulation can emerge within this paradigm without having these special functionalities specified a priori.

Introduction

When looking to design an Artificial Intelligence (AI) capable of robust and adaptable behavior, one must first choose how to represent such an agent in silico. Although many such representations exist, two of the most common are direct computer code representations, which can be programmed autonomously via Genetic Programming (GP) (Koza, 1992; Poli et al., 2008), and Artificial Neural Networks (ANNs), which are computational models based on the biological neural networks of animal brains that can be designed using one of a variety of approaches (Floreano et al., 2008).

Both of these representations require significant a priori design decisions. For a GP approach, one must choose which programming operations to include (e.g., bit-shift, and, or, xor, if-then-else, etc.), with a trade-off between potential program capabilities and search space size (and thus the ability of GP to find a solution). With ANNs, one must choose from many different types (e.g., Feedforward, Recurrent, Continuous-Time, Spiking, etc.), with trade-offs that have not yet been systematically investigated (Floreano et al., 2008). Other important design decisions for ANNs include the type of activation function and the incorporation of learning capabilities. Thus, to arrive at an agent representation appropriate for their task, a designer must first invest a significant amount of time and computational resources. Furthermore, any such design decisions introduce additional experimenter bias into the simulation. Finally, complex behaviors require complex representations, adding to the already high computational costs of ALife algorithms while further obfuscating the inner workings of the evolved agents.

We propose Evolvable Mathematical Models (EMMs), a novel paradigm whereby autonomous agents are evolved via GP as analytical mathematical models of behavior. Agent inputs are mapped to outputs via a system of genetically encoded equations. Only the four basic mathematical operations (addition, subtraction, multiplication and division) are required, as they can be used to approximate any analytic function, thus eliminating the need to determine which operators to employ. More sophisticated operations can of course be added at the will of the designer; however, theoretically the four basic ones constitute the minimal set of operations required, and accordingly the designer can be relieved of much of the guessing game. Furthermore, since agents are represented directly as mathematical equations, the evolved agents are amenable to mathematical analysis post facto.

The rest of this paper is organized as follows. In the next section, previous work on evolving ANNs with learning behaviors and on Genetic Programming for agent control is presented. This is followed by the algorithmic details of EMMs. A Double-T Maze problem that requires learning capabilities is then described, with the results of EMM experiments in this domain presented in the subsequent section. The source code for these experiments is provided. Finally, conclusions and future work will be discussed.
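To ground the claim above that the four basic operators can approximate any analytic function, consider a truncated power series, which uses only addition, multiplication and division; this illustration is ours, not taken from the paper:

$e^{x} \approx 1 + x + \frac{x \, x}{2} + \frac{x \, x \, x}{6}, \qquad |x| \le \tfrac{1}{2},$

which is accurate to within roughly 0.5% on that interval. Higher accuracy or wider domains simply require more terms, or a rational (Padé-style) form, still built from the same four operators.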
Background

Neuroplasticity and Neuromodulation

Autonomous agents are often evolved as artificial neural networks (ANNs). Typically, when one is evaluating an ANN, either in simulation or hardware, the ANN's connection weights are fixed. It is only when generating offspring genomes that genetic operators such as mutation can modify the ANN's weights. This is contrary to biological neural networks, however, where "neuroplasticity" allows for connections between neurons to change during the lifespan of an organism (Pascual-Leone et al., 2005). Neuroplasticity (or "plasticity" for short) is what enables biological organisms to learn, modifying the way they react to certain inputs from the environment.

Hebbian learning, which is based on how learning is thought to occur in biology (Hebb, 1949), can be used to implement plasticity in ANNs (Floreano and Mondada, 1996). In this paradigm, local learning rules $\Delta w_{ij}$ are evolved for each network connection weight $w_{ij}$. After each timestep $t$ of an ANN's lifespan, each of its connection weights is updated using its associated learning rule

$w_{ij}^{t+\Delta t} = w_{ij}^{t} + \eta \, \Delta w_{ij}^{t+\Delta t}$,   (1)

where $0 \le \eta \le 1$ is the evolvable learning rate. An example learning rule $\Delta w_{ij}$ is the "plain Hebb rule," defined as

$\Delta w_{ij}^{t+\Delta t} = (1 - w_{ij}^{t}) \, v_i^{t} v_j^{t}$,   (2)

where $v_i^{t}$ is the output of neuron $i$ at the current timestep.

In the learning paradigm described above, connection weights are adjusted at every timestep throughout the lifetime of an agent. This differs from biological systems, which are theorized to use "neuromodulation" to control and stabilize learning (Bailey et al., 2000).

To improve learning algorithm performance, the Analog Genetic Encoding (AGE) algorithm, an encoding method based on biological gene regulatory networks that can evolve both the connection weights and the structure of an ANN (Mattiussi and Floreano, 2004; Dürr et al., 2006), was modified to allow for Hebbian learning that could be enabled and disabled via neuromodulatory signals (Soltoggio et al., 2007; Dürr et al., 2008; Soltoggio et al., 2008). In this approach, a generalized Hebbian rule from (Niv et al., 2002) was modified to include a modulatory signal $m$:

$\Delta w_{ij}^{t+\Delta t} = m^{t} \, \eta \, (A v_i^{t} v_j^{t} + B v_i^{t} + C v_j^{t} + D)$,   (3)

where $A$, $B$, $C$ and $D$ are evolvable parameters that determine the importance of the different types of Hebbian learning and $0 \le m^{t} \le 1$ is the current strength of the signal produced by one or more special modulatory neurons. These modulatory neurons operate in a manner similar to regular neurons, taking inputs from other neurons in the network through network connections and generating their output value $m$ using an activation function.
Fixed-Weight Learning

Learning behaviors have been observed in several fixed-weight ANN-based experiments as well. Fixed-weight recurrent neural networks seem to be able to accomplish certain tasks requiring learning, as the recurrent neural connections can act as a type of memory (e.g., (Stanley et al., 2003; Soltoggio et al., 2008)). Continuous-time recurrent neural networks (CTRNNs) have also demonstrated learning capabilities, sometimes even outperforming plastic neural networks (Jesper and Floreano, 2002; Tuci and Quinn, 2003).

Genetic Programming for Agent Control

Genetic Programming (GP) can be used to evolve autonomous controllers as computer programs. In the first such experiment (Koza and Rice, 1992), controllers were represented as variable-length trees containing sensor inputs and four preprogrammed macros, such as "if-less-than-or-equal." Programs were evolved that could control a simulated robot to find a box in an irregularly shaped world and push it to a wall from four different starting configurations.

A linear implementation of GP was used to evolve machine code to control a Khepera robot in (Nordin and Banzhaf, 1995, 1997). This work was recently extended in (Burbidge et al., 2009; Burbidge and Wilson, 2014), where machine code was evolved using Grammatical Evolution. Here, agent genomes are binary strings that are mapped to their machine code phenotypes via a prespecified generative grammar.

GP has also been used to evolve competitors for the RoboCup robotic soccer tournament. The team "Darwin United" was evolved using GP with a variety of possible operations, including basic mathematical operators, reading and writing to memory locations, and executing a variety of programmer-designed subroutines (Andre and Teller, 1999). A simple GP approach was also used to evolve robot goalie behaviors in (Adorni et al., 1999). This work is the most similar to our Evolvable Mathematical Models paradigm presented below, as it evolves mathematical equations as trees for robot control. However, these experiments were performed on a relatively simple task (especially considering the amount of information provided to the controller), had only one equation tree/agent output, employed two additional operators (sine and cosine), and did not allow for extra state variables/equation trees to be evolved.

Evolvable Mathematical Models

Our Evolvable Mathematical Models (EMM) algorithm uses Genetic Programming (GP) to evolve mathematical models of behavior. An earlier version of this algorithm that evolved one ODE per agent output (i.e., there were no extra state variables) was presented in (Grouchy and D'Eleuterio, 2010) and implemented in (Grouchy and Lipson, 2012). The core idea is that one can evolve autonomous agent controllers as mathematical equations that map from agent inputs to outputs.

Representation

An EMM-based agent is represented as a system of equations, with one equation for each of the $N$ experimenter-defined outputs $v_i$ in the simulation.
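To make this representation concrete, the sketch below shows one way an EMM-style agent could be stored and evaluated as a system of equation trees over the four basic operators. The node structure, the protected division (a common GP convention) and the hand-built example trees are illustrative assumptions, not the paper's implementation; the full algorithm also evolves extra state variables and tree structure, which are not shown here.

import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def protected_div(a, b):
    """Division guarded against divide-by-zero, as commonly done in GP."""
    return a / b if abs(b) > 1e-9 else 1.0

OPS['/'] = protected_div

class Node:
    """A node of an equation tree: either an operator with two children,
    or a leaf holding an input index (int) or a numeric constant (float)."""
    def __init__(self, op=None, left=None, right=None, leaf=None):
        self.op, self.left, self.right, self.leaf = op, left, right, leaf

    def eval(self, inputs):
        if self.leaf is not None:
            return inputs[self.leaf] if isinstance(self.leaf, int) else self.leaf
        return OPS[self.op](self.left.eval(inputs), self.right.eval(inputs))

# An agent is a system of equations: one tree per output v_i.
# Hypothetical hand-built agent with two outputs over inputs x[0], x[1]:
agent = [
    Node('+', Node(leaf=0), Node('*', Node(leaf=1), Node(leaf=0.5))),  # v_1 = x0 + 0.5*x1
    Node('/', Node(leaf=1), Node('+', Node(leaf=0), Node(leaf=1.0))),  # v_2 = x1 / (x0 + 1)
]

outputs = [tree.eval([0.2, -1.3]) for tree in agent]  # evaluated once per timestep

Genetic Programming would then act on such trees with the usual subtree crossover and mutation operators, while the mapping from inputs to outputs remains an explicit, analyzable set of equations.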