Artificial Life


Artificial Life
Steven Levy

ARTIFICIAL LIFE

Steven Levy is the author of two previous books: Hackers: Heroes of the Computer Revolution, and The Unicorn's Secret. He has written for Rolling Stone, Esquire, Harper's, and currently writes a monthly column for Macworld magazine. He lives in New York City and western Massachusetts with his wife and three-year-old son.

ALSO BY STEVEN LEVY
Hackers: Heroes of the Computer Revolution
The Unicorn's Secret: Murder in the Age of Aquarius

ARTIFICIAL LIFE
A Report from the Frontier Where Computers Meet Biology
Steven Levy
VINTAGE BOOKS, A DIVISION OF RANDOM HOUSE, INC., NEW YORK

FIRST VINTAGE BOOKS EDITION, AUGUST 1993
Copyright © 1992 by Steven Levy
All rights reserved under International and Pan-American Copyright Conventions. Published in the United States by Vintage Books, a division of Random House, Inc., New York, and simultaneously in Canada by Random House of Canada Limited, Toronto. Originally published in hardcover by Pantheon Books, a division of Random House, Inc., New York, in 1992.
Library of Congress Cataloguing-in-Publication Data
Levy, Steven. Artificial life: a report from the frontier where computers meet biology / Steven Levy.—1st Vintage Books ed. p. cm. Includes bibliographical references and index. ISBN 0-679-74389-8 (pbk.) 1. Neural networks (Computer science) I. Title. [QA76.87.L48 1993] 006.3—dc20 92-50600 CIP
Manufactured in the United States of America
10 9 8 7 6

To Andrew

CONTENTS
Prologue: In Silico 1
The Promised Land 11
Playing by the Rules 47
Garage-Band Science 85
God's Heart 121
The Genetic Algorithm 153
Alchemists and Parasites 189
Artificial Flora, Artificial Fauna, Artificial Ecologies 231
Real Artificial Life 271
The Strong Claim 309
Notes and Sources 349
Acknowledgments 371
Illustration Credits 373
Index 375

PROLOGUE: IN SILICO

If patterns of ones and zeros were "like" patterns of human lives and deaths, if everything about an individual could be represented in a computer record by a long string of ones and zeros, then what kind of creature would be represented by a long string of lives and deaths?
Thomas Pynchon

The creatures cruise silently, skimming the surface of their world with the elegance of ice skaters. They move at varying speeds, some with the variegated cadence of vacillation, others with what surely must be firm purpose. Their bodies—flecks of colors that resemble paper airplanes or pointed confetti—betray their needs. Green ones are hungry. Blue ones seek mates. Red ones want to fight. They see. A so-called neural network bestows on them vision, and they can perceive the colors of their neighbors and something of the world around them. They know something about their own internal states and can sense fatigue. They learn. Experience teaches them what might make them feel better or what might relieve a pressing need. They reproduce. Two of them will mate, their genes will merge, and the combination determines the characteristics of the offspring. Over a period of generations, the mechanics of natural selection assert themselves, and fitter creatures roam the landscape. They die, and sometimes before their bodies decay, others of their ilk devour the corpses. In certain areas, at certain times, cannibal cults arise in which this behavior is the norm. The carcasses are nourishing, but not as much as the food that can be serendipitously discovered on the terrain. The name of this ecosystem is PolyWorld, and it is located in the chips and disk drives of a Silicon Graphics Iris Workstation.
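The PolyWorld passage above is, at bottom, describing an evolutionary loop: creatures carry genomes, the fitter ones mate, genes merge with occasional mutation, and selection plays out over generations. The short Python sketch below illustrates that loop in miniature; it is not Yaeger's PolyWorld code, and every gene name and parameter in it is invented for illustration.

    # A minimal sketch (not Yaeger's actual PolyWorld) of the loop described above:
    # creatures carry a genome, the fittest mate, genes merge with a little mutation,
    # and selection plays out over generations. All names and numbers are illustrative.
    import random

    GENES = ["speed", "vision", "appetite"]

    def random_creature():
        return {g: random.random() for g in GENES}

    def fitness(creature):
        # Toy fitness: fast, sharp-eyed creatures find food; big appetites cost energy.
        return creature["speed"] + creature["vision"] - 0.5 * creature["appetite"]

    def mate(a, b):
        # Each gene comes from one parent, with a small chance of mutation.
        child = {g: random.choice((a[g], b[g])) for g in GENES}
        if random.random() < 0.1:
            gene = random.choice(GENES)
            child[gene] = min(1.0, max(0.0, child[gene] + random.gauss(0, 0.1)))
        return child

    population = [random_creature() for _ in range(100)]
    for generation in range(200):
        # The fitter half of the population survives and reproduces.
        survivors = sorted(population, key=fitness, reverse=True)[:50]
        population = [mate(*random.sample(survivors, 2)) for _ in range(100)]

    print("mean fitness after 200 generations:",
          sum(fitness(c) for c in population) / len(population))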
The sole creator of this realm is a researcher named Larry Yaeger, who works for Apple Computer. It is a world inside a computer, whose inhabitants are, in effect, made of mathematics. The creatures have digital DNA. Some of these creatures are more fit than others, and those are the ones who eventually reproduce, forging a path that eventually leads to several sorts of organisms who successfully exploit the peccadilloes of PolyWorld. "The species have their own unique behaviors and group dynamics," notes Yaeger. One group seems on the edge of psychosis—the "frenetics," who, zipping compulsively through the landscape, constantly desire food and sex and expend energy on little else. Then there is "the cannibal cult," members of which seek their own to mate with, fight with, and eat. They form grotesque clumps from which they need not move in order to fulfill any of those needs. A third species is the "edge runner." Owing to a peculiarity in the landscape—unlike our own spherical planet, PolyWorld can be programmed to have a distinct end of the world—there is a benefit in lurking on the brim of oblivion. Once a respectable number of fellow creatures adopt this behavior, there will always be an ample supply of conjugal partners, as well as old carcasses now turned to food.

Yaeger is cautious about sweeping statements; he prefers to describe what he has done and what might immediately follow from it. "So far what PolyWorld has shown is that successful organisms in a biologically motivated and only somewhat complex environment have evolved adaptive strategies for living in this environment," he says. When it comes to describing the creatures themselves, Yaeger is less tentative. "I see them," he says, "as artificial life."

In September 1987, more than one hundred scientists and technicians gathered in Los Alamos, New Mexico, to establish the new science of artificial life. The event celebrated a technological and scientific watershed. A deepened understanding of biological mechanisms, along with the exponentially increasing power of digital computers, had brought humankind to the threshold of duplicating nature's masterpiece, living systems. The pioneers were both thrilled at the prospect and humbled by previous speculations of what lay ahead. The legacy of Mary Shelley, who wrote of Frankenstein and his monster, as well as the dark accomplishments hatched on the very site of the conference, hovered over the proceedings like, as one participant put it, a bugaboo. Nevertheless, the mood was exuberant. Many of the scientists drawn to Los Alamos had long dreamed of an aggregate effort to create a new form of life; their individual labors had looked toward that day. Now it had arrived. The significance of the moment was later framed by a physicist named James Doyne Farmer, who coauthored a paper about the implications of this new science. Its abstract alone was perhaps as striking a description of nascent technology at the lab as any since the development of the atomic bomb. Those who read it would have been well advised to take a deep breath—and perhaps suspend disbelief until resuming aspiration—before reading the following prediction:

Within fifty to a hundred years a new class of organisms is likely to emerge. These organisms will be artificial in the sense that they will originally be designed by humans. However, they will reproduce, and will evolve into something other than their original form; they will be "alive" under any reasonable definition of the word.
The advent of artificial life will be the most significant historical event since the emergence of human beings.

Artificial life, or a-life, is devoted to the creation and study of lifelike organisms and systems built by humans. The stuff of this life is nonorganic matter, and its essence is information: computers are the kilns from which these new organisms emerge. Just as medical scientists have managed to tinker with life's mechanisms in vitro, the biologists and computer scientists of a-life hope to create life in silico.

The degree to which this resembles real, "wet" life varies; many experimenters admit freely that their laboratory creations are simply simulations of aspects of life. The goal of these practitioners of "weak" a-life is to illuminate and understand more clearly the life that exists on earth and possibly elsewhere. As astronomer A. S. Eddington has said, "the contemplation in natural science of a wider domain than the actual leads to a far better understanding of the actual." By simulating a kind of life different from that with which we are familiar, a-life scientists seek to explore paths that no form of life in the universe has yet taken, the better to understand the concepts and limits of life itself. Hoping that the same sorts of behavior found in nature will spontaneously emerge from the simulations, scientists sometimes attempt to model directly processes characteristic of living systems. Biologists treat these artificial systems as the ultimate laboratory animals; their characteristics illuminate the traits of known organisms, but, since their composition is transparent, they are much more easily analyzed than rats, plants, or E. coli. Physicists pursue a-life in the hope that the synthesis of life will shed light on a related quest: the understanding of all complex nonlinear systems, which are thought to be ruled by universal forces not yet comprehended. By studying phenomena such as self-organization in a-life, these mysteries may soon be unraveled.

The boldest practitioners of this science engage in "strong" a-life. They look toward the long-term development of actual living organisms whose essence is information. These creatures may be embodied in corporeal form—a-life robots—or they may live within a computer. Whichever, these creations, as Farmer insisted, are intended to be "alive under every reasonable definition of the word"—as much as bacteria, plants, animals, and human beings. Many might consider this an absurd claim on the face of it. How could something inside a computer ever be considered alive? Could anything synthesized by humans ever aspire to such a classification? Should not the term "life" be restricted to nature's domain? The question is difficult to answer, largely because we have no "reasonable definition" of life. Nearly two thousand years ago, Aristotle made the observation that by "possessing life," one implied that "a thing can nourish itself and decay." Most everyone also agreed that the capacity for self-reproduction is a necessary condition for life. From there, opinions diverged and still do. One could devise a laundry list of qualities characteristic of life, but these inevitably fail. They are either overly discriminating or excessively lenient.
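One reason the self-reproduction criterion mentioned above is slippery is that trivial self-reproduction is easy to achieve in software. The classic Python quine below prints an exact copy of its own source text, a complete description of itself, yet nobody would call it alive; this is the sort of edge case that makes laundry-list definitions of life either too strict or too lenient.

    # This program prints an exact copy of its own source, comment included.
    s = '# This program prints an exact copy of its own source, comment included.\ns = %r\nprint(s %% s)'
    print(s % s)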
Recommended publications
  • Guiding Organic Management in a Service-Oriented Real-Time Middleware Architecture
    Guiding Organic Management in a Service-Oriented Real-Time Middleware Architecture. Manuel Nickschas and Uwe Brinkschulte, Institute for Computer Science, University of Frankfurt, Germany. {nickschas, brinks}@es.cs.uni-frankfurt.de
    Abstract. To cope with the ever increasing complexity of today's computing systems, the concepts of organic and autonomic computing have been devised. Organic or autonomic systems are characterized by so-called self-X properties such as self-configuration and self-optimization. This approach is particularly interesting in the domain of distributed, embedded, real-time systems. We have already proposed a service-oriented middleware architecture for such systems that uses multi-agent principles for implementing the organic management. However, organic management needs some guidance in order to take dependencies between services into account as well as the current hardware configuration and other application-specific knowledge. It is important to allow the application developer or system designer to specify such information without having to modify the middleware. In this paper, we propose a generic mechanism based on capabilities that allows describing such dependencies and domain knowledge, which can be combined with an agent-based approach to organic management in order to realize self-X properties. We also describe how to make use of this approach for integrating the middleware's organic management with node-local organic management.
    1 Introduction. Distributed, embedded systems are rapidly advancing in all areas of our lives, forming increasingly complex networks that are increasingly hard to handle for both system developers and maintainers. In order to cope with this explosion of complexity, also commonly referred to as the Software Crisis [1], the concepts of Autonomic [2,3] and Organic [4,5,6] Computing have been devised.
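    The mechanism sketched in this abstract amounts to letting services declare what they provide and what they require, so that organic management can check a configuration without the middleware being modified. The snippet below is only a rough Python illustration of that idea; the class names, capability strings and validity check are invented here and are not the paper's actual mechanism.

        # Illustrative sketch only (not the middleware from the paper): services declare
        # the capabilities they provide and require, so an organic manager can check
        # whether a proposed configuration satisfies all dependencies.
        from dataclasses import dataclass, field

        @dataclass
        class Service:
            name: str
            provides: set = field(default_factory=set)
            requires: set = field(default_factory=set)

        def configuration_valid(services, hardware_capabilities):
            # True if every required capability is provided by some service or the hardware.
            available = set(hardware_capabilities)
            for s in services:
                available |= s.provides
            return all(s.requires <= available for s in services)

        services = [
            Service("sensor-fusion", provides={"position"}, requires={"imu", "gps"}),
            Service("navigation", provides={"route"}, requires={"position"}),
        ]
        print(configuration_valid(services, hardware_capabilities={"imu", "gps"}))  # True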
  • Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues
    OII / e-Horizons Forum Discussion Paper No. 14, January 2008. Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues, by Malcolm Peltu, Oxford Internet Institute, and Yorick Wilks, Oxford Internet Institute. Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford OX1 3JS, United Kingdom. © University of Oxford for the Oxford Internet Institute 2008.
    Foreword. This paper summarizes discussions at the multidisciplinary forum held at the University of Oxford on 26 October 2007 entitled Artificial Companions in Society: Perspectives on the Present and Future, as well as an open meeting the previous day addressed by Sherry Turkle. The event was organized by Yorick Wilks, Senior Research Fellow for the Oxford Internet Institute (OII), on behalf of the e-Horizons Institute and in association with the EU Integrated Project COMPANIONS. COMPANIONS is studying conversational software-based artificial agents that will get to know their owners over a substantial period. These could be developed to advise, comfort and carry out a wide range of functions to support diverse personal and social needs, such as to be 'artificial companions' for the elderly, helping their owners to learn, or assisting to sustain their owners' fitness and health. The invited forum participants, including computer and social scientists, also discussed a range of related developments that use advanced artificial intelligence and human–computer interaction approaches. They examined key issues in building artificial companions, emphasizing their social, personal, emotional and ethical implications. This paper summarizes the main issues raised.
  • Organic Computing Doctoral Dissertation Colloquium 2018 B
    S. Tomforde, B. Sick (Eds.), Organic Computing Doctoral Dissertation Colloquium 2018. ISBN 978-3-7376-0696-7. kassel university press.
    Preface. The Organic Computing Doctoral Dissertation Colloquium (OC-DDC) series is part of the Organic Computing initiative [9, 10] which is steered by a Special Interest Group of the German Society of Computer Science (Gesellschaft für Informatik e.V.). It provides an environment for PhD students who have their research focus within the broader Organic Computing (OC) community to discuss their PhD project and current trends in the corresponding research areas. This includes, for instance, research in the context of Autonomic Computing [7], ProActive Computing [11], Complex Adaptive Systems [8], Self-Organisation [2], and related topics. The main goal of the colloquium is to foster excellence in OC-related research by providing feedback and advice that are particularly relevant to the students' studies and career development. Thereby, participants are involved in all stages of the colloquium organisation and the review process to gain experience. The OC-DDC 2018 took place in
  • Open Problems in Artificial Life
    Open Problems in Artificial Life. Mark A. Bedau, John S. McCaskill, Norman H. Packard, Steen Rasmussen, Chris Adami, David G. Green, Takashi Ikegami, Kunihiko Kaneko, Thomas S. Ray.
    Abstract. This article lists fourteen open problems in artificial life, each of which is a grand challenge requiring a major advance on a fundamental issue for its solution. Each problem is briefly explained, and, where deemed helpful, some promising paths to its solution are indicated.
    Keywords: artificial life, evolution, self-organization, emergence, self-replication, autopoeisis, digital life, artificial chemistry, selection, evolutionary learning, ecosystems, biosystems, astrobiology, evolvable hardware, dynamical hierarchies, origin of life, simulation, cultural evolution, information theory.
    1 Introduction. At the dawn of the last century, Hilbert proposed a set of open mathematical problems. They proved to be an extraordinarily effective guideline for mathematical research in the following century. Based on a substantial body of existing mathematical theory, the challenges were both precisely formulated and positioned so that a significant body of missing theory needed to be developed to achieve their solution, thereby enriching mathematics as a whole. In contrast with mathematics, artificial life is quite young and essentially interdisciplinary. The phrase "artificial life" was coined by C. Langton [11], who envisaged an investigation of life as it is in the context of life as it could be. Although artificial life is fundamentally directed towards both the origins of biology and its future, the scope and complexity of its subject require interdisciplinary cooperation and collaboration. This broadly based area of study embraces the possibility of discovering lifelike behavior in unfamiliar settings and creating new and unfamiliar forms of life, and its major aim is to develop a coherent theory of life in all its manifestations, rather than an historically contingent documentation bifurcated by discipline.
  • Open-Endedness for the Sake of Open-Endedness
    Open-Endedness for the Sake of Open-Endedness. Arend Hintze, Michigan State University, Department of Integrative Biology, Department of Computer Science and Engineering, BEACON Center for the Study of Evolution in Action. [email protected]
    Abstract. Natural evolution keeps inventing new complex and intricate forms and behaviors. Digital evolution and genetic algorithms fail to create the same kind of complexity, not just because we still lack the computational resources to rival nature, but because (it has been argued) we have not understood in principle how to create open-ended evolving systems. Much effort has been made to define such open-endedness so as to create forms of increasing complexity indefinitely. Here, however, a simple evolving computational system that satisfies all such requirements is presented. Doing so reveals a shortcoming in the definitions for open-ended evolution. The goal to create models that rival biological complexity remains. This work suggests that our current definitions allow for even simple models to pass as open-ended, and that our definitions of complexity and diversity are more important for the quest of open-ended evolution than the fact that something runs indefinitely.
    Keywords: open-ended evolution, computational model, complexity, diversity.
    1 Introduction. Open-ended evolution has been identified as a key challenge in artificial life research [4]. Specifically, it has been acknowledged that there is a difference between an open-ended system and a system that just has a long run time. Large or slow systems will eventually converge on a solution, but open-ended systems will not and instead keep evolving. Interestingly, a system that oscillates or that is in a dynamic equilibrium [21] would continuously evolve, but does not count as an open-ended evolving system.
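    The abstract's point that running indefinitely is not the same as being interestingly open-ended can be seen in a toy model. The sketch below is my own illustration, not Hintze's system, and all parameters are invented: a bit-string population under purely neutral mutation never settles on a final genome, so it keeps evolving forever, yet its complexity and diversity stay trivially low.

        # Toy illustration (not the model from the paper): neutral drift plus mutation
        # keeps the population changing indefinitely without producing any complexity.
        import random

        def mutate(genome, rate=0.01):
            # Flip each bit independently with a small probability.
            return [b ^ 1 if random.random() < rate else b for b in genome]

        population = [[0] * 64 for _ in range(50)]
        for generation in range(1000):
            # Neutral selection: every genome is equally fit; offspring are mutated copies.
            population = [mutate(random.choice(population)) for _ in range(50)]

        distinct = len({tuple(g) for g in population})
        print(f"{distinct} distinct genomes after 1000 generations")  # keeps drifting, never converges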
  • In BLACK CLOCK, Alaska Quarterly Review, the Rattling Wall and Trop, and She Is Co-Organizer of the Griffith Park Storytelling Series
    BLACK CLOCK no. 20 SPRING/SUMMER 2015 2 EDITOR Steve Erickson SENIOR EDITOR Bruce Bauman MANAGING EDITOR Orli Low ASSISTANT MANAGING EDITOR Joe Milazzo PRODUCTION EDITOR Anne-Marie Kinney POETRY EDITOR Arielle Greenberg SENIOR ASSOCIATE EDITOR Emma Kemp ASSOCIATE EDITORS Lauren Artiles • Anna Cruze • Regine Darius • Mychal Schillaci • T.M. Semrad EDITORIAL ASSISTANTS Quinn Gancedo • Jonathan Goodnick • Lauren Schmidt Jasmine Stein • Daniel Warren • Jacqueline Young COMMUNICATIONS EDITOR Chrysanthe Tan SUBMISSIONS COORDINATOR Adriana Widdoes ROVING GENIUSES AND EDITORS-AT-LARGE Anthony Miller • Dwayne Moser • David L. Ulin ART DIRECTOR Ophelia Chong COVER PHOTO Tom Martinelli AD DIRECTOR Patrick Benjamin GUIDING LIGHT AND VISIONARY Gail Swanlund FOUNDING FATHER Jon Wagner Black Clock © 2015 California Institute of the Arts Black Clock: ISBN: 978-0-9836625-8-7 Black Clock is published semi-annually under cover of night by the MFA Creative Writing Program at the California Institute of the Arts, 24700 McBean Parkway, Valencia CA 91355 THANK YOU TO THE ROSENTHAL FAMILY FOUNDATION FOR ITS GENEROUS SUPPORT Issues can be purchased at blackclock.org Editorial email: [email protected] Distributed through Ingram, Ingram International, Bertrams, Gardners and Trust Media. Printed by Lightning Source 3 Norman Dubie The Doorbell as Fiction Howard Hampton Field Trips to Mars (Psychedelic Flashbacks, With Scones and Jam) Jon Savage The Third Eye Jerry Burgan with Alan Rifkin Wounds to Bind Kyra Simone Photo Album Ann Powers The Sound of Free Love Claire
  • Artificial Intelligence: How Does It Work, Why Does It Matter, and What Can We Do About It?
    Artificial intelligence: How does it work, why does it matter, and what can we do about it? STUDY Panel for the Future of Science and Technology EPRS | European Parliamentary Research Service Author: Philip Boucher Scientific Foresight Unit (STOA) PE 641.547 – June 2020 EN Artificial intelligence: How does it work, why does it matter, and what can we do about it? Artificial intelligence (AI) is probably the defining technology of the last decade, and perhaps also the next. The aim of this study is to support meaningful reflection and productive debate about AI by providing accessible information about the full range of current and speculative techniques and their associated impacts, and setting out a wide range of regulatory, technological and societal measures that could be mobilised in response. AUTHOR Philip Boucher, Scientific Foresight Unit (STOA), This study has been drawn up by the Scientific Foresight Unit (STOA), within the Directorate-General for Parliamentary Research Services (EPRS) of the Secretariat of the European Parliament. To contact the publisher, please e-mail [email protected] LINGUISTIC VERSION Original: EN Manuscript completed in June 2020. DISCLAIMER AND COPYRIGHT This document is prepared for, and addressed to, the Members and staff of the European Parliament as background material to assist them in their parliamentary work. The content of the document is the sole responsibility of its author(s) and any opinions expressed herein should not be taken to represent an official position of the Parliament. Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.
  • Think Complexity: Exploring Complexity Science in Python
    Think Complexity, Version 2.6.2. Allen B. Downey. Green Tea Press, 9 Washburn Ave, Needham MA 02492. Copyright © 2016 Allen B. Downey. Permission is granted to copy, distribute, transmit and adapt this work under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License: https://thinkcomplex.com/license. If you are interested in distributing a commercial version of this work, please contact the author. The LaTeX source for this book is available from https://github.com/AllenDowney/ThinkComplexity2
    Contents: Preface (Who is this book for?; Changes from the first edition; Using the code); 1 Complexity Science (The changing criteria of science; The axes of scientific models; Different models for different purposes; Complexity engineering; Complexity thinking); 2 Graphs (What is a graph?; NetworkX; Random graphs; Generating graphs; Connected graphs; Generating ER graphs; Probability of connectivity; Analysis of graph algorithms; Exercises); 3 Small World Graphs (Stanley Milgram; Watts and Strogatz; Ring lattice; WS graphs; Clustering; Shortest path lengths; The WS experiment; What kind of explanation is that?; Breadth-First Search)
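    The graph chapters listed above can be tried out in a few lines with NetworkX, the library the book itself uses. The snippet below sketches the ER versus Watts-Strogatz comparison; the graph sizes and probabilities are arbitrary choices for illustration, not values taken from the book.

        # Compare clustering and average path length in an Erdos-Renyi random graph
        # and a Watts-Strogatz small-world graph, using NetworkX (https://networkx.org).
        import networkx as nx

        er = nx.erdos_renyi_graph(n=1000, p=0.01, seed=1)
        ws = nx.watts_strogatz_graph(n=1000, k=10, p=0.1, seed=1)

        for name, g in [("ER", er), ("WS", ws)]:
            if not nx.is_connected(g):
                # Path lengths are only defined on a connected graph, so keep the giant component.
                g = g.subgraph(max(nx.connected_components(g), key=len))
            print(name,
                  "clustering:", round(nx.average_clustering(g), 3),
                  "avg path length:", round(nx.average_shortest_path_length(g), 2))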
  • Teaching and Learning with Digital Evolution: Factors Influencing Implementation and Student Outcomes
    TEACHING AND LEARNING WITH DIGITAL EVOLUTION: FACTORS INFLUENCING IMPLEMENTATION AND STUDENT OUTCOMES By Amy M Lark A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Curriculum, Instruction, and Teacher Education – Doctor of Philosophy 2014 ABSTRACT TEACHING AND LEARNING WITH DIGITAL EVOLUTION: FACTORS INFLUENCING IMPLEMENTATION AND STUDENT OUTCOMES By Amy M Lark Science literacy for all Americans has been the rallying cry of science education in the United States for decades. Regardless, Americans continue to fall short when it comes to keeping pace with other developed nations on international science education assessments. To combat this problem, recent national reforms have reinvigorated the discussion of what and how we should teach science, advocating for the integration of disciplinary core ideas, crosscutting concepts, and science practices. In the biological sciences, teaching the core idea of evolution in ways consistent with reforms is fraught with challenges. Not only is it difficult to observe biological evolution in action, it is nearly impossible to engage students in authentic science practices in the context of evolution. One way to overcome these challenges is through the use of evolving populations of digital organisms. Avida-ED is digital evolution software for education that allows for the integration of science practice and content related to evolution. The purpose of this study was to investigate the effects of Avida-ED on teaching and learning evolution and the nature of science. To accomplish this I conducted a nationwide, multiple-case study, documenting how instructors at various institutions were using Avida-ED in their classrooms, factors influencing implementation decisions, and effects on student outcomes.
  • How Artificial Intelligence Works
    BRIEFING: How artificial intelligence works. From the earliest days of artificial intelligence (AI), its definition focused on how its results appear to show intelligence, rather than the methods that are used to achieve it. Since then, AI has become an umbrella term which can refer to a wide range of methods, both current and speculative. It applies equally well to tools that help doctors to identify cancer as it does to self-replicating robots that could enslave humanity centuries from now. Since AI is simultaneously represented as high-risk, low-risk and everything in between, it is unsurprising that it attracts controversy. This briefing provides accessible introductions to some of the key techniques that come under the AI banner, grouped into three sections to give a sense of the chronology of its development. The first describes early techniques, described as 'symbolic AI', while the second focuses on the 'data driven' approaches that currently dominate, and the third looks towards possible future developments. By explaining what is 'deep' about deep learning and showing that AI is more maths than magic, the briefing aims to equip the reader with the understanding they need to engage in clear-headed reflection about AI's opportunities and challenges, a topic that is discussed in the companion briefing, Why artificial intelligence matters.
    First wave: Symbolic artificial intelligence. Expert systems. In these systems, a human expert creates precise rules that a computer can follow, step by step, to decide how to respond to a given situation. The rules are often expressed in an 'if-then-else' format. Symbolic AI can be said to 'keep the human in the loop' because the decision-making process is closely aligned to how human experts make decisions.
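    The 'if-then-else' style of expert system described above is easy to show in code. The toy rule set below is invented purely for illustration (it is not from the briefing, and certainly not medical advice): a human 'expert' writes explicit rules, and the program's decision path can be inspected step by step.

        # A toy symbolic-AI "expert system" in the if-then-else style the briefing describes.
        # The triage domain and every rule here are invented for illustration.
        def triage(symptoms):
            # A human "expert" encodes decisions as explicit, inspectable rules.
            if "chest pain" in symptoms and "shortness of breath" in symptoms:
                return "emergency"
            elif "fever" in symptoms and "rash" in symptoms:
                return "see a doctor today"
            elif "cough" in symptoms:
                return "rest and monitor"
            else:
                return "no rule matched; refer to a human expert"

        print(triage({"fever", "rash"}))   # see a doctor today
        print(triage({"headache"}))        # no rule matched; refer to a human expert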
  • Network Complexity of Foodwebs
    Network Complexity of Foodwebs. Russell Standish, School of Mathematics and Statistics, University of New South Wales.
    Abstract. In previous work, I have developed an information theoretic complexity measure of networks. When applied to several real world food webs, there is a distinct difference in complexity between the real food web, and randomised control networks obtained by shuffling the network links. One hypothesis is that this complexity surplus represents information captured by the evolutionary process that generated the network. In this paper, I test this idea by applying the same complexity measure to several well-known artificial life models that exhibit ecological networks: Tierra, EcoLab and Webworld. Contrary to what was found in real networks, the artificial life generated foodwebs had little information difference between itself and randomly shuffled versions.
    Complexity as Information. The notion of using information content as a complexity measure is fairly simple. In most cases, there is an obvious prefix-free representation language within which descriptions of the objects of interest can be encoded. There is also a classifier of descriptions that can determine if two descriptions correspond to the same object. This classifier is commonly called the observer, denoted O(x). To compute the complexity of some object x, count the number of equivalent descriptions ω(ℓ, x) of length ℓ that map to the object x under the agreed classifier. Then the complexity of x is given in the limit as ℓ → ∞:
    C(x) = lim_{ℓ→∞} [ℓ log N − log ω(ℓ, x)]   (1)
    where N is the size of the alphabet used for the representation language.
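    To make equation (1) concrete, here is a small numerical toy of my own; it is not code from the paper. Descriptions are bit strings over an alphabet of size N = 2, and a deliberately crude observer treats two descriptions as equivalent when they contain the same number of ones, so classes with many equivalent descriptions carry little information and receive a low C(x).

        # Toy illustration of C(x) = lim_{l->inf} [ l*log(N) - log(omega(l, x)) ].
        import math
        from itertools import product

        ALPHABET = "01"          # representation language; N = 2
        N = len(ALPHABET)

        def classify(description):
            # Toy observer O(x): descriptions are equivalent if they share a count of ones.
            return description.count("1")

        def complexity(x, length):
            # omega(length, x): number of length-l descriptions in the same class as x.
            omega = sum(1 for d in product(ALPHABET, repeat=length)
                        if classify("".join(d)) == classify(x))
            return length * math.log(N) - math.log(omega)

        print(round(complexity("000000000000", 12), 3))  # unique class: maximal C, 12*log 2
        print(round(complexity("010101010101", 12), 3))  # 924 equivalent descriptions: much lower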
  • Evolving Quorum Sensing in Digital Organisms
    Evolving Quorum Sensing in Digital Organisms. Benjamin E. Beckmann and Phillip K. McKinley, Department of Computer Science and Engineering, 3115 Engineering Building, Michigan State University, East Lansing, Michigan 48824. {beckma24,mckinley}@cse.msu.edu
    ABSTRACT. For centuries it was thought that bacteria live asocial lives. However, recent discoveries show many species of bacteria communicate in order to perform tasks previously thought to be limited to multicellular organisms. Central to this capability is quorum sensing, whereby organisms detect cell density and use this information to trigger group behaviors. Quorum sensing is used by bacteria in the formation of biofilms, secretion of digestive enzymes and, in the case of pathogenic bacteria, release of toxins or other virulence factors. Indeed, methods to disrupt quorum sensing are currently being investigated as possible treatments for numerous diseases, including cystic fibrosis, epidemic cholera, and methicillin-resistant Staphylococcus aureus.
    1. INTRODUCTION. The human body is made up of 10^13 human cells. Yet, this number is an order of magnitude smaller than the number of bacterial cells living within the gastrointestinal tract of a single adult [4]. Although it was previously assumed that bacteria and other microorganisms rarely interact [24], in 1979 Nealson and Hastings found evidence that bacterial communities of Vibrio fischeri and Vibrio harveyi were able to perform a coordinated behavioral change, namely emitting light, when the cell density rose above a certain threshold [20]. This type of density-based behavioral change is called quorum sensing [27]. In quorum sensing (QS), continual secretion and detection of chemicals provide a way
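    The density-triggered switch described in the introduction can be sketched in a few lines. The toy below is invented for illustration only; the paper's own experiments evolve this behavior in digital organisms rather than hard-coding it, and every threshold and growth rate here is arbitrary.

        # Minimal quorum-sensing sketch: each cell secretes a signal, and when the local
        # signal concentration crosses a threshold, the group switches behavior (light on).
        import random

        SIGNAL_PER_CELL = 1.0
        THRESHOLD = 50.0   # arbitrary concentration needed to trigger the group behavior

        def luminesce(n_cells):
            # Total signal concentration grows with cell density (well-mixed volume assumed).
            concentration = n_cells * SIGNAL_PER_CELL
            return concentration >= THRESHOLD

        population = 5
        for hour in range(10):
            population = int(population * random.uniform(1.4, 1.8))  # the colony grows
            state = "emitting light" if luminesce(population) else "dark"
            print(f"hour {hour}: {population} cells, {state}")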