Artificial Life Programming in the Robust-First Attractor


DOI: http://dx.doi.org/10.7551/978-0-262-33027-5-ch097

David H. Ackley¹ and Elena S. Ackley²
¹University of New Mexico, Albuquerque, NM 87131
²Ackleyshack LLC, Placitas, NM 87043
[email protected]

Abstract

Despite mounting awareness of the liabilities of deterministic CPU and RAM computing, across industry and academia there remains little clear vision of a fundamental, general-purpose alternative. To obtain indefinitely scalable computer architectures offering improved robustness and security, we have advocated a realignment of the roles of hardware and software based on artificial life principles. In this paper we propose an active media computational abstraction to underlie such a hardware-software renegotiation. The active media framework is much in the spirit of probabilistic cellular automata, but designed for indefinite scalability and serious programmability, rather than simplicity and analytic tractability. We discuss active media programming techniques based on living systems principles, and present anecdotal data from sample programs to introduce a new programming language called ulam, that we are developing as an underlying language for active media.

Introduction

As the hegemony of CPU and RAM declines, for the first time in decades significantly new computer architectures are appearing—from the nothing-but-net neural architecture of IBM's TrueNorth (Merolla et al., 2014), to the memristor-driven flat parallelism of HP's "The Machine" (Williams, 2014). With the potential on the horizon for a major evolutionary transition in computer architecture, it is an opportune time to reconnect with first principles before shortlisting successors. The result of such a process, we believe, will be the recognition of artificial life as a (perhaps the) major force driving future architectural innovation.

Escape from the SDA

Serial deterministic computing based on CPU and RAM is a vast attractor, a valley deep and wide, in a notional space of all possible models of computation. This Serial Deterministic Attractor (SDA) is laced with interlocking design decisions surrounding its core demand for logical correctness—which allows the inherent fragility of extremely efficient software to be masked by extremely reliable hardware. Until a bug, or an attacker, appears.

Although the SDA robustness and security properties are dubious, and its scalability is rapidly dwindling, it has been so dominant that alternatives may seem unthinkable. One might imagine that fields like fault tolerance (IEEE, 2013, e.g.) or probabilistic algorithms (Karp, 1991) fall outside the SDA, but by 'virtually guaranteeing' deterministic execution, they actually entrench it. The same is true of many other non-traditional but still deterministic models, such as synchronous cellular automata (von Neumann and Burks, 1966; Ulam, 1950; Toffoli and Margolus, 1987, e.g.), data flow machines and systolic arrays (Borkar et al., 1988; Budzynowski and Heiser, 2013, e.g.), and asynchronous circuit-level techniques such as GALS and RALA (Kishinevsky et al., 2007; Gershenfeld et al., 2010).

Probabilistic cellular automata (PCA) (Grinstein et al., 1985; Agapie et al., 2014, e.g.) do go decisively beyond determinism, and they are general enough to embrace the kind of models we explore—but their motivations and methods are sharply divergent from the present effort. PCA work often presumes simple and stylized noise models, and proceeds—preferably by formal analysis—to derive insights into equilibrium distributions and other system properties. But when such research begins by postulating a state transition matrix, the small matter of actual PCA programming is silently assumed away. Yes, the transition matrix is a powerfully general device; no, you don't want to program in it.
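As a rough sense of scale (a back-of-the-envelope count, not taken from the paper): a PCA transition matrix assigns a probability distribution over the k possible successor states to every one of the k^n configurations of an n-site neighborhood, so "programming in it" means filling in

    k^n rows, each with (k - 1) free probabilities;
    k = 2,   n = 9 (binary states, 3x3 Moore neighborhood):  2^9 = 512 rows;
    k = 256, n = 9 (one byte of state per site):  256^9 ≈ 4.7 × 10^21 rows.

Any practical programming model has to generate such a table implicitly, from compact local rules, rather than state it entry by entry.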
Recently, there have been some serious programming research efforts that, while remaining mostly traditional, do explicitly abandon determinism and accept some small output errors—often with the motivation of increased parallel efficiency (Cappello et al., 2009; Elliott et al., 2014; Misailovic et al., 2013; Renganarayana et al., 2012, e.g.). We cheer all such efforts but worry they may fail to gain traction because their incremental practicality leaves them struggling up the sides of the SDA valley, with all the downhill directions behind them.

Colonize the RFA

There is at least one fundamental alternative, which we here call the Robust-First Attractor (RFA), in the space of all possible models of computation. We have been breaking trail in the RFA for some time (Ackley and Cannon, 2011; Ackley, 2013b; Ackley et al., 2013; Ackley, 2013a; Ackley and Small, 2014a), and can report it is strikingly unlike the SDA, but at least as vast: It is a natural way to understand the computational properties of living systems, which have always made do without the luxury of deterministic execution.

Life fills space, as long as suitable resources are available; every RFA architecture must do the same, and that core demand for indefinite scalability is surrounded by interacting design decisions often deeply complementary to the SDA's. A von Neumann machine by itself simply isn't an RFA architecture; it is just incomplete, and thus unevaluatable, until a method is defined for tiling unbounded space with it.

Most software-based artificial life models are designed to run on single von Neumann machines.¹ Unsurprisingly, therefore, the properties of such models typically depend critically on deterministic execution, as typified by the utter collapse of constructs in Conway's game of life when facing even mild asynchrony (Bersini and Detours (1994); see also Beer (2014)).

Determinism is a property of the small and the fragile; it is fundamentally misaligned with living systems. It warps our expectations; it's time to move on.

Programmable active media

SDA models are well-suited to implementation in passive, "cold" materials, where uniformity rules, change is rare, and free energy is expensive—conditions where, indeed, living systems may survive but will rarely thrive. However, some environments are diverse in space, dynamic in time, and energetically rich, bountiful, like a rain forest or a sunny day at the shore. We abstract such circumstances into active media computational models—unbounded spatial architectures in which each discretized location performs logical state transitions based on its local neighborhood, but with uncertain and variable frequencies and only limited reliability.

An active medium can change spontaneously and is inherently nondeterministic. In a programmable active medium we get to pick its state transition function—to specify, up to reliability limits, that certain neighborhood patterns shall stay constant like memory, say, while others produce transitions like a processor or a data transport, or, indeed, act like different types of hardware at different moments. The state transition function we supply is executed asynchronously in parallel across the medium, avoiding overlapping state transitions, again, with good but not guaranteed reliability.
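For concreteness, a minimal sketch of that abstraction in Python (an illustrative toy, not the authors' ulam language or hardware model; the grid, event, and diffuse names are ours, and a finite wrapped patch stands in for unbounded space). Sites update through a stream of asynchronous random events; each event fires with uncertain frequency, and stored state occasionally decays:

```python
import random

SIZE = 32                      # finite, wrapped patch standing in for unbounded space
EMPTY, DATA = 0, 1             # a toy two-symbol state alphabet

grid = [[EMPTY] * SIZE for _ in range(SIZE)]

def neighborhood(g, x, y):
    """The 3x3 Moore neighborhood around (x, y), wrapping at the edges."""
    return [g[(y + dy) % SIZE][(x + dx) % SIZE]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def event(g, transition, p_fire=0.9, p_decay=1e-4):
    """One asynchronous event at a random site: it fires with uncertain
    probability (variable frequency), and the site's stored state may
    rarely be corrupted afterward (limited reliability)."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    if random.random() < p_fire:
        g[y][x] = transition(neighborhood(g, x, y))
    if random.random() < p_decay:
        g[y][x] = random.choice((EMPTY, DATA))

def diffuse(nbhd):
    """The supplied 'program': copy the state of a randomly chosen
    neighborhood site, so DATA drifts instead of sitting still."""
    return random.choice(nbhd)

grid[SIZE // 2][SIZE // 2] = DATA
for _ in range(100_000):       # a stream of events, not synchronous sweeps
    event(grid, diffuse)
print(sum(map(sum, grid)), "DATA sites after 100,000 events")
```

A serial event loop like this avoids overlapping transitions trivially; the framework described here instead dispatches many such events in parallel across the medium, which is where the "good but not guaranteed reliability" does real work.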
A new deal for hardware and software

Clearly, compared to an SDA computer architecture, the active media model represents a very different division of labor between hardware and software, as large blocks like 'processor' and 'memory' and 'bus'—and their floorplanning—are placed largely under software control. This refactoring will presumably incur a hardware price-complexity penalty something like FPGA vs ASIC or worse—but that, in turn, may be more than offset by enabling new optimizations akin to RISC vs CISC, combined with the hair-down liberation of merely best-effort hardware determinism.

So, while the programmable active media framework² is likely a splendid deal for hardware, it may seem a brutal one-two punch for software, stunned by nondeterminism from below then flattened by expanded mission responsibilities from above. We take that added software engineering complexity as underlying the "hard to program" objection leveled against our approach in a discussion of a very interesting spatial and parallel—though apparently deterministic—model of computation (Budzynowski and Heiser, 2013).

But here's the thing: On the one hand, the software engineering job should be harder, because its relative simplicity was purchased with precisely those von Neumann machine features—a single processing locus, uniform passive memory, reliability all on hardware—that led to its Achilles' heels of unscalability and unsecurability. Serial determinism was a simple, sensible starting point, but software engineering and many related fields have emerged since von Neumann's time, and we now know quite a bit about constructing, managing, and evolving complex systems. Looking back from the RFA, for software still to be demanding general pointers and flat RAM and cache coherent global determinism seems like clutching blankie. The future will arrive anyway.

That said, and on the other hand, software's big promotion becomes less terrifying as we get down to work, because, like hope from Pandora's box, "best effort" wafts upwards from the nondeterministic hardware into the software as well. As a system component, we'll do our best with what we've got and what we get, but if things go really wrong, we can simply delete ourselves and let our kin cover for us. Correctness and robustness are measured by degrees and circumstances in living systems; in the RFA they are highly […]

¹ […] Stefanovic, 2003, e.g.); it is already possible in electronics (Ackley et al., 2013; Ganapati, 2009).