The Stochastic Thermodynamics of Computation

David H. Wolpert
Santa Fe Institute, Santa Fe, New Mexico
Complexity Science Hub, Vienna
Arizona State University, Tempe, Arizona
http://davidwolpert.weebly.com

To cite this article: David Wolpert et al 2019 J. Phys. A: Math. Theor., in press. https://doi.org/10.1088/1751-8121/ab0850

One of the central concerns of computer science is how the resources needed to perform a given computation depend on that computation. Moreover, one of the major resource requirements of computers — ranging from biological cells to human brains to high-performance (engineered) computers — is the energy used to run them, i.e., the thermodynamic costs of running them. The thermodynamic costs of performing a computation have been a long-standing focus of research in physics, going back (at least) to the early work of Landauer, in which he argued that the thermodynamic cost of erasing a bit in any physical system is at least kT ln[2].

One of the most prominent aspects of computers is that they are inherently non-equilibrium systems. However, the research by Landauer and co-workers was done when non-equilibrium statistical physics was still in its infancy, requiring them to rely on equilibrium statistical physics.
This limited the breadth of issues this early research could address, leading them to focus on the number of times a bit is erased during a computation — an issue having little bearing on the central concerns of computer science theorists.

Since then there have been major breakthroughs in nonequilibrium statistical physics, leading in particular to the new subfield of “stochastic thermodynamics”. These breakthroughs have allowed us to put the thermodynamic analysis of bit erasure on a fully formal (nonequilibrium) footing. They are also allowing us to investigate the myriad aspects of the relationship between statistical physics and computation, extending well beyond the issue of how much work is required to erase a bit.

In this paper I review some of this recent work on the “stochastic thermodynamics of computation”. After reviewing the salient parts of information theory, computer science theory, and stochastic thermodynamics, I summarize what has been learned about the entropic costs of performing a broad range of computations, extending from bit erasure to loop-free circuits to logically reversible circuits to information ratchets to Turing machines. These results reveal new, challenging engineering problems for how to design computers to have minimal thermodynamic costs. They also allow us to start to combine computer science theory and stochastic thermodynamics at a foundational level, thereby expanding both.

I. INTRODUCTION

A. Computers ranging from cells to chips to human brains

Paraphrasing Landauer, real world computation involves thermodynamic costs [3] — and those costs can be massive. Estimates are that computing accounted for ∼ 4% of total US energy usage in 2018 — and all of that energy ultimately goes into waste heat.

Such major thermodynamic costs arise in naturally occurring, biological computers as well as the artificial computers we have constructed. Indeed, the comparison of thermodynamic costs in artificial and biological computers can be fascinating. For example, a large fraction of the energy budget of a cell goes to translating RNAs into sequences of amino acids (i.e., proteins) in the cell’s ribosome. The thermodynamic efficiency of this computation — the amount of heat produced by a ribosome per elementary operation — is many orders of magnitude superior to the thermodynamic efficiency of our current artificial computers [4]. Are there “tricks” that cells use to reduce their costs of computation that we could exploit in our artificial computers?

More speculatively, the human brain consumes ∼ 10 − 20% of all the calories a human being requires to live [5]. The need to continually gather all those extra calories was a massive evolutionary fitness cost that our ancestors faced. Does this fitness cost explain why human levels of intelligence (i.e., of neurobiological computation) are so rare in the history of life on earth?

These examples involving both artificial and biological computers hint at the deep connections between computation and statistical physics. Indeed, the relationship between computation and statistical physics has arguably been a major focus of physics research at least since Maxwell’s demon was introduced.
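To fix the scale of these costs before returning to the history, the following minimal sketch evaluates the kT ln[2] erasure bound (quoted in the abstract and discussed in the next paragraph) numerically. The room-temperature bath at T = 300 K and the electron-volt conversion are illustrative assumptions, not values from the paper.

```python
import math

# Minimal sketch: the Landauer bound kT ln[2] at an assumed room
# temperature of 300 K. The constants are exact SI values.
k = 1.380649e-23                 # Boltzmann's constant, J/K
T = 300.0                        # bath temperature, K (illustrative assumption)

bound_J = k * T * math.log(2)    # minimal heat dissipated per bit erased
bound_eV = bound_J / 1.602176634e-19

print(f"Landauer bound at {T:.0f} K: {bound_J:.3e} J (~{bound_eV:.3f} eV) per bit")
```

At 300 K the bound comes to roughly 2.9 × 10⁻²¹ J (about 0.018 eV) per erased bit, several orders of magnitude below the heat actually dissipated per logical operation in present-day hardware.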
A particularly important advance was made in the middle of the last century when Landauer (and then Bennett) famously argued that erasing a bit in a computation results in an entropic cost of at least kT ln[2], where k is Boltzmann’s constant and T is the temperature of a heat bath connected to the system.

This early work was grounded in the tools of equilibrium statistical physics. However, computers are highly nonequilibrium systems. As a result, this early work was necessarily semiformal, and there were many questions it could not address.

On the other hand, in the last few decades there have been major breakthroughs in nonequilibrium statistical physics. Some of the most important of these breakthroughs now allow us to analyze the thermodynamic behavior of any system that can be modeled with a time-inhomogeneous continuous-time Markov chain (CTMC), even if it is open, arbitrarily far from equilibrium, and undergoing arbitrary external driving. In particular, we can now decompose the time-derivative of the (Shannon) entropy of such a system into an “entropy production rate”, quantifying the rate of change of the total entropy of the system and its environment, minus an “entropy flow rate”, quantifying the rate of entropy exiting the system into its environment. Crucially, the entropy production rate is non-negative, regardless of the CTMC. So if it ever adds a nonzero amount to system entropy, its subsequent evolution cannot undo that increase in entropy. (For this reason it is sometimes referred to as irreversible entropy production.) This is the modern understanding of the second law of thermodynamics, for systems undergoing Markovian dynamics. In contrast to entropy production, entropy flow can be negative or positive. So even if entropy flow increases system entropy during one time interval (i.e., entropy flows into the system), often its subsequent evolution can undo that increase.

This decomposition of the rate of change of entropy is the starting point for the field of “stochastic thermodynamics”. However, the decomposition actually holds even in scenarios that have nothing to do with thermodynamics, where there are no heat baths, work reservoirs, or the like coupled to the system. Accordingly, I will refer to the dynamics of the entropy production rate, entropy flow rate, etc., as the “entropy dynamics” of the physical system. In addition, integrating this decomposition over a non-zero time interval, we see that the total change in entropy of the system equals the total entropy production minus the total entropy flow.
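To make this decomposition concrete, the sketch below integrates the master equation for a hypothetical driven two-state system and tracks both rates; the sinusoidal rate protocol is an arbitrary illustrative choice, not an example from the paper. Writing W_ij(t) for the rate of jumping from state j to state i, and J_ij = W_ij p_j − W_ji p_i for the probability currents, the entropy production rate is (1/2) Σ_ij J_ij ln[(W_ij p_j)/(W_ji p_i)], the entropy flow rate is (1/2) Σ_ij J_ij ln[W_ij/W_ji], and the time-derivative of the Shannon entropy is their difference.

```python
import numpy as np

# Minimal sketch: entropy dynamics of a two-state system evolving under a
# time-inhomogeneous CTMC. W[i, j] is the rate of jumping from j to i;
# the driving protocol below is an illustrative assumption.

def rates(t):
    k01 = 1.0 + 0.5 * np.sin(t)      # rate of jumping 1 -> 0
    k10 = 2.0                        # rate of jumping 0 -> 1
    return np.array([[0.0, k01],
                     [k10, 0.0]])

def shannon_entropy(p):
    return -np.sum(p * np.log(p))

dt, t_final = 1e-4, 5.0
p = np.array([0.9, 0.1])             # initial distribution over the two states
S_initial = shannon_entropy(p)
EP = EF = 0.0                        # integrated entropy production / flow

for step in range(int(t_final / dt)):
    W = rates(step * dt)
    # Probability currents J[i, j] = W[i, j] p[j] - W[j, i] p[i]
    J = W * p - (W * p).T
    i, j = np.nonzero(W)             # ordered pairs with allowed transitions
    ep_rate = 0.5 * np.sum(J[i, j] * np.log((W[i, j] * p[j]) / (W[j, i] * p[i])))
    ef_rate = 0.5 * np.sum(J[i, j] * np.log(W[i, j] / W[j, i]))
    assert ep_rate >= -1e-12         # second law: EP rate is non-negative
    EP += ep_rate * dt
    EF += ef_rate * dt
    p = p + dt * J.sum(axis=1)       # master equation: dp_i/dt = sum_j J_ij

# Integrated decomposition: Delta S = EP - EF (up to integration error)
print(f"Delta S = {shannon_entropy(p) - S_initial:.6f}, EP - EF = {EP - EF:.6f}")
```

Along the trajectory the entropy production rate stays non-negative, as the second law requires, while the entropy flow rate is free to take either sign; the final printout checks that the integrated totals satisfy ΔS = EP − EF up to the Euler integration error.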