On the Differences Between Human and Machine Intelligence


Roman V. Yampolskiy
Computer Science and Engineering, University of Louisville
[email protected]

Abstract

The terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research: the creation of a machine capable of achieving goals in a wide range of environments. However, the widespread implicit assumption of equivalence between the capabilities of AGI and HLAI appears to be unjustified, as humans are not general intelligences. In this paper, we will prove this distinction.

1 Introduction¹

Imagine that tomorrow a prominent technology company announces that it has successfully created an Artificial Intelligence (AI) and offers for you to test it out. You decide to start by testing the newly developed AI for some very basic abilities, such as multiplying 317 by 913 and memorizing your phone number. To your surprise, the system fails on both tasks. When you question the system's creators, you are told that their AI is a human-level artificial intelligence (HLAI), and since most people cannot perform those tasks, neither can their AI. In fact, you are told, many people can't even compute 13 × 17, or remember the name of a person they just met, or recognize a coworker outside of the office, or name what they had for breakfast last Tuesday². The list of such limitations is quite significant and is the subject of study in the field of Artificial Stupidity [Trazzi and Yampolskiy, 2018; Trazzi and Yampolskiy, 2020].

The terms Artificial General Intelligence (AGI) [Goertzel et al., 2015] and Human-Level Artificial Intelligence (HLAI) [Baum et al., 2011] have been used interchangeably (see [Barrat, 2013], or "(AGI) is the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can." [Anonymous, Retrieved July 3, 2020]) to refer to the Holy Grail of Artificial Intelligence (AI) research: the creation of a machine capable of achieving goals in a wide range of environments [Legg and Hutter, 2007a]. However, the widespread implicit assumption of equivalence between the capabilities of AGI and HLAI appears to be unjustified, as humans are not general intelligences. In this paper, we will prove this distinction.

Others use slightly different nomenclature with respect to general intelligence, but arrive at similar conclusions: "Local generalization, or 'robustness': … 'adaptation to known unknowns within a single task or well-defined set of tasks'. … Broad generalization, or 'flexibility': 'adaptation to unknown unknowns across a broad category of related tasks'. … Extreme generalization: human-centric extreme generalization, which is the specific case where the scope considered is the space of tasks and domains that fit within the human experience. We … refer to 'human-centric extreme generalization' as 'generality'. Importantly, as we deliberately define generality here by using human cognition as a reference frame …, it is only 'general' in a limited sense. … To this list, we could, theoretically, add one more entry: 'universality', which would extend 'generality' beyond the scope of task domains relevant to humans, to any task that could be practically tackled within our universe (note that this is different from 'any task at all' as understood in the assumptions of the No Free Lunch theorem [Wolpert and Macready, 1997; Wolpert, 2012])." [Chollet, 2019].

¹ Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
² Some people can do that and more; for example, 100,000 digits of π have been memorized using special mnemonics.
2 Prior work

We call some problems 'easy' because they come naturally to us, like understanding speech or walking, and we call other problems 'hard', like playing Go or the violin, because those are not human universals and require a lot of talent and effort [Yampolskiy, 2012]. We ignore domains that are 'impossible' for humans to master, since we mostly don't even know about them or see them as important. As LeCun puts it: "[W]e can't imagine tasks that are outside of our comprehension, right, so we think, we think we are general, because we're general of all the things that we can apprehend, but there is a huge world out there of things that we have no idea" [LeCun, August 31, 2019]. Others agree: "we might not even be aware of the type of cognitive abilities we score poorly on." [Barnett, December 23, 2019].

This is most obvious in how we test for intelligence. For example, the Turing Test [Turing, 1950], by definition, doesn't test for universal general intelligence, only for human-level intelligence in human domains of expertise. Like a drunkard searching for his keys under the streetlight because it is easier to look for them there, we fall for the Streetlight effect, an observational bias, by searching for intelligence only in domains we can easily comprehend [Yampolskiy, 2019]. "The g factor, by definition, represents the single cognitive ability common to success across all intelligence tests, emerging from applying factor analysis to test results across a diversity of tests and individuals. But intelligence tests, by construction, only encompass tasks that humans can perform – tasks that are immediately recognizable and understandable by humans (anthropocentric bias), since including tasks that humans couldn't perform would be pointless. Further, psychometrics establishes measurement validity by demonstrating predictiveness with regard to activities that humans value (e.g. scholastic success): the very idea of a 'valid' measure of intelligence only makes sense within the frame of reference of human values." [Chollet, 2019].

Moravec further elaborates the difference between future machines and humans: "Computers are universal machines, their potential extends uniformly over a boundless expanse of tasks. Human potentials, on the other hand, are strong in areas long important for survival, but weak in things far removed. Imagine a 'landscape of human competence,' having lowlands with labels like 'arithmetic' and 'rote memorization,' foothills like 'theorem proving' and 'chess playing,' and high mountain peaks labeled 'locomotion,' 'hand-eye coordination' and 'social interaction.' Advancing computer performance is like water slowly flooding the landscape. A half century ago it began to drown the lowlands, driving out human calculators and record clerks, but leaving most of us dry. Now the flood has reached the foothills, and our outposts there are contemplating retreat. We feel safe on our peaks, but, at the present rate, those too will be submerged within another half century." [Moravec, 1998].

Chollet writes: "How general is human intelligence? The No Free Lunch theorem [Wolpert and Macready, 1997; Wolpert, 2012] teaches us that any two optimization algorithms (including human intelligence) are equivalent when their performance is averaged across every possible problem, i.e. algorithms should be tailored to their target problem in order to achieve better-than-random performance. However, what is meant in this context by 'every possible problem' … [If there is] such a thing as universal intelligence, and if human intelligence is an implementation of it, then this algorithm of universal intelligence should be the end goal of our field, and reverse-engineering the human brain could be the shortest path to reach it. It would make our field close-ended: a riddle to be solved. If, on the other hand, human intelligence is a broad but ad-hoc cognitive ability that generalizes to human-relevant tasks but not much else, this implies that AI is an open-ended, fundamentally anthropocentric pursuit, tied to a specific scope of applicability." [Chollet, 2019].
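The equivalence Chollet invokes can be checked directly at toy scale. Below is a minimal, self-contained Python sketch (our illustration, not code from any of the cited works; the 4-point search space and the two particular strategies are arbitrary choices). It enumerates every possible function f: X → Y and shows that a feedback-driven search strategy and a blind left-to-right sweep achieve exactly the same average performance.

```python
from itertools import product

X = range(4)  # a tiny search space
Y = range(3)  # possible objective values; all |Y|^|X| = 81 functions exist

def sweep(f):
    """Blind strategy: query points left to right, ignoring feedback."""
    return [f[x] for x in X]

def adaptive(f):
    """Feedback-driven strategy: a poor value sends us to the far end."""
    unseen, trace, nxt = set(X), [], 0
    while True:
        unseen.discard(nxt)
        trace.append(f[nxt])
        if not unseen:
            return trace
        nxt = max(unseen) if trace[-1] == 0 else min(unseen)

for k in range(1, len(X) + 1):
    for strategy in (sweep, adaptive):
        # best value found within the first k queries, averaged over ALL f
        scores = [max(strategy(dict(zip(X, ys)))[:k])
                  for ys in product(Y, repeat=len(X))]
        print(k, strategy.__name__, round(sum(scores) / len(scores), 4))
```

Both strategies print identical averages for every query budget k; any advantage an algorithm enjoys in practice therefore comes from the non-uniform distribution of real-world problems, which is exactly the anthropocentric loophole discussed above.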
Humans have general capability only in those human-accessible domains, and likewise artificial neural networks, inspired by human brain architecture, do unreasonably well in the same domains. Recent work by Tegmark et al. shows that deep neural networks would not perform as well in randomly generated domains as they do in the domains humans consider important, because the latter map well onto the physical properties of our universe: "We have shown that the success of deep and cheap (low-parameter-count) learning depends not only on mathematics but also on physics, which favors certain classes of exceptionally simple probability distributions that deep learning is uniquely suited to model. We argued that the success of shallow neural networks hinges on symmetry, locality, and polynomial log-probability in data from or inspired by the natural world, which favors sparse low-order polynomial Hamiltonians that can be efficiently approximated." [Lin et al., 2017].

3 Humans are not AGI

An agent is general (universal [Hutter, 2004]) if it can learn anything another agent can learn. We can think of a true AGI agent as a superset of all possible narrow AIs (NAIs), including the capacity to solve AI-Complete problems [Yampolskiy, 2013]. Some agents have limited domain generality, meaning they are general, but not in all possible domains. The number of domains in which they are general may still be Dedekind-infinite, but it is a strict subset of the domains in which an AGI is capable of learning. For an AGI, the domain of performance is any efficiently learnable capability, while humans command a smaller subset of competencies. Non-human animals in turn may have an even smaller repertoire of capabilities, but are nonetheless general within that subset. This means that humans can do things animals cannot, and an AGI will be able to do things no human can. If an AGI is restricted to the domains and capacity of human expertise, it is the same as HLAI.

Humans are also not all in the same set, as some are capable of greater generality (g factor [Jensen, 1998]) and can succeed in domains in which others cannot.
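The containment argument of this section can be restated as a toy model. The sketch below is illustrative only: the finite domain names are invented placeholders standing in for what the text argues are (possibly Dedekind-infinite) sets of learnable domains.

```python
# Each agent is modeled as the set of domains it can learn. Every agent is
# "general" within its own set; the sets themselves are strictly nested.
ANIMAL = frozenset({"locomotion", "foraging", "social_signaling"})
HUMAN = ANIMAL | {"language", "arithmetic", "chess", "go", "violin"}
AGI = HUMAN | {"randomly_generated_domains", "ai_complete_problems"}

def general_in(agent_domains, scope):
    """An agent is general over a scope iff it can learn every domain in it."""
    return scope <= agent_domains

assert general_in(ANIMAL, ANIMAL)  # general, but only within its own subset
assert general_in(HUMAN, ANIMAL) and not general_in(ANIMAL, HUMAN)
assert general_in(AGI, HUMAN) and not general_in(HUMAN, AGI)

HLAI = AGI & HUMAN    # an AGI restricted to the human scope...
assert HLAI == HUMAN  # ...collapses to HLAI, as argued above
print("all containment checks passed")
```

The final two lines mirror the claim in the text: intersecting an AGI's scope with the human scope yields exactly HLAI.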