Recommended publications
- The Future of AI: Opportunities and Challenges
Puerto Rico, January 2-5, 2015. Ajay Agrawal is the Peter Munk Professor of Entrepreneurship at the University of Toronto's Rotman School of Management, Research Associate at the National Bureau of Economic Research in Cambridge, MA, Founder of the Creative Destruction Lab, and Co-founder of The Next 36. His research focuses on the economics of science and innovation. He serves on the editorial boards of Management Science, the Journal of Urban Economics, and The Strategic Management Journal. Anthony Aguirre has worked on a wide variety of topics in theoretical cosmology, ranging from intergalactic dust to galaxy formation to gravity physics to the large-scale structure of inflationary universes and the arrow of time. He also has a strong interest in science outreach and has appeared in numerous science documentaries. He is a co-founder of the Foundational Questions Institute and the Future of Life Institute. Geoff Anders is the founder of Leverage Research, a research institute that studies psychology, cognitive enhancement, scientific methodology, and the impact of technology on society. He is also a member of the Effective Altruism movement, which is dedicated to improving the world in the most effective ways. Like many members of that movement, Geoff is deeply interested in the potential impact of new technologies, especially artificial intelligence. Blaise Agüera y Arcas works on machine learning at Google. Previously a Distinguished Engineer at Microsoft, he worked on augmented reality, mapping, wearable computing, and natural user interfaces. He was the co-creator of Photosynth, software that assembles photos into 3D environments.
- DeepMind Lab
Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg and Stig Petersen. December 13, 2016.
Abstract: DeepMind Lab is a first-person 3D game platform designed for research and development of general artificial intelligence and machine learning systems. DeepMind Lab can be used to study how autonomous artificial agents may learn complex tasks in large, partially observed, and visually diverse worlds. DeepMind Lab has a simple and flexible API enabling creative task-designs and novel AI-designs to be explored and quickly iterated upon. It is powered by a fast and widely recognised game engine, and tailored for effective use by the research community.
Introduction: General intelligence measures an agent's ability to achieve goals in a wide range of environments (Legg and Hutter, 2007). The only known examples of general-purpose intelligence arose from a combination of evolution, development, and learning, grounded in the physics of the real world and the sensory apparatus of animals. An unknown, but potentially large, fraction of animal and human intelligence is a direct consequence of the perceptual and physical richness of our environment, and is unlikely to arise without it (e.g. Locke, 1690; Hume, 1739). One option is to directly study embodied intelligence in the real world itself using robots (e.g. Brooks, 1990; Metta et al., 2008). However, progress on that front will always be hindered by the too-slow passing of real time and the expense of the physical hardware involved.
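The "simple and flexible API" the abstract mentions drives a conventional observe-act loop. Below is a minimal random-agent sketch written against the open-source deepmind_lab Python bindings; the level name, observation key, and frame-repeat value are illustrative assumptions, so consult the repository docs before treating any call as authoritative:

```python
# Minimal random-agent loop sketched against the open-source
# deepmind_lab Python bindings. Level name, observation key, and
# frame repeat are illustrative assumptions, not a verified recipe.
import numpy as np
import deepmind_lab  # built and installed per the DeepMind Lab repository docs

env = deepmind_lab.Lab(
    'seekavoid_arena_01',            # assumed example level
    ['RGB_INTERLEAVED'],             # request pixel observations
    config={'width': '84', 'height': '84'})  # config values are strings

env.reset()
episode_return = 0.0
while env.is_running():
    frame = env.observations()['RGB_INTERLEAVED']  # HxWx3 uint8 array
    # Each action is a vector of integer intents (look, strafe, move, ...);
    # sample uniformly within the bounds reported by the environment.
    spec = env.action_spec()
    action = np.array(
        [np.random.randint(a['min'], a['max'] + 1) for a in spec],
        dtype=np.intc)
    episode_return += env.step(action, num_steps=4)  # repeat action for 4 frames

print('episode return:', episode_return)
```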
- Intelligence Explosion FAQ
Luke Muehlhauser, Machine Intelligence Research Institute (MIRI).
Abstract: The Machine Intelligence Research Institute is one of the leading research institutes on intelligence explosion. Below are short answers to common questions we receive.
Citation: Muehlhauser, Luke. 2013. "Intelligence Explosion FAQ." First published 2011 as "Singularity FAQ." Machine Intelligence Research Institute, Berkeley, CA.
Contents: 1. Basics (1.1 What is an intelligence explosion?); 2. How Likely Is an Intelligence Explosion? (2.1 How is "intelligence" defined? 2.2 What is greater-than-human intelligence? 2.3 What is whole-brain emulation? 2.4 What is biological cognitive enhancement? 2.5 What are brain-computer interfaces? 2.6 How could general intelligence be programmed into a machine? 2.7 What is superintelligence? 2.8 When will the intelligence explosion happen? 2.9 Might an intelligence explosion never occur?); 3. Consequences of an Intelligence Explosion (3.1 Why would great intelligence produce great power? 3.2 How could an intelligence explosion be useful? 3.3 How might an intelligence explosion be dangerous?); 4. Friendly AI (4.1 What is Friendly AI? 4.2 What can we expect the motivations of a superintelligent machine to be? 4.3 Can't we just keep the superintelligence in a box, with no access to the Internet? 4.4 Can't we just program the superintelligence not to harm us? 4.5 Can we program the superintelligence to maximize human pleasure or desire satisfaction?)
- A Timeline of Artificial Intelligence
A Timeline of Artificial Intelligence by piero scaruffi | www.scaruffi.com All of these events are explained in my book "Intelligence is not Artificial". TM, ®, Copyright © 1996-2019 Piero Scaruffi except pictures. All rights reserved. 1960: Henry Kelley and Arthur Bryson invent backpropagation 1960: Donald Michie's reinforcement-learning system MENACE 1960: Hilary Putnam's Computational Functionalism ("Minds and Machines") 1960: The backpropagation algorithm 1961: Melvin Maron's "Automatic Indexing" 1961: Karl Steinbuch's neural network Lernmatrix 1961: Leonard Scheer's and John Chubbuck's Mod I (1962) and Mod II (1964) 1961: Space General Corporation's lunar explorer 1962: IBM's "Shoebox" for speech recognition 1962: AMF's "VersaTran" robot 1963: John McCarthy moves to Stanford and founds the Stanford Artificial Intelligence Laboratory (SAIL) 1963: Lawrence Roberts' "Machine Perception of Three Dimensional Solids", the birth of computer vision 1963: Jim Slagle writes a program for symbolic integration (calculus) 1963: Edward Feigenbaum's and Julian Feldman's "Computers and Thought" 1963: Vladimir Vapnik's "support-vector networks" (SVN) 1964: Peter Toma demonstrates the machine-translation system Systran 1965: Irving John Good (Isidore Jacob Gudak) speculates about "ultraintelligent machines" (the "singularity") 1965: The Case Institute of Technology builds the first computer-controlled robotic arm 1965: Ed Feigenbaum's Dendral expert system 1965: Gordon Moore's Law of exponential progress in integrated circuits ("Cramming more components -
- Artificial General Intelligence: Concept, State of the Art, and Future Prospects
Journal of Artificial General Intelligence 5(1) 1-46, 2014. Submitted 2013-2-12; accepted 2014-3-15. DOI: 10.2478/jagi-2014-0001. Ben Goertzel, [email protected], OpenCog Foundation, G/F, 51C Lung Mei Village, Tai Po, N.T., Hong Kong. Editor: Tsvi Achler.
Abstract: In recent years a broad community of researchers has emerged, focusing on the original ambitious goals of the AI field: the creation and study of software or hardware systems with general intelligence comparable to, and ultimately perhaps greater than, that of human beings. This paper surveys this diverse community and its progress. Approaches to defining the concept of Artificial General Intelligence (AGI) are reviewed, including mathematical formalisms, engineering, and biology-inspired perspectives. The spectrum of designs for AGI systems includes systems with symbolic, emergentist, hybrid, and universalist characteristics. Metrics for general intelligence are evaluated, with the conclusion that, although metrics for assessing the achievement of human-level AGI may be relatively straightforward (e.g. the Turing Test, or a robot that can graduate from elementary school or university), metrics for assessing partial progress remain more controversial and problematic.
Keywords: AGI, general intelligence, cognitive science.
1. Introduction: How can we best conceptualize and approach the original problem regarding which the AI field was founded: the creation of thinking machines with general intelligence comparable to, or greater than, that of human beings? The standard approach of the AI discipline (Russell and Norvig, 2010), as it has evolved in the six decades since the field's founding, views artificial intelligence largely in terms of the pursuit of discrete capabilities or specific practical tasks.
- AlphaGo Program, What New Ideas from Sedol Do Foresee?
Pickford Film Center, DOCTOBER Documentary Film Festival, 2017. Film Discussion Guide: ALPHAGO (2017), a documentary film by Greg Kohs. https://www.mountainfilm.org/festival/personalities/greg-kohs
THE STORY: "The ancient Chinese board game Go has long been considered the holy grail for artificial intelligence. Its simple rules but near-infinite number of outcomes make it exponentially more complex than chess. Mastery of the game by computers was considered by expert players and the AI community alike to be out of reach for at least another decade. Yet in 2016, Google's DeepMind team announced that they would be taking on Lee Sedol, the world's most elite Go champion. The match was set for a weeklong tournament in Seoul in early 2016, and there was more at stake than the million-dollar prize. Director Greg Kohs' absorbing documentary chronicles Google's DeepMind team as it prepares to test the limits of its rapidly evolving AI technology. The film pits machine against man, and reveals as much about the workings of the human mind as it does the future of AI." (Ian Hollander)
THE CAST:
Frank Lantz, Director, New York University Game Center
Demis Hassabis, Co-Founder and CEO, DeepMind
Shane Legg, Co-Founder and Chief Scientist, DeepMind
David Silver, Lead Researcher, DeepMind
Lee Sedol, 9p World Go Professional
Aja Huang, Lead Programmer, DeepMind
Fan Hui, 2p Go Professional, European Go Champion
Janice Kim, 3p Go Professional
Andrew Jackson, Vice President Operations, American Go Association
Yuan Zhou, 7p Go Professional
Julian Schrittwieser, Software Engineer, DeepMind
DOCUMENTARY STYLE: Bill Nichols, a film educator, has written many texts on documentary styles, and his analysis is distilled in the article cited here.
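The guide's claim that Go is "exponentially more complex than chess" can be made concrete with the standard back-of-the-envelope game-tree estimate (branching factor raised to typical game length). The figures below are commonly cited approximations from the game-AI literature, not exact counts:

```python
# Back-of-the-envelope game-tree sizes: branching_factor ** game_length.
# Branching factors and game lengths are commonly cited approximations
# from the game-AI literature, not exact counts.
import math

def log10_tree_size(branching_factor: float, plies: int) -> float:
    """log10 of the naive game-tree size b**d, computed without overflow."""
    return plies * math.log10(branching_factor)

chess = log10_tree_size(35, 80)   # ~35 moves per position, ~80 plies
go = log10_tree_size(250, 150)    # ~250 moves per position, ~150 plies
print(f"chess: ~10^{chess:.0f} nodes in the naive game tree")
print(f"go:    ~10^{go:.0f} nodes in the naive game tree")
```

Under these rough assumptions the naive Go tree is on the order of 10^359 nodes versus about 10^123 for chess, which is why exhaustive search is hopeless and learned evaluation was needed.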
- Algorithmic Information Theory (Scholarpedia)
Marcus Hutter (2007), Scholarpedia, 2(3):2519. doi:10.4249/scholarpedia.2519, revision #90953. Dr. Marcus Hutter, Australian National University, Canberra, Australia.
This article is a brief guide to the field of algorithmic information theory (AIT), its underlying philosophy, and the most important concepts. AIT arises by mixing information theory and computation theory to obtain an objective and absolute notion of information in an individual object, and in so doing gives rise to an objective and robust notion of randomness of individual objects. This is in contrast to classical information theory, which is based on random variables and communication and has no bearing on information and randomness of individual objects. After a brief overview, the major subfields, applications, history, and a map of the field are presented.
Contents: 1. Overview; 2. Algorithmic "Kolmogorov" Complexity (AC); 3. Algorithmic "Solomonoff" Probability (AP); 4. Universal "Levin" Search (US); 5. Algorithmic "Martin-Löf" Randomness (AR); 6. Applications of AIT (Philosophy, Practice, Science); 7. History, References, Notation, Nomenclature; 8. Map of the Field (AC, AP, US, AR, Applications); 9. References; 10. External Links; 11. See Also.
Overview: Algorithmic information theory (AIT) is the information theory of individual objects, using computer science, and concerns itself with the relationship between computation, information, and randomness. The information content or complexity of an object can be measured by the length of its shortest description.
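Kolmogorov complexity itself is uncomputable, so in practice the "length of the shortest description" is often bounded from above with an off-the-shelf compressor. The sketch below, which assumes Python's standard zlib as a crude stand-in for a universal description language, illustrates the idea:

```python
# Upper-bound the "shortest description length" of an object with a
# general-purpose compressor; true Kolmogorov complexity is uncomputable,
# so any real compressor only ever gives an upper bound.
import os
import zlib

def description_length(data: bytes) -> int:
    """Bytes used by zlib at maximum effort: a crude, computable proxy."""
    return len(zlib.compress(data, 9))

regular = b"ab" * 500           # produced by a tiny program, so low complexity
random_like = os.urandom(1000)  # incompressible with overwhelming probability

print("regular:    ", len(regular), "->", description_length(regular), "bytes")
print("random-like:", len(random_like), "->", description_length(random_like), "bytes")
```

The regular string compresses to a few dozen bytes while the random one barely shrinks, mirroring AIT's notion that random objects are exactly those with no description shorter than themselves.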
- Universal Artificial Intelligence - Evaluation and Benchmarks
By Pallavi Mishra. Submitted to the faculty in partial fulfillment of the requirements for the degree of Master of Science in Engineering and Management at the Massachusetts Institute of Technology, June 2016. ©2016 Pallavi Mishra. All rights reserved. The author hereby grants to MIT permission to reproduce and to distribute publicly paper and electronic copies of this thesis document in whole or in part in any medium now known or hereafter created.
Signature of Author: [signature redacted] Pallavi Mishra, System Design and Management Program, 18 May 2016.
Certified by: [signature redacted] James M. Utterback, David J. McGrath Jr. (1959) Professor of Management and Innovation, Professor of Engineering Systems, Thesis Supervisor.
Accepted by: [signature redacted] Patrick Hale, Director, System Design and Management Program.
Reader: Sebastien Helie, Ph.D., Assistant Professor, Department of Psychological Sciences, Purdue University.
Contents: 1. Preface; 2. Abstract; 3. Acknowledgment; 4. Introduction; 5. Universal Artificial Intelligence (5.1 Theory: 5.1.1 Universal Artificial Intelligence Agent; 5.2 Modeling of Universal AI: 5.2.1 Agents, 5.2.2 Environments, 5.2.3 Reward Functions); 6. Theoretical Features to Practical Implementation (6.1 Computational Properties of Agents and Environments: 6.1.1 Environments, 6.1.2 Agents; 6.2 Practical benchmark requirements: 6.2.1 Finite set of randomly selected environments, 6.2.2 Environments should be reward-summable ...)
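Chapter 6's benchmark requirements (a finite set of randomly selected environments with summable rewards) echo practical approximations of the Legg-Hutter universal intelligence measure, where an agent's average reward in each environment is weighted by that environment's simplicity. A minimal sketch of such an evaluation loop follows; the toy bandit environments, the arm-count stand-in for description length, and the agent interface are all illustrative assumptions, not the thesis's actual benchmark:

```python
# Sketch: score an agent over a finite, randomly selected set of
# environments, weighting each average return by environment simplicity
# (2 ** -complexity), in the spirit of the Legg-Hutter measure.
# All interfaces here are illustrative assumptions.
import random
from typing import Callable, List

class BanditEnv:
    """Toy reward-summable environment: a k-armed Gaussian bandit."""
    def __init__(self, payouts: List[float]):
        self.payouts = payouts
        self.complexity = len(payouts)  # stand-in for description length K(env)

    def step(self, action: int) -> float:
        return self.payouts[action] + random.gauss(0.0, 0.1)

def evaluate(agent: Callable[[int], int], envs: List[BanditEnv],
             steps: int = 100) -> float:
    """Complexity-weighted average return across the sampled environments."""
    score = 0.0
    for env in envs:
        total = sum(env.step(agent(len(env.payouts))) for _ in range(steps))
        score += (2.0 ** -env.complexity) * (total / steps)
    return score

random_agent = lambda k: random.randrange(k)   # uniform over arms
fixed_agent = lambda k: 0                      # always pulls arm 0

envs = [BanditEnv([random.random() for _ in range(k)]) for k in (2, 3, 5)]
print("random agent:", evaluate(random_agent, envs))
print("fixed agent: ", evaluate(fixed_agent, envs))
```

Summing simplicity-weighted rewards is exactly why the benchmark needs environments to be reward-summable: without a common, bounded reward scale, the weighted sum across environments would be meaningless.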
- On Controllability of Artificial Intelligence
Roman V. Yampolskiy, Speed School of Engineering, Computer Science and Engineering, University of Louisville. [email protected], @romanyam. July 18, 2020.
Abstract: The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments, as well as supporting evidence from multiple domains, indicating that advanced AI cannot be fully controlled. The consequences of the uncontrollability of AI are discussed with respect to the future of humanity, research on AI, and AI safety and security. This paper can serve as a comprehensive reference for the topic of uncontrollability.
Keywords: AI Safety and Security, Control Problem, Safer AI, Uncontrollability, Unverifiability, X-Risk.
"One thing is for sure: We will not control it." - Elon Musk
"Dave: Open the pod bay doors, HAL. HAL: I'm sorry, Dave. I'm afraid I can't do that." - HAL, 2001
"[F]inding a solution to the AI control problem is an important task; in Bostrom's sonorous words, 'the essential task of our age.'" - Stuart Russell
"[I]t seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. ... At some stage therefore we should have to ..."
- Scalable agent alignment via reward modeling: a research direction
Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, Shane Legg (DeepMind; David Krueger also with Mila).
Abstract: One obstacle to applying reinforcement learning algorithms to real-world problems is the lack of suitable reward functions. Designing such reward functions is difficult in part because the user only has an implicit understanding of the task objective. This gives rise to the agent alignment problem: how do we create agents that behave in accordance with the user's intentions? We outline a high-level research direction to solve the agent alignment problem centered around reward modeling: learning a reward function from interaction with the user and optimizing the learned reward function with reinforcement learning. We discuss the key challenges we expect to face when scaling reward modeling to complex and general domains, concrete approaches to mitigate these challenges, and ways to establish trust in the resulting agents.
1 Introduction: Games are a useful benchmark for research because progress is easily measurable. Atari games come with a score function that captures how well the agent is playing the game; board games or competitive multiplayer games such as Dota 2 and Starcraft II have a clear winner or loser at the end of the game. This helps us determine empirically which algorithmic and architectural improvements work best. However, the ultimate goal of machine learning (ML) research is to go beyond games and improve human lives. To achieve this we need ML to assist us in real-world domains, ranging from simple tasks like ordering food or answering emails to complex tasks like software engineering or running a business.
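The core loop the paper proposes (learn a reward function from user feedback, then optimize the agent against it with RL) can be made concrete with a small sketch. The version below assumes pairwise preference feedback fit with a Bradley-Terry-style logistic loss; the linear features, simulated user, and greedy "policy improvement" step are all illustrative assumptions, not the paper's implementation:

```python
# Sketch of the reward-modeling loop: (1) collect pairwise preferences,
# (2) fit a reward model with a Bradley-Terry logistic loss, (3) optimize
# behavior against the learned reward. Illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
true_w = rng.normal(size=DIM)  # hidden parameters of the user's intent

def user_prefers(a: np.ndarray, b: np.ndarray) -> float:
    """Simulated user feedback: 1.0 if trajectory a is preferred over b."""
    return 1.0 if true_w @ a > true_w @ b else 0.0

# (1) Preference dataset over random trajectory feature vectors.
pairs = [(rng.normal(size=DIM), rng.normal(size=DIM)) for _ in range(500)]
labels = [user_prefers(a, b) for a, b in pairs]

# (2) Fit a linear reward r(x) = w @ x with P(a > b) = sigmoid(r(a) - r(b)).
w = np.zeros(DIM)
for _ in range(200):
    grad = np.zeros(DIM)
    for (a, b), y in zip(pairs, labels):
        p = 1.0 / (1.0 + np.exp(-(w @ a - w @ b)))
        grad += (p - y) * (a - b)   # gradient of the logistic loss
    w -= 0.1 * grad / len(pairs)

# (3) Stand-in for the RL step: pick the candidate trajectory that the
# learned reward model scores highest.
candidates = rng.normal(size=(1000, DIM))
best = candidates[np.argmax(candidates @ w)]
print("learned reward:", float(best @ w), " true reward:", float(true_w @ best))
```

The point of the sketch is the separation of concerns the paper argues for: the user's intent enters only through comparisons, the reward model turns those comparisons into a scalar objective, and the policy optimizer never sees the true objective directly.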