Machine Theory of Mind
Total pages: 16; file type: PDF; size: 1,020 KB
Recommended publications
- KNOWLEDGE ACCORDING TO IDEALISM: Idealism as a Philosophy
KNOWLEDGE ACCORDING TO IDEALISM
Idealism as a philosophy had its greatest impact during the nineteenth century. It is a philosophical approach that has as its central tenet that ideas are the only true reality, the only thing worth knowing. In a search for truth, beauty, and justice that is enduring and everlasting, the focus is on conscious reasoning in the mind. The main tenet of idealism is that ideas and knowledge are the truest reality. Many things in the world change, but ideas and knowledge are enduring. Idealism was often referred to as "idea-ism." Idealists believe that ideas can change lives. The most important part of a person is the mind. It is to be nourished and developed.
Etymologically, its origin is the Greek idea, "form, shape," from weid-, also the origin of the "his" in his-tor, "wise, learned," underlying English "history." In Latin this root became videre, "to see," and related words. It is the same root as in Sanskrit veda, "knowledge," as in the Rig-Veda. The stem entered Germanic as witan, "know," seen in Modern German wissen, "to know," and in English "wisdom" and "twit," a shortened form of Middle English atwite, derived from æt, "at," + witen, "reproach."
In short, idealism is a philosophical position which adheres to the view that nothing exists except as it is an idea in the mind of man or the mind of God. The idealist believes that the universe has intelligence and a will; that all material things are explainable in terms of a mind standing behind them.
PHILOSOPHICAL RATIONALE OF IDEALISM
a) The Universe (Ontology or Metaphysics): To the idealist, the nature of the universe is mind; it is an idea.
- Is AI Intelligent, Really? Bruce D. Baker
Seattle Pacific University, Digital Commons @ SPU
SPU Works, Summer, August 23rd, 2019
Is AI intelligent, really? Bruce D. Baker, Seattle Pacific University
Follow this and additional works at: https://digitalcommons.spu.edu/works
Part of the Artificial Intelligence and Robotics Commons, Comparative Methodologies and Theories Commons, Epistemology Commons, Philosophy of Science Commons, and the Practical Theology Commons
Recommended Citation: Baker, Bruce D., "Is AI intelligent, really?" (2019). SPU Works. 140. https://digitalcommons.spu.edu/works/140
This Article is brought to you for free and open access by Digital Commons @ SPU. It has been accepted for inclusion in SPU Works by an authorized administrator of Digital Commons @ SPU.
Bruce Baker, August 23, 2019
Is AI intelligent, really? Good question. On the surface, it seems simple enough. Assign any standard you like as a demonstration of intelligence, and then ask whether you could (theoretically) set up an AI to perform it. Sure, it seems common sense that given sufficiently advanced technology you could set up a computer or a robot to do just about anything that you could define as being doable. But what does this prove? Have you proven the AI is really intelligent? Or have you merely shown that there exists a solution to your pre-determined puzzle? Hmmm. This is why AI futurist Max Tegmark emphasizes the difference between narrow (machine-like) and broad (human-like) intelligence.¹ And so the question remains: Can the AI be intelligent, really, in the same broad way its creator is? Why is this question so intractable? Because intelligence is not a monolithic property.
- Artificial Intelligence and National Security
Artificial Intelligence and National Security
June 2020
Artificial intelligence will have immense implications for national and international security, and AI's potential applications for defense and intelligence have been identified by the federal government as a major priority. There are, however, significant bureaucratic and technical challenges to the adoption and scaling of AI across U.S. defense and intelligence organizations. Moreover, other nations—particularly China and Russia—are also investing in military AI applications. As the strategic competition intensifies, the pressure to deploy untested and poorly understood systems to gain competitive advantage could lead to accidents, failures, and unintended escalation.
The Bipartisan Policy Center and Georgetown University's Center for Security and Emerging Technology (CSET), in consultation with Reps. Robin Kelly (D-IL) and Will Hurd (R-TX), have worked with government officials, industry representatives, civil society advocates, and academics to better understand the major AI-related national and economic security issues the country faces. This paper aims to bring more clarity to these challenges and provide actionable policy recommendations to help guide a U.S. national strategy for AI.
BPC's effort is primarily designed to complement the work done by the Obama and Trump administrations, including President Barack Obama's 2016 The National Artificial Intelligence Research and Development Strategic Plan [i], President Donald Trump's Executive Order 13859 announcing the American AI Initiative [ii], and the Office of Management and Budget's subsequent Guidance for Regulation of Artificial Intelligence Applications [iii]. The effort is also designed to further advance work done by Kelly and Hurd in their 2018 Committee on Oversight and Government Reform (Information Technology Subcommittee) white paper, Rise of the Machines: Artificial Intelligence and its Growing Impact on U.S. …
- Theory of Mind as Inverse Reinforcement Learning. Current
Available online at www.sciencedirect.com | ScienceDirect
Theory of mind as inverse reinforcement learning
Julian Jara-Ettinger¹,²
We review the idea that Theory of Mind—our ability to reason about other people's mental states—can be formalized as inverse reinforcement learning. Under this framework, expectations about how mental states produce behavior are captured in a reinforcement learning (RL) model. Predicting other people's actions is achieved by simulating a RL model with the hypothesized beliefs and desires, while mental-state inference is achieved by inverting this model. Although many advances in inverse reinforcement learning (IRL) did not have human Theory of Mind in mind, here we focus on what they reveal when conceptualized as cognitive theories. We discuss landmark successes of IRL, and key challenges in building a human-like Theory of Mind.
Addresses: ¹ Department of Psychology, Yale University, New Haven, CT, United States; ² Department of Computer Science, Yale University, New Haven, CT, United States
… think or want based on how they behave. Research in cognitive science suggests that we infer mental states by thinking of other people as utility maximizers: constantly acting to maximize the rewards they obtain while minimizing the costs that they incur [3–5]. Using this assumption, even children can infer other people's preferences [6,7], knowledge [8,9], and moral standing [10–12].
Theory of mind as inverse reinforcement learning
Computationally, our intuitions about how other minds work can be formalized using frameworks developed in a classical area of AI: model-based reinforcement learning (hereafter reinforcement learning or RL)¹. RL problems focus on how to combine a world model with a reward function to produce a sequence of actions, called a policy, that maximizes agents' rewards while minimizing their costs.
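As an illustration of the framework summarized in this abstract, here is a minimal sketch (in Python, not code from the paper) of Theory of Mind as inverse reinforcement learning: a forward model treats an observed agent as a softmax-rational utility maximizer under each candidate reward function, and mental-state inference inverts that model with Bayes' rule. The snack domain, the cost and reward values, and the candidate desires are invented purely for illustration.

```python
# Minimal sketch of Theory of Mind as inverse reinforcement learning.
# Hypothetical setup (not from the paper): an agent picks one of three snacks.
# Each candidate "desire" is a reward function over snacks; actions differ in cost.
# Forward model: softmax-rational utility maximization (reward minus cost).
# Inverse model: Bayesian inversion to infer the desire from an observed choice.
import numpy as np

actions = ["apple", "cookie", "carrot"]
costs = np.array([1.0, 3.0, 0.5])  # effort required to obtain each snack

# Candidate mental states: how much the agent rewards each snack.
candidate_rewards = {
    "likes_apples":  np.array([5.0, 1.0, 1.0]),
    "likes_cookies": np.array([1.0, 5.0, 1.0]),
    "likes_carrots": np.array([1.0, 1.0, 5.0]),
}

def action_probs(rewards, beta=1.0):
    """Forward model: probability of each action for a softmax-rational agent."""
    utilities = rewards - costs
    exp_u = np.exp(beta * utilities)
    return exp_u / exp_u.sum()

def infer_desire(observed_action, beta=1.0):
    """Inverse model: posterior over candidate desires given one observed choice."""
    a = actions.index(observed_action)
    prior = 1.0 / len(candidate_rewards)
    unnormalized = {
        desire: prior * action_probs(rewards, beta)[a]
        for desire, rewards in candidate_rewards.items()
    }
    z = sum(unnormalized.values())
    return {desire: p / z for desire, p in unnormalized.items()}

if __name__ == "__main__":
    # Watching the agent pay the high cost for the cookie is strong evidence
    # that it values cookies highly.
    print(infer_desire("cookie"))
```

The softmax temperature beta encodes how noisily rational the observed agent is assumed to be; the more costly the observed choice, the more sharply the posterior shifts toward the desire that would justify it.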
- Artificial Intelligence in the Context of Human Consciousness
Artificial Intelligence in the Context of Human Consciousness
Hannah DeFries
A Senior Thesis submitted in partial fulfillment of the requirements for graduation in the Honors Program, Liberty University, Spring 2019.
Acceptance of Senior Honors Thesis: This Senior Honors Thesis is accepted in partial fulfillment of the requirements for graduation from the Honors Program of Liberty University. Thesis Chair: Kyung K. Bae, Ph.D.; Committee Members: Jung-Uk Lim, Ph.D., and Mark Harris, Ph.D.; Honors Director: James H. Nutter, D.A.
Abstract
Artificial intelligence (AI) can be defined as the ability of a machine to learn and make decisions based on acquired information. AI's development has incited rampant public speculation regarding the singularity theory: a futuristic phase in which intelligent machines are capable of creating increasingly intelligent systems. Its implications, combined with the close relationship between humanity and its machines, make achieving an understanding of both natural and artificial intelligence imperative. Researchers are continuing to discover natural processes responsible for essential human skills like decision-making, understanding language, and performing multiple processes simultaneously. Artificial intelligence …
- Defense Primer: National and Defense Intelligence
Updated December 30, 2020
Defense Primer: National and Defense Intelligence
The Intelligence Community (IC) is charged with providing insight into actual or potential threats to the U.S. homeland, the American people, and national interests at home and abroad. It does so through the production of timely and apolitical products and services. Intelligence products and services result from the collection, processing, analysis, and evaluation of information for its significance to national security at the strategic, operational, and tactical levels. Consumers of intelligence include the President, National Security Council (NSC), designated personnel in executive branch departments and agencies, the military, Congress, and the law enforcement community.
The IC comprises 17 elements, two of which are independent, and 15 of which are component organizations of six separate departments of the federal government. Many IC elements and most intelligence funding reside within the Department of Defense (DOD).
… Intelligence Program (NIP) budget appropriations, which are a consolidation of appropriations for the ODNI; CIA; general defense; and national cryptologic, reconnaissance, geospatial, and other specialized intelligence programs. The NIP, therefore, provides funding for not only the ODNI, CIA and IC elements of the Departments of Homeland Security, Energy, the Treasury, Justice and State, but also, substantially, for the programs and activities of the intelligence agencies within the DOD, to include the NSA, NGA, DIA, and NRO.
Defense intelligence comprises the intelligence organizations and capabilities of the Joint Staff, the DIA, combatant command joint intelligence centers, and the military services that address strategic, operational or tactical requirements supporting military strategy, planning, and operations. Defense intelligence provides products and …
Artificial Intelligence: How Does It Work, Why Does It Matter, and What Can We Do About It?
Artificial intelligence: How does it work, why does it matter, and what can we do about it?
STUDY | Panel for the Future of Science and Technology
EPRS | European Parliamentary Research Service
Author: Philip Boucher, Scientific Foresight Unit (STOA)
PE 641.547 – June 2020
Artificial intelligence (AI) is probably the defining technology of the last decade, and perhaps also the next. The aim of this study is to support meaningful reflection and productive debate about AI by providing accessible information about the full range of current and speculative techniques and their associated impacts, and setting out a wide range of regulatory, technological and societal measures that could be mobilised in response.
This study has been drawn up by the Scientific Foresight Unit (STOA), within the Directorate-General for Parliamentary Research Services (EPRS) of the Secretariat of the European Parliament. To contact the publisher, please e-mail [email protected]
LINGUISTIC VERSION: Original: EN. Manuscript completed in June 2020.
DISCLAIMER AND COPYRIGHT: This document is prepared for, and addressed to, the Members and staff of the European Parliament as background material to assist them in their parliamentary work. The content of the document is the sole responsibility of its author(s) and any opinions expressed herein should not be taken to represent an official position of the Parliament. Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.
- The Role of Language in Theory of Mind Development
Top Lang Disorders, Vol. 34, No. 4, pp. 313–328. Copyright © 2014 Wolters Kluwer Health | Lippincott Williams & Wilkins.
The Role of Language in Theory of Mind Development
Jill G. de Villiers and Peter A. de Villiers
Various arguments are reviewed about the claim that language development is critically connected to the development of theory of mind. The different theories of how language could help in this process of development are explored. A brief account is provided of the controversy over the capacities of infants to read others' false beliefs. Then the empirical literature on the steps in theory of mind development is summarized, considering studies on both typically developing and various language-delayed children. Suggestions are made for intervention by speech language pathologists to enhance the child's access to understanding the minds of others. Key words: conversation, false beliefs, false complements, language, narrative, theory of mind
IN THIS ARTICLE, we review arguments and evidence for the claim that children's language acquisition is crucially connected to their development of a theory of mind (ToM). Theory of mind refers to the child's ability to understand that other people have minds, and those minds contain beliefs, knowledge, desires, and emotions that may be different from those of the child. If the child can figure out those contents, then other people's behavior will begin to make sense and, therefore, be predictable and explainable. … on interactions between language and ToM in both typically developing and language-delayed children. We end with suggestions for using language interventions to facilitate children's understanding of their own mental states as well as the minds of others.
THEORIES OF THE RELATIONSHIP BETWEEN LANGUAGE AND ToM
Many writers have made a convincing case …
- Consciousness and Theory of Mind: A Common Theory?
Consciousness and Theory of Mind: A Common Theory?
Keywords: Consciousness, Higher-Order Theories, Theory of Mind, Mind-reading, Metacognition.
1 Introduction
Many of our mental states are phenomenally conscious. It feels a certain way, or to borrow Nagel's expression, there is something it is like to be in these states. Examples of phenomenally conscious states are those one undergoes while looking at the ocean or at a red apple, drinking a glass of scotch or a tomato juice, smelling coffee or the perfume of a lover, listening to the radio or a symphonic concert, or feeling pain or hunger.
A theory of consciousness has to explain the distinctive properties that phenomenally conscious states have and other kinds of states lack. Higher-Order Representational (HOR) theories¹ attempt to provide such an explanation. According to these theories, phenomenally conscious states are those that are the objects of some kind of higher-order process or representation. There is something higher-order, a meta-state, in the case of phenomenally conscious mental states, which is lacking in the case of other kinds of states. According to these theories, consciousness depends on our Theory of Mind.
A Theory of Mind, henceforth ToM, is the ability of humans to identify their own mental states and attribute mental states different from their own to others. Such an ability can, at least conceptually, be decomposed into another two: mindreading and metacognition. Human beings are able to entertain representations of other people's mental states thanks to the mindreading ability. We attribute beliefs, perceptions, feelings or desires to other people and predict and explain their behaviour accordingly. …
- Artificial Intelligence and Big Data – Innovation Landscape Brief
ARTIFICIAL INTELLIGENCE AND BIG DATA
INNOVATION LANDSCAPE BRIEF
© IRENA 2019. Unless otherwise stated, material in this publication may be freely used, shared, copied, reproduced, printed and/or stored, provided that appropriate acknowledgement is given of IRENA as the source and copyright holder. Material in this publication that is attributed to third parties may be subject to separate terms of use and restrictions, and appropriate permissions from these third parties may need to be secured before any use of such material.
ISBN 978-92-9260-143-0
Citation: IRENA (2019), Innovation landscape brief: Artificial intelligence and big data, International Renewable Energy Agency, Abu Dhabi.
ACKNOWLEDGEMENTS: This report was prepared by the Innovation team at IRENA's Innovation and Technology Centre (IITC) with text authored by Sean Ratka, Arina Anisie, Francisco Boshell and Elena Ocenic. This report benefited from the input and review of experts: Marc Peters (IBM), Neil Hughes (EPRI), Stephen Marland (National Grid), Stephen Woodhouse (Pöyry), Luiz Barroso (PSR) and Dongxia Zhang (SGCC), along with Emanuele Taibi, Nadeem Goussous, Javier Sesma and Paul Komor (IRENA).
Report available online: www.irena.org/publications
For questions or to provide feedback: [email protected]
DISCLAIMER: This publication and the material herein are provided "as is". All reasonable precautions have been taken by IRENA to verify the reliability of the material in this publication. However, neither IRENA nor any of its officials, agents, data or other third-party content providers provides a warranty of any kind, either expressed or implied, and they accept no responsibility or liability for any consequence of use of the publication or material herein.
- Theory of Mind for a Humanoid Robot
Theory of Mind for a Humanoid Robot
Brian Scassellati, MIT Artificial Intelligence Lab, 545 Technology Square – Room 938, Cambridge, MA 02139 USA
[email protected] | http://www.ai.mit.edu/people/scaz/
Abstract. If we are to build human-like robots that can interact naturally with people, our robots must know not only about the properties of objects but also the properties of animate agents in the world. One of the fundamental social skills for humans is the attribution of beliefs, goals, and desires to other people. This set of skills has often been called a "theory of mind." This paper presents the theories of Leslie [27] and Baron-Cohen [2] on the development of theory of mind in human children and discusses the potential application of both of these theories to building robots with similar capabilities. Initial implementation details and basic skills (such as finding faces and eyes and distinguishing animate from inanimate stimuli) are introduced. I further speculate on the usefulness of a robotic implementation in evaluating and comparing these two models.
1 Introduction
Human social dynamics rely upon the ability to correctly attribute beliefs, goals, and percepts to other people. This set of metarepresentational abilities, which have been collectively called a "theory of mind" or the ability to "mentalize", allows us to understand the actions and expressions of others within an intentional or goal-directed framework (what Dennett [15] has called the intentional stance). The recognition that other individuals have knowledge, perceptions, and intentions that differ from our own is a critical step in a child's development and is believed to be instrumental in self-recognition, in providing a perceptual grounding during language learning, and possibly in the development of imaginative and creative play [9].
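The abstract above lists face and eye finding among the basic perceptual skills a robot needs before any theory-of-mind machinery can operate. As a rough illustration only, and not the detection pipeline used in Scassellati's system, a minimal face-and-eye finder could be sketched with OpenCV's Haar cascades; the test image filename is an assumption.

```python
# Minimal face-and-eye finding sketch using OpenCV Haar cascades.
# Illustrative only; not the detection system described in the paper.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def find_faces_and_eyes(image_bgr):
    """Return (face_box, [eye_boxes]) tuples; eye boxes are relative to their face box."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face_region = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(face_region, scaleFactor=1.1, minNeighbors=5)
        results.append(((x, y, w, h), [tuple(e) for e in eyes]))
    return results

if __name__ == "__main__":
    frame = cv2.imread("example.jpg")  # hypothetical test image path
    if frame is None:
        raise SystemExit("Place a test image named example.jpg next to this script.")
    print(find_faces_and_eyes(frame))
```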
- Mind Perception. Daniel R. Ames and Malia F. Mason, Columbia University
Mind Perception
Daniel R. Ames and Malia F. Mason, Columbia University
To appear in The Sage Handbook of Social Cognition, S. Fiske and N. Macrae (Eds.). Please do not cite or circulate without permission.
Contact: Daniel Ames, Columbia Business School, 707 Uris Hall, 3022 Broadway, New York, NY 10027, [email protected]
What will they think of next? The contemporary colloquial meaning of this phrase often stems from wonder over some new technological marvel, but we use it here in a wholly literal sense as our starting point. For millions of years, members of our evolving species have gazed at one another and wondered: what are they thinking right now … and what will they think of next? The interest people take in each other's minds is more than idle curiosity. Two of the defining features of our species are our behavioral flexibility—an enormously wide repertoire of actions with an exquisitely complicated and sometimes non-obvious connection to immediate contexts—and our tendency to live together. As a result, people spend a terrific amount of time in close company with conspecifics doing potentially surprising and bewildering things. Most of us resist giving up on human society and embracing the life of a hermit. Instead, most perceivers proceed quite happily to explain and predict others' actions by invoking invisible qualities such as beliefs, desires, intentions, and feelings and ascribing them without conclusive proof to others. People cannot read one another's minds. And yet somehow, many times each day, most people encounter other individuals and "go mental," as it were, adopting what is sometimes called an intentional stance, treating the individuals around them as if they were guided by unseen and unseeable mental states (Dennett, 1987).