ARTIFICIAL INTELLIGENCE & AGENTS

What is AI?

• AI is both science and engineering:
  – the science of understanding intelligent entities — of developing theories which attempt to explain and predict the nature of such entities;
  – the engineering of intelligent entities.

• Four views of AI:
  1. AI as acting humanly — as typified by the Turing test.
  2. AI as thinking humanly — cognitive science.
  3. AI as thinking rationally — as typified by logical approaches.
  4. AI as acting rationally — the intelligent agent approach.

Acting Humanly

• A problem that has greatly troubled AI researchers: when can we count a machine as being intelligent?
• The most famous response is due to Alan Turing, British mathematician and computing pioneer — the Turing test: a human interrogates the entity via teletype for 5 minutes. If, after 5 minutes, the human cannot tell whether the entity is human or machine, then the entity must be counted as intelligent.
• Note that there is no visual or aural component to the basic Turing test — an augmented test involves video & audio feeds to the entity.
• No program has yet passed the Turing test! (Annual Loebner competition & prize.)
• A program that succeeded would need to be capable of:
  – natural language understanding & generation;
  – knowledge representation;
  – learning;
  – automated reasoning.

Thinking Humanly

• Try to understand how the mind works — how do we think?
• Two possible routes to find answers:
  – by introspection — we figure it out ourselves!
  – by experiment — draw upon the techniques of psychology to conduct controlled experiments. ("Rat in a box"!)
• The discipline of cognitive science: particularly influential in vision, natural language processing, and learning.

Thinking Rationally

• Trying to understand how we actually think is one route to AI — but how about how we should think?
• Use logic to capture the laws of rational thought as symbols.
• Reasoning involves shifting symbols according to well-defined rules (like algebra). The result is idealised reasoning.
• The logicist approach is theoretically attractive, but has lots of problems:
  – transduction — how to map the environment to a symbolic representation;
  – representation — how to represent real-world phenomena (time, space, ...) symbolically;
  – reasoning — how to do symbolic manipulation tractably, so it can be done by real computers!
• We are still a long way from solving these problems, and in general logic-based approaches are unpopular in AI at the moment.

Acting Rationally

• Acting rationally = acting to achieve one's goals, given one's beliefs.
• An agent is a system that perceives and acts; an intelligent agent is one that acts rationally w.r.t. the goals we delegate to it.
• The emphasis shifts from designing the theoretically best decision-making procedure to the best decision-making procedure possible in the circumstances.
• Logic may be used in the service of finding the best action — not as an end in itself.
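To make the logicist picture concrete, here is a minimal Python sketch of "reasoning as symbol shifting" used in the service of action: beliefs are symbols, rules rewrite them according to fixed premises-imply-conclusion patterns, and the agent acts on whatever it derives. The predicates, rules, and action names are invented for illustration, not taken from the lecture.

```python
# A minimal sketch of logic in the service of action selection.
# Beliefs and conclusions are plain symbols (strings); rules shift
# symbols by a fixed, well-defined pattern, as described above.
# All predicates, rules, and actions here are invented examples.

def forward_chain(beliefs: set[str],
                  rules: list[tuple[frozenset[str], str]]) -> set[str]:
    """Apply (premises -> conclusion) rules until no new facts appear."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [
    (frozenset({"raining", "outside"}), "getting_wet"),
    (frozenset({"getting_wet"}), "want_shelter"),
]
goal_actions = {"want_shelter": "go_indoors"}  # goals delegated to the agent

beliefs = {"raining", "outside"}
derived = forward_chain(beliefs, rules)
chosen = [action for goal, action in goal_actions.items() if goal in derived]
print(chosen)  # ['go_indoors']
```

Even this toy version hints at the tractability problem listed above: naive forward chaining revisits every rule on every pass, which does not scale to realistic knowledge bases.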
• Achieving perfect rationality — making the best decision theoretically possible — is not usually possible, due to limited resources:
  – limited time;
  – limited computational power;
  – limited memory;
  – limited or uncertain information about the environment.
• The trick is to do the best with what you've got! This is easier than doing perfectly, but still tough.

Brief History of AI

1943–56:
• McCulloch & Pitts (1943): artificial neural net — proved equivalent to a Turing machine.
• Shannon and Turing (1950): chess-playing programs.
• Marvin Minsky (1951): first neural net computer — SNARC.
• Dartmouth College (1956): the term "AI" coined by John McCarthy; Newell & Simon presented the LOGIC THEORIST program.

1956–70:
• Programs written that could plan, learn, play games, prove theorems, solve problems.
• Major centres established:
  – Minsky — MIT
  – McCarthy — Stanford
  – Newell & Simon — CMU
• A major feature of the period was the use of microworlds — toy problem domains. Example: the blocks world.

1970s:
• A period of recession for AI (the Lighthill report in the UK).
• Techniques developed on microworlds would not scale.
• The implications of complexity theory, developed in the late 1960s and early 1970s, began to be appreciated:
  – brute-force techniques will not work;
  – "works in principle" does not mean "works in practice". ("It'll scale, honest...")

1980s:
• General-purpose, brute-force techniques don't work, so use knowledge-rich solutions.
• The early 1980s saw the emergence of expert systems: systems capable of exploiting knowledge about tightly focussed domains to solve problems normally considered the domain of experts.
• Ed Feigenbaum's knowledge principle: "[Knowledge] is power, and computers that amplify that knowledge will amplify every dimension of power."
• Expert systems success stories:
  – MYCIN — diagnosing blood diseases in humans;
  – DENDRAL — interpreting mass spectra;
  – R1/XCON — configuring DEC VAX hardware;
  – PROSPECTOR — finding promising sites for mineral deposits.
• Expert systems emphasised knowledge representation: rules, frames, semantic nets.
• Problems:
  – the knowledge elicitation bottleneck;
  – marrying expert systems & traditional software;
  – breaking into the mainstream.

Early 1990s:
• Most companies set up to commercialise expert systems technology went bust. The "AI as homeopathic medicine" viewpoint was common.
• Increased interest in machine learning — a reaction against expert systems?
• Rise of statistical methods.
• AI as a component, rather than as an end in itself:
  – the "useful first" paradigm — Etzioni (NETBOT, US$35m);
  – the "raisin bread" model — Winston.

Late 1990s onwards:
• Emphasis on understanding the interaction between agents and environments.
• The agent paradigm becomes commonplace.
• Robotics for the masses.

Intelligent Agents

• An agent is a system that is situated in an environment, and which is capable of perceiving its environment and acting in it to satisfy its design objectives. Pictorially: the agent receives percepts from the environment through its sensors, and acts on the environment through its effectors.
• Human "agent":
  – environment: physical world;
  – sensors: eyes, ears, ...;
  – effectors: hands, legs, ...
• Software agent:
  – environment: (e.g.) the UNIX operating system;
  – sensors: ls, ps, ...;
  – effectors: rm, chmod, ...
• Robot:
  – environment: physical world;
  – sensors: sonar, camera;
  – effectors: wheels.
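The software-agent row above can be sketched directly, treating the UNIX system as the environment, `ls` as a sensor, and `rm` as an effector. This is a hypothetical condition–action agent — the backup-file clean-up policy is invented — and the effector is deliberately a dry run so the sketch is safe to execute.

```python
# A sketch of a software agent whose environment is a UNIX system:
# `ls` plays the role of a sensor, `rm` the role of an effector.
# The clean-up policy is an invented example; the effector only
# prints what it would do, so running this changes nothing.

import subprocess

def sense_files(directory: str = ".") -> list[str]:
    """Sensor: perceive the environment by running `ls`."""
    result = subprocess.run(["ls", directory], capture_output=True, text=True)
    return result.stdout.split()

def act_remove(path: str) -> None:
    """Effector: `rm` would alter the environment; dry run here."""
    print(f"would run: rm {path}")

# Condition-action policy: tidy away editor backup files ("*~").
for name in sense_files():
    if name.endswith("~"):
        act_remove(name)
```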
What to do?

    Those who do not reason
    Perish in the act.
    Those who do not act
    perish for that reason.
        (W. H. Auden)

• The key problem we have is knowing the right thing to do.
• Knowing what to do can in principle be easy: consider all the alternatives, and choose the "best".
• But in any time-constrained domain, we have to make a decision in time for that decision to be useful! A tradeoff.

• Ideal rational agent: for each percept sequence, an ideal rational agent will act to maximise its expected performance measure, on the basis of the information provided by the percept sequence plus any information built in to the agent.
• Note that this does not preclude performing actions to find things out.
• More precisely, we can view an agent as a function

    f : P* → A

  from sequences of percepts P to actions A.

Accessible vs inaccessible

• An accessible environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment's state.
• Most moderately complex environments (including, for example, the everyday physical world and the Internet) are inaccessible.
• The more accessible an environment is, the simpler it is to build agents to operate in it.

Deterministic vs non-deterministic

• A deterministic environment is one in which any action has a single guaranteed effect — there is no uncertainty about the state that will result from performing an action.
• The physical world can to all intents and purposes be regarded as non-deterministic.
• Non-deterministic environments present greater problems for the agent designer.

Episodic vs non-episodic

• In an episodic environment, the performance of an agent depends on a number of discrete episodes, with no link between the performance of the agent in different episodes. An example of an episodic environment would be a mail-sorting system.
• Episodic environments are simpler from the agent developer's perspective because the agent can decide what action to perform based only on the current episode — it need not reason about the interactions between this and future episodes.

Static vs dynamic

• A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent.
• A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent's control. The physical world is a highly dynamic environment.

Discrete vs continuous

• An environment is discrete if there is a fixed, finite number of actions and percepts in it.
• Russell and Norvig give a chess game as an example of a discrete environment, and taxi driving as an example of a continuous one. (These distinctions are pulled together in the sketch following the summary.)

Summary

• This lecture has looked at:
  – the history of AI;
  – the notion of intelligent agents;
  – a classification of agent environments.
• Broadly speaking, the rest of the course will cover the major techniques of AI, with special reference to agents.
• The techniques we'll look at will start with those applicable to simple environments and move towards those suitable for more complex environments.
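As a closing illustration, the five environment distinctions above fit naturally into a small record type. This is a sketch only; the property values assigned to Russell and Norvig's two examples are the usual textbook judgments rather than something stated in the lecture.

```python
# A sketch encoding the five environment properties discussed above.
# The values for chess and taxi driving follow the usual textbook
# judgments; they are illustrative, not taken from the lecture.

from dataclasses import dataclass

@dataclass(frozen=True)
class EnvironmentProfile:
    accessible: bool     # complete, accurate, up-to-date state available?
    deterministic: bool  # single guaranteed effect per action?
    episodic: bool       # episodes independent of one another?
    static: bool         # unchanged except by the agent's own actions?
    discrete: bool       # fixed, finite sets of actions and percepts?

    def is_benign(self) -> bool:
        """The easiest case for an agent designer: 'yes' on all five."""
        return all([self.accessible, self.deterministic,
                    self.episodic, self.static, self.discrete])

chess = EnvironmentProfile(accessible=True, deterministic=True,
                           episodic=False, static=True, discrete=True)
taxi_driving = EnvironmentProfile(accessible=False, deterministic=False,
                                  episodic=False, static=False, discrete=False)

print(chess.is_benign())         # False -- chess is not episodic
print(taxi_driving.is_benign())  # False -- hard on every count
```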