Frameworks for Intelligent Systems

Frameworks for Intelligent Systems
CompSci 765, Meeting 3
Pat Langley
Department of Computer Science, University of Auckland

Outline of the Lecture

• Computer science as an empirical discipline
• Physical symbol systems
• List structures and list processing
• Reasoning and intelligence
• Intelligence and search
• Knowledge and intelligence
• Implications for social cognition

Computer Science as an Empirical Discipline

In their Turing Award article, Newell and Simon (1976) make some important claims:
• Computer science is an empirical discipline, rather than a branch of mathematics.
• It is a science of the artificial, in that it constructs artifacts of sufficient complexity that formal analysis is not tractable.
• We must study these computational artifacts as if they were natural systems, forming hypotheses and collecting evidence.
They propose two hypotheses based on their founding work in list processing and artificial intelligence.

Laws of Qualitative Structure

The authors introduce the idea of laws of qualitative structure, which are crucial for any scientific field's development:
• The cell doctrine in biology
• Plate tectonics in geology
• The germ theory of disease
• The atomic theory of matter
They propose two such laws, one related to mental structures and the other to mental processes.

Physical Symbol Systems

Newell and Simon's first claim, the physical symbol system hypothesis, states that:
• A physical symbol system has the necessary and sufficient means for general intelligent action.
They emphasize general cognitive abilities, such as humans exhibit, rather than specialized ones. This is a theoretical claim that is subject to empirical tests, but the evidence to date generally supports it.

More on Physical Symbol Systems

What do Newell and Simon mean by a physical symbol system?
• Symbols are physical patterns that are stable unless modified.
• Symbol structures or expressions are organized sets of symbols.
• A physical symbol system creates, modifies, copies, and destroys symbol structures in order to:
  • maintain structures that designate other objects or processes;
  • interpret expressions that designate such processes.
These ideas are agnostic about the nature of physical patterns; they can reside in neurons, in silicon chips, or on paper.

Development of the Hypothesis

Four historical developments during the 20th century led to the physical symbol system hypothesis:
• Studies in formal logic and symbol manipulation
• Turing machines and digital computers
• The concept of a stored program
• List processing and languages like IPL and Lisp
Later work on computer systems that designated and interpreted symbol structures built on these advances.

What is List Processing?

Newell, Shaw, and Simon introduced list processing in 1956; this paradigm involved three key ideas:
• Dynamic memory structures and mechanisms to alter them
• Data types and operations for different types
• Designation and manipulation of symbol structures
These support abstraction of structures and processes beyond the specific hardware on which they are implemented. They demonstrated these ideas in IPL, the first list-processing language, although Lisp soon became more widely used.

Why is List Processing Important?

This new framework was very important to AI's development because it could:
• Encode arbitrarily complex structural descriptions;
• Create new structural descriptions dynamically;
• Use such structures to designate other structures; and
• Interpret these structures to produce behavior.
As we will see, each of these abilities plays a crucial role in the construction of intelligent systems.

Lists and List Structures

List-processing techniques can encode very complex structural descriptions using:
• Symbols such as on, A, and B;
• Lists such as (on A B) and (eats agent John object soup);
• List structures such as (goal me (not (on ?any B))).
We can use such structures to represent an agent's beliefs and goals, as well as its rules and other forms of knowledge. Most classical AI systems have depended on list structures to encode such symbolic content.
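To ground these ideas, the sketch below (written in Python rather than IPL or Lisp, purely for illustration) encodes a few beliefs and a goal as nested list structures and walks through the basic symbol operations: creating, copying, destroying, and interpreting expressions. The make_expr and matches helpers are assumptions of this sketch, not constructs from the lecture.

# A minimal sketch of symbol structures as nested lists (illustrative, not from the slides).
import copy

# Beliefs and goals as list structures, e.g. (on A B), (goal me (not (on ?any B))).
beliefs = [("on", "A", "B"),
           ("on", "B", "table"),
           ("eats", "agent", "John", "object", "soup")]
goal = ("goal", "me", ("not", ("on", "?any", "B")))

def make_expr(*parts):
    """Create a new symbol structure dynamically."""
    return tuple(parts)

def matches(pattern, expr):
    """Tiny matcher: symbols beginning with '?' match any subexpression."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        return True
    if isinstance(pattern, tuple) and isinstance(expr, tuple):
        return (len(pattern) == len(expr) and
                all(matches(p, e) for p, e in zip(pattern, expr)))
    return pattern == expr

# The basic operations of a physical symbol system over these structures.
beliefs.append(make_expr("on", "C", "A"))     # create a new expression
snapshot = copy.deepcopy(beliefs)             # copy a structure
beliefs.remove(("on", "A", "B"))              # destroy a structure
print([b for b in beliefs if matches(("on", "?x", "A"), b)])  # interpret: what is on A?

Everything the program manipulates here is just a pattern of symbols; as the slides note, the same structures could equally well be realized in Lisp cons cells, in neurons, or on paper.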
Three Metaphors for Intelligence

Most research on AI adopts one of three distinct metaphors for understanding the mind:
• Intelligence as multi-step reasoning
• Intelligence as search through a maze
• Intelligence as retrieval of knowledge from memory
None of these metaphors is right or wrong, but each one offers important insights into the nature of mental processing.

Metaphor 1: Reasoning and Intelligence

One metaphor for intelligence emphasizes the human ability to carry out complex reasoning, including:
• Deduction, which draws logical conclusions from givens;
• Abduction, which finds plausible accounts of observations;
• Analogy, which maps new situations onto known ones.
AI researchers have developed systems that exhibit each of these cognitive capabilities. Despite differences in operation, they all operate over structured, relational encodings, stated as list structures.

Automated Reasoning

Some of the earliest AI research dealt with automated reasoning:
• Newell, Shaw, and Simon's (1957) Logic Theorist: the first AI system, based on studies of human reasoning, proved theorems in logic;
• Gelernter's (1959) Geometry Theorem Machine: proved geometry theorems with diagrams, some requiring constructions;
• Slagle's (1963) SAINT: solved final exam problems in symbolic integration from a first-year calculus course.
Automated reasoning methods are now mature enough to appear in software like Maple and to be used in hardware verification. Propagation of values in Excel is an even more widespread use.

SHRDLU (1970)

SHRDLU was an early AI system that interacted with users in natural language. It inferred sentence meanings to answer queries and executed complex commands. Although limited in scope, SHRDLU had many features of a cognitive system.

Person: Pick up a big red block.
Computer: OK.
Person: Grasp the pyramid.
Computer: I don't understand which pyramid you mean.
Person (changing their mind): Find a block which is taller than the one you are holding and put it into the box.
Computer: By "it", I assume you mean the block which is taller than the one I am holding.
Computer: OK.
Person: What does the box contain?
Computer: The blue pyramid and the blue block.
Person: What is the pyramid supported by?
Computer: The box.
Person: How many blocks are not in the box?
Computer: Four of them.
Person: Is at least one of them narrower than the one which I told you to pick up?
Computer: Yes, the red cube.

Metaphor 2: Intelligence and Search

A second metaphor for intelligence rests on an analogy with finding one's way through a physical maze. This view is commonly adopted for problem-solving tasks, which require one to find some sequence of steps that solves an unfamiliar problem. Instances include generating new plans, schedules, and designs. The search view is not antithetical to the reasoning metaphor, but the two emphasize different aspects of intelligence.

Problem Solving and Symbol Systems

Human intelligence includes the ability to solve novel problems. But how can we find solutions to problems when we do not already know the answers?
• This is the Meno Paradox that Plato first posed.
Fortunately, we can separate generators for candidate solutions from mechanisms for testing them.
• This division eliminates the apparent paradox.
But it requires the ability to represent candidate solutions and to search through the resulting problem space.

The Heuristic Search Hypothesis

This insight led Newell and Simon to propose their heuristic search hypothesis:
• A problem solver represents candidate solutions in terms of symbol structures;
• Problem solving involves a search process that generates and modifies these structures;
• The problem solver tests candidates to determine whether they are acceptable.
This process relies on heuristics because, in practice, one cannot search most problem spaces exhaustively.

Search and the Tower of Hanoi

Consider the puzzle known as the Tower of Hanoi, which involves three pegs and N disks. The task is to move all disks from the initial peg to another peg, which involves problem solving and search.

Problem Space for Tower of Hanoi

Starting from a single initial state, we can apply operators repeatedly to generate the entire problem space, or we can explore it more selectively until we reach the goal state.
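To illustrate the heuristic search hypothesis on the puzzle just described, here is a minimal sketch (again in Python, as an illustrative assumption rather than lecture code) that represents Tower of Hanoi states as symbol structures, uses a single move operator to generate successors, and tests candidates against the goal. It searches the three-disk space exhaustively with breadth-first search, since that space is small enough to enumerate.

# A minimal sketch of the Tower of Hanoi problem space: states are symbol
# structures, an operator generates successors, and a test recognizes the goal.
from collections import deque

N = 3  # number of disks; pegs are labelled 0, 1, 2

# A state is a tuple of three tuples, each listing a peg's disks from bottom to top.
initial = (tuple(range(N, 0, -1)), (), ())   # all disks start on peg 0
goal    = ((), (), tuple(range(N, 0, -1)))   # all disks end on peg 2

def successors(state):
    """Apply the move operator in every legal way: move a peg's top disk
    onto an empty peg or onto a larger disk."""
    for src in range(3):
        if not state[src]:
            continue
        disk = state[src][-1]
        for dst in range(3):
            if dst != src and (not state[dst] or state[dst][-1] > disk):
                pegs = [list(p) for p in state]
                pegs[src].pop()
                pegs[dst].append(disk)
                yield tuple(tuple(p) for p in pegs)

def search(start, is_goal):
    """Exhaustive breadth-first search through the problem space."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])

solution = search(initial, lambda s: s == goal)
print(len(solution) - 1, "moves")   # 7 moves for three disks (2**N - 1)

For spaces too large to enumerate, the same division between a generator and a test remains; heuristics then decide which candidates the generator pursues and in what order.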
The Logic Theorist

In 1956, Newell, Shaw, and Simon developed the Logic Theory Machine, the first running AI program.
• This system proved theorems in propositional logic, finding different proofs from those of Russell and Whitehead.
• The Logic Theorist demonstrated the use of list processing, heuristic search, and goal-directed reasoning.
• This was the first of an entirely new breed of computer programs.
It would be difficult to overestimate the system's impact on the new field of artificial intelligence.

AI Planning Systems

One important use of search mechanisms is to generate a plan for achieving some goal. AI planning methods have become mature enough to support a variety of applied systems, including:
• Automobile navigation systems that generate routes to follow;
• Orbitz and other travel sites, which propose airline itineraries;
• DART, which generated logistical plans for the US military;
• The Hubble Space Telescope, Mars rovers, and even copiers.
These systems are much less flexible than human planners, but they find solutions that people might overlook.

Game-Playing Programs

Early work