
Biologically Inspired Cognitive Architectures II: Papers from the AAAI Fall Symposium (FS-09-01)

The GLAIR Cognitive Architecture

Stuart C. Shapiro and Jonathan P. Bona
Department of Computer Science and Engineering and Center for Cognitive Science
State University of New York at Buffalo
{shapiro|jpbona}@buffalo.edu

Copyright © 2009, Stuart C. Shapiro and Jonathan P. Bona. All rights reserved.

Abstract

GLAIR (Grounded Layered Architecture with Integrated Reasoning) is a multi-layered cognitive architecture for embodied agents operating in real, virtual, or simulated environments containing other agents. The highest layer of the GLAIR Architecture, the Knowledge Layer (KL), contains the beliefs of the agent, and is the layer in which conscious reasoning, planning, and act selection is performed. The lowest layer of the GLAIR Architecture, the Sensori-Actuator Layer (SAL), contains the controllers of the sensors and effectors of the hardware or software robot. Between the KL and the SAL is the Perceptuo-Motor Layer (PML), which grounds the KL symbols in perceptual structures and subconscious actions, contains various registers for providing the agent's sense of situatedness in the environment, and handles translation and communication between the KL and the SAL.

The motivation for the development of GLAIR has been "Computational Philosophy", the computational understanding and implementation of human-level intelligent behavior without necessarily being bound by the actual implementation of the human mind. Nevertheless, the approach has been inspired by human psychology and biology.

1. Introduction

GLAIR (Grounded Layered Architecture with Integrated Reasoning) is a multi-layered cognitive architecture for embodied agents operating in real, virtual, or simulated environments containing other agents (Hexmoor, Lammens, and Shapiro 1993; Lammens, Hexmoor, and Shapiro 1995; Shapiro and Ismail 2003). It was an outgrowth of the SNePS Actor (Kumar and Shapiro 1991). Our motivating goal has been what is called "Computational Philosophy" in (Shapiro 1992), that is, the computational understanding and implementation of human-level intelligent behavior without necessarily being bound by the actual implementation of the human mind. Nevertheless, our approach has been inspired by human psychology and biology.

Although GLAIR is a cognitive architecture appropriate for implementing various cognitive agents, we tend to name all our cognitive agents "Cassie." So whenever in this paper we refer to Cassie, we mean one or another of our implemented GLAIR agents.

2. GLAIR as a Layered Architecture

2.1 The Layers

The highest layer of the GLAIR Architecture, the Knowledge Layer (KL), contains the beliefs of the agent, and is the layer in which conscious reasoning, planning, and act selection is performed.

The lowest layer of the GLAIR Architecture, the Sensori-Actuator Layer (SAL), contains the controllers of the sensors and effectors of the hardware or software robot.

Between the KL and the SAL is the Perceptuo-Motor Layer (PML), which itself is divided into three sublayers. The highest, the PMLa, grounds the KL symbols in perceptual structures and subconscious actions, and contains various registers for providing the agent's sense of situatedness in the environment. The lowest of these, the PMLc, directly abstracts the sensors and effectors into the basic behavioral repertoire of the robot body. The middle PML layer, the PMLb, handles translation and communication between the PMLa and the PMLc.

2.2 Mind-Body Modularity

The KL constitutes the mind of the agent; the PML and SAL, its body. However, the KL and PMLa layers are independent of the implementation of the agent's body, and can be connected, without modification, to a hardware robot or to a variety of software-simulated robots or avatars. Frequently, the KL, PMLa, and PMLb have run on one computer; the PMLc and SAL on another. The PMLb and PMLc handle communication over IP sockets.¹

¹ Other interprocess communication methods might be used in the future.

3. The KL: Memory and Reasoning

The KL contains the beliefs of the agent, including: short-term and long-term memory; semantic and episodic memory; quantified and conditional beliefs used for reasoning; plans for carrying out complex acts and for achieving goals; beliefs about the preconditions and effects of acts; policies about when, and under what circumstances, acts should be performed; self-knowledge; and metaknowledge.

The KL is the layer in which conscious reasoning, planning, and act selection is performed. The KL is implemented in SNePS (Shapiro and Rapaport 1992; Shapiro 2000b; Shapiro and The SNePS Implementation Group 2008), which is simultaneously a logic-based, frame-based, and network-based knowledge representation and reasoning system that employs various styles of inference as well as belief revision.

As a logic-based KR system, SNePS implements a predicate logic with variables, quantifiers, and function symbols. Although equivalent to First-Order Logic, its most unusual feature is that every well-formed expression is a term, even those that denote propositions (Shapiro 1993). This allows for metapropositions, propositions about propositions, without restriction and without the need for an explicit Holds predicate (Morgado and Shapiro 1985; Shapiro et al. 2007). For example, the asserted term Believe(B8,Rich(B8)), in the context of the asserted term Propername(B8,Oscar), denotes the proposition that Oscar believes himself to be rich (Rapaport, Shapiro, and Wiebe 1997). SNePS supports forward, backward, and bidirectional reasoning (Shapiro 1987; Shapiro, Martins, and McKay 1982) using a natural-deduction proof theory, and belief revision (Martins and Shapiro 1988).

Every functional term in SNePS is represented as an assertional frame in which the argument positions are slots and the arguments are fillers. This allows for sets of arguments to be used to represent combinatorially many assertions. For example, instanceOf({Fido, Lassie, Rover}, {dog, pet}) might be used to represent the assertion that Fido, Lassie, and Rover are dogs and pets. It also allows sets to be used for symmetric relationships; for example, adjacent({US, Canada}) can represent the assertion that the US and Canada are adjacent to each other (Shapiro 1986). The frame view of SNePS supports "slot-based inference", whereby an asserted frame logically implies one with a subset or superset of fillers in given slots (Shapiro 2000a).

By treating the terms as nodes and the slots as labeled directed arcs, SNePS can be used as a propositional network (Shapiro and Rapaport 1987). This supports a style of inference driven by following paths in the network (Shapiro 1978; 1991).

3.1 The Active Connection Graph

Reasoning is performed by an active connection graph (ACS) (McKay and Shapiro 1980; 1981). Viewing the SNePS knowledge base as a propositional graph, every […] and has not yet been satisfied.

The ACS is key to SNePS' bidirectional inference (Shapiro, Martins, and McKay 1982; Shapiro 1987). Inference processes are created both by backward inference and by forward inference. If such a process is needed and already exists, a forward-chaining process (producer) adds its results to the process's collection, and a backward-chaining process (consumer) is added to the producer-process's consumers to be notified. If a query is asked that can't be answered, the processes established for it remain, and can be found by subsequent forward inferences. When new beliefs are added to the KL with forward inference, and existing consumer-processes are found for them, new consumer-processes are not established. The result of this is that, after a query, additional new information is considered in light of this concern. In other words, a GLAIR agent working on a problem considers relevant new data only as it relates to that problem, focusing its attention on it.

The ACS can be deleted. It is then reestablished the next time a forward or backward inference begins. In this way the GLAIR agent changes its attention from one problem to another. When this change of attention happens is, however, currently rather ad hoc. A better theory of when it should happen is a subject of future research.

3.2 Contexts

Propositions may be asserted in the KL because they entered from the environment: either they were told to the agent by some other agent, possibly a human, or they are the result of some perception. Alternatively, a proposition might be asserted in the KL because it was derived by reasoning from some other asserted propositions. We call the former hypotheses and the latter derived propositions. When a proposition is derived, an origin set, consisting of the set of hypotheses used to derive it, is stored with it (Martins and Shapiro 1988) à la an ATMS (de Kleer 1986). At each moment, some particular context, consisting of a set of hypotheses, is current. The asserted propositions, the propositions the GLAIR agent believes, are the hypotheses of the current context and those derived propositions whose origin sets are subsets of that set of hypotheses. If some hypothesis is removed from the current context (i.e., is disbelieved), the derived propositions that depended on it remain in the KL, but are no longer believed. If all the hypotheses in the origin set of a derived proposition return to the current context, the derived proposition is automatically believed again, without having to be rederived (Martins and Shapiro 1988).