A Biosemiotic Framework for Artificial Autonomous Sign Users
Erich Prem
Austrian Research Institute for Artificial Intelligence
Freyung 6/2/2, A-1010 Vienna, Austria
[email protected]

Copyright © 2002, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

In this paper we critically analyse fundamental assumptions underlying approaches to symbol anchoring and symbol grounding. A conceptual framework inspired by biosemiotics is developed for the study of signs in autonomous artificial sign users. Our theory of reference uses an ethological analysis of animal-environment interaction. We first discuss semiotics with respect to the meaning of signals taken up from the environment of an autonomous agent. We then show how semantic issues arise in a similar way in the study of adaptive artificial sign users. Anticipation and adaptation play the important role of defining purpose, which is a necessary concept in the semiotics of learning robots. The proposed focus on sign acts leads to a semantics in which meaning and reference are based on the anticipated outcome of sign-based interaction. It is argued that such a novel account of semantics based on indicative acts of reference is compatible with merely indicative approaches in more conventional semiotic frameworks such as symbol anchoring approaches in robotics.

Introduction

Issues of semantics have a long history in the study of adaptive and evolving systems. Ever since the seminal work of Uexküll (1928) in biology, researchers have been interested in explaining how something like "meaning" is created in system-environment interaction. More recently, systems scientists (e.g. Pattee 86) have addressed these questions. In the area of Artificial Life (ALife) it is often robotics researchers who focus on problems related to signs and artificial systems: work in this field ranges from stigmergic communication (Mataric 95) to behaviour-based robots using signs and language (Steels & Vogt 97, Steels 01, Billard & Dautenhahn 97, 99). In particular, technical approaches to mapping objects in an autonomous robot's environment onto structures internal to the robot ("symbol anchoring") are an active field of research (Coradeschi & Saffiotti 03).

It is all the more surprising that recent publications in this area rarely address foundational issues concerning the semantics of system-environment interactions or other problems related to biosemiotics (a notable exception is Cangelosi 01). As we will discuss below, these approaches make little or no reference to specifically life-like or even robotic characteristics such as goal-directedness, purposiveness, or the dynamics of system-environment interaction. These features are, however, central to much of robotics and ALife. It is highly questionable whether technical approaches to symbol anchoring should be developed devoid of any sound theoretical foundation for concepts such as "meaning" or "reference". Until now, simplistic versions of Fregean or Peircean semiotics seem to have motivated existing technical symbol anchoring and symbol grounding proposals.

This is all the more regrettable, since robotics lends itself nicely as a tool for the study of semantic processes in life-like systems. Robot and ALife models offer the potential for systematic in-depth analysis of complex system-environment interactions where many (or all) parameters are known, simply because these systems have been constructed by their designers. For this approach to develop its full potential, however, it is necessary to first get a thorough understanding of the phenomenon under study. This is the aim of the work described here. It is performed in the context of learning robots in which the acquisition of object concepts and the learning of names for these objects play a central role.
In this paper, we investigate the nature of signs and semantics in relation to robotic systems. In particular, we propose a framework for the study of semantic aspects of autonomous (artificial) sign users with the aim of clarifying concepts such as "reference" and "meaning". We investigate the role symbols and other signs play in autonomous systems and address biosemantics from an ethological perspective, so as to develop a system-theoretic framework which also reconciles symbol anchoring with a sign-act perspective on meaning in robotic agents.

Semiotics in ALife

Semiotics generally refers to the study of signs. In a psychological (and semiotic) context, signs usually are physical objects that carry meaning for humans. From a more biology-oriented perspective, signs are simply signals that carry meaning. In what follows, we will use both characterisations, based on the assumption that physical signs also generate "signals" that can carry meaning in this sense.

The referent in traditional semiotics is what a sign stands for, e.g. "cow" stands for a cow in many contexts, and the same is true for the German word "Kuh". While syntax studies the relation of signs to other signs, semantics deals with the relation of signs to their "meaning" or, more precisely, their referent. Pragmatics then covers all the aspects that relate to the actual use of the sign by its interpreter. In what follows we will slightly blur this seemingly clear distinction between pragmatics and semantics. This is a natural consequence of focusing on life-like systems that exhibit purposeful behaviour in their environment and either actively use or passively "read" signs.

In the context of robotic, ALife, and AI systems, the interest in signs and their meaning arises from at least three different perspectives. The first originates in the aim of creating a system that uses signs for communicative acts with humans or other artificial systems. The underlying motivation here can either be to achieve this desired communication or to study processes of child-like language acquisition or even language evolution. The second perspective – usually found in robotics – is to connect the meaning of internal structures ("representations" or "anchors") to objects in the world. Here, the goal often is to create a model of the robot's environment for planning purposes. Finally, another perspective (often alluded to by the two others) focuses on the more philosophical "Symbol Grounding Problem" that arose in discussions following John Searle's famous "Chinese Room" argument. Harnad poses the question of how it is possible for an artificial agent to acquire symbols that possess intrinsic meaning (Harnad 90, 93), meaning which is not "parasitic" on the meaning of other symbols in our head.

In many ALife approaches, all these efforts result in the challenge of how to establish and maintain a relationship between sensory data of objects in the agent's environment and symbolic representations. The result of such a process is a set of descriptors for sensory signals that allow the classification of a part of the sensory stream as caused by an object in the environment of the agent. In the case of "anchoring" the focus lies on proper technical solutions, ranging from prototypes and feature-based approaches to more sophisticated dynamical-systems solutions (cf. Davidsson 95). In the case of making an agent use and "understand" human language, the focus is on the relationship between the agent's categories and human word use (or the "word use" of other robots).

In both cases – and this is the relationship to symbol grounding – the assumption usually is that there exists an internal mediating representation that captures the "meaning" of symbols (either as words or as program-internal constructs). In most examples described in the literature, symbols refer to static objects or to features of these objects. Verbs and other word categories are only rarely grounded in practice.
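To make the idea of such a mediating structure concrete, the following minimal Python sketch illustrates a prototype- and feature-based anchor of the kind alluded to above. It is our own illustration rather than code from this paper or from any of the cited anchoring systems; the class name, feature keys, and matching threshold are all invented.

from dataclasses import dataclass

@dataclass
class Anchor:
    symbol: str                  # e.g. "red-ball"
    prototype: dict              # expected feature values for the object
    percept: dict | None = None  # most recent matching sensor descriptor

    def matches(self, percept: dict, tolerance: float = 0.2) -> bool:
        # Feature-based match: every prototype feature must be close enough.
        return all(
            abs(percept.get(k, float("inf")) - v) <= tolerance * max(abs(v), 1.0)
            for k, v in self.prototype.items()
        )

    def reacquire(self, percepts: list[dict]) -> bool:
        # Maintain the symbol-percept link over time; True if re-anchored.
        for p in percepts:
            if self.matches(p):
                self.percept = p
                return True
        self.percept = None
        return False

ball = Anchor("red-ball", prototype={"hue": 0.95, "diameter": 0.07})
observed = [{"hue": 0.10, "diameter": 0.30}, {"hue": 0.93, "diameter": 0.071}]
print(ball.reacquire(observed), ball.percept)

The sketch is deliberately simple: the symbol's link to the world is exhausted by the match between an expected feature prototype and a currently observed percept.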
These approaches often root in a naïve view of language as a means of communicating information about situations to other agents, as if the purpose of language were to inform about a state of affairs. In ALife models the aim then is to automate the construction of such a language by an autonomous adaptive agent. In essence, this amounts to the automated generation of models of the environment that happen to be understandable by, e.g., humans, cf. (Prem 00). The semantics of the sign tokens used are thus components of such an image-oriented model in which the sole purpose of signs is to "represent" object-like states in a model of the environment. In what follows we propose a completely different approach to semantics for autonomous artificial sign users.

Active Sign Users

In contrast to the image-oriented view described above, we will focus on sign acts (and thus include aspects of what is usually termed pragmatics) in the analysis of artificial sign users. Following the approach of the philosopher Martin Heidegger ("Being and Time"), the semiotician Charles W. Morris ("Signs, Language, and Behaviour"), and the work of the linguist J.L. Austin ("How to Do Things with Words"), we regard signs as tools that enable agents to pursue their goals. Consider the following examples of using signs or "sign acts":

Sign Act        Behaviour
Greeting        The agent reacts to a greeting or salutation of another agent with a specific greeting behaviour.
Set mark        The agent marks an interesting location or object in the environment so as to retrieve it later more easily.
Warn            Produce an alarm signal to make group members aware of danger or make them run away.
Flee            React to a warning signal by running away.
Follow arrow    The agent moves in the direction to which an arrow points.
Find place      The agent navigates to a designated place. Examples include "here", "there", "home", etc.

Table 1. Examples of sign using behaviours (sign acts).

This list can be easily extended to include more language-like behaviours, e.g.

Uexküll proposed to analyse living beings based on a close study of the interaction of the system with its environment and on a detailed analysis of their sensors and actuators.
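For contrast with the anchoring sketch given earlier, the following schematic Python sketch renders the sign-act view behind Table 1. Again, this is our own illustration with invented names, behaviours, and outcomes, not an implementation from the paper: a sign is handled by performing a behaviour, and what the sign "means" to the agent is captured by the outcome it anticipates from acting on it.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SignAct:
    sign: str             # the sign the agent can "read"
    behaviour: Callable   # what the agent does in response (takes the agent)
    anticipated: str      # the outcome the agent expects from acting

class Agent:
    def __init__(self) -> None:
        self.location = "field"
        # Two behaviours from Table 1: fleeing on a warning, following an arrow.
        self.repertoire = [
            SignAct("alarm-call", lambda a: a.move("shelter"), "out of danger"),
            SignAct("arrow", lambda a: a.move("marked place"), "at the indicated target"),
        ]

    def move(self, place: str) -> str:
        self.location = place
        return f"moved to {place}"

    def interpret(self, sign: str) -> str:
        # Meaning as use: respond to a sign by performing the associated act;
        # what the sign "means" to the agent is the outcome it anticipates.
        for act in self.repertoire:
            if act.sign == sign:
                return f"{act.behaviour(self)} (anticipating: {act.anticipated})"
        return "sign not in repertoire"

agent = Agent()
print(agent.interpret("alarm-call"))   # flee behaviour triggered by a warning signal
print(agent.interpret("arrow"))        # follow-arrow behaviour

Note that no percept-to-symbol mapping appears here at all; the semantic content resides in the anticipated consequences of the agent's own sign-directed action.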