
Cognitive Robotics: Analysis of Preconditions and Implementation of a Cognitive Robotic System for Navigation Tasks

Stefano Bennati [[email protected]]
Università degli Studi di Brescia, Brescia, Italy

Marco Ragni [[email protected]]
Department of Cognitive Science, Friedrichstrasse 50, 79098 Freiburg, Germany

Abstract

Cognitive robotics is a fascinating field in its own right and comprises both key features of autonomicity and cognitive skills like learning behavior. Cognitive architectures aim at mirroring human memory and assumptions about mental processes. A robot does not only extend the cognitive architecture regarding real-world interaction, but also brings new and important challenges regarding perception and action processes. Cognitive robotics is a step towards embodied cognition and may have an influence not only on cognitive science but on robotics as well. This paper presents an integration of the cognitive architecture ACT-R with a simple but programmable robotic system. The system is evaluated on a navigation experiment.

Keywords: Cognitive Science; Robotics; Mindstorms; ACT-R; Navigation

Introduction

From the early beginnings of robotics, one line of research has tried to bring human cognition and robotics closer together (Brooks et al., 1999). Nowadays, technological progress in the field of robotics and the development of cognitive architectures allow for a leap forward: a robot able to navigate an environment, with the ability to learn and a human-like attention shift.

[Figure 1: Layouts of two mazes. The task is to find the target object (red dot) at the edge or center (Buechner et al., 2009).]

This new and exciting field is sometimes referred to as Cognitive Robotics [1]. The combination of the two fields leads to a number of important research questions: What are the immediate advantages of cognitive robotics (a term we will use in the following for a robot controlled by a cognitive architecture) over classical robotics? Is the cognitive architecture (which is partially able to simulate human learning processes) restricting or improving navigation skills?

[1] "Towards a Cognitive Robotics", Clark, Andy, https://www.era.lib.ed.ac.uk/handle/1842/1297

In cognitive science, new research focuses on embodied cognition. Embodied cognition claims that understanding (especially of spatial problems) is derived from the environment (Anderson, 2003). In other words, cognition is not independent of its physical realization. The study described in Buechner et al. (2009) used a virtual reality environment. Participants had to navigate through a labyrinth in the ego-perspective and had to find an initially specified goal (red dot). The study identified recurrent navigation strategies (which we introduce later) used by the subjects.

Modeling navigation tasks, for instance in labyrinths, still poses a challenge for cognitive architectures: Although they can model decision processes, they typically abstract from the metrical details of the world, from sensor input, and from the processes that integrate environmental input into actions (like move operations). Robotics, on the other hand, has typically captured all of these aspects, but does not necessarily make use of human-like learning and reasoning processes or even try to explain human errors or strategies.

Compared to humans, a robotic agent has limited perceptions and capabilities. On the other hand, it can teach us something about which perceptions are already sufficient for the robotic agent to perform successfully. For instance, in the navigation task above (cp. Fig. 1), the distance from the wall (in the direction the robot is facing) and the color of the floor the robot is standing on are sufficient.
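To illustrate how compact this percept is, the following minimal sketch encodes it as a single ACT-R chunk. The chunk-type name percept and the slot names wall-distance and floor-color are illustrative choices made here for exposition; they are not identifiers taken from the implementation described below.

    ;; Minimal sketch: the two percepts that suffice for the navigation
    ;; task, encoded as one ACT-R chunk. Names are illustrative only.
    ;; (Both forms belong inside a define-model body.)
    (chunk-type percept
      wall-distance    ; free space (cm) in the direction the robot faces
      floor-color)     ; color of the floor tile under the robot

    ;; Example instance: 35 cm of free space ahead, standing on white floor.
    (add-dm (p1 isa percept wall-distance 35 floor-color white))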
Much research is being done in the field of cognitive robotics; some prototypes of cognitive robots have already been built [2]. This research concentrates on human-robot and robot-environment interaction, allowing the robots to recognize and interact with objects and people through their visual and auditory modules (Sofge et al., 2004). The architecture proposed there contains Path Planning and Navigation routines based on the Vector Field Histogram (Borenstein and Koren, 1991) that allow the robot to navigate while avoiding obstacles and to explore the environment. Unlike that architecture, the objective of this paper is to implement a Cognitive Navigation System which is completely based on ACT-R and takes advantage of its cognitive features, such as Utility Learning, which allows Reinforcement Learning (Russell and Norvig, 2003, p. 771) on productions. No software other than ACT-R will be used to control the robot. That work has the merit of having taken the first steps towards interfacing ACT-R with a mobile robot, but the data is still incomplete and it tries to combine as many different abilities as possible (from natural language processing to parallel computing) in a manner that is likely more complex than necessary. Our approach starts at the other end, taking only those aspects and sensor data into account that are necessary to perform the task – the most simple robot.

[2] e.g., Cognitive Robotics with the architecture ACT-R/E, http://www.nrl.navy.mil/aic/iss/aas/CognitiveRobots.php

Other research has studied the possibility of interfacing ACT-R with a robot and giving it direct control over the robot's actions. That effort produced an extension of ACT-R called ACT-R/E(mbodied) (Trafton, 2009; Harrison and Trafton, 2010). ACT-R/E contains new modules that act as an interface between the cognitive model and the Mobile-Dexterous-Social (MDS) robot (Breazeal et al., 2008), allowing it to perceive the physical world through a video camera. However, navigation was not investigated, as the robot did not navigate an environment. In both that implementation and ours, ACT-R has been extended. The main difference between the two is the smaller number of changes made to standard ACT-R by our implementation, owing to the higher complexity of the sensors on the MDS.

Consequently, this article investigates, first, how to control a robot through ACT-R and, second, whether this cognitive robot shows human-like behavior while navigating and searching for a goal, e.g., an exit from a maze. The paper is structured in three parts: The first part – The Elements – contains a brief description of ACT-R, the robot and its features. The second part – the Integration – describes the software that connects ACT-R with the robot, through which perceptions can reach the cognitive architecture and actions the actuators. The third part – the Evaluation – tests the robot on a navigation task and analyzes the results.

The Elements

The Robotic System

Our device of choice is a standard Mindstorms [3] class robot (cf. Fig. 2). It consists of a central "brick" that contains the processor and the batteries. Several peripherals can be attached to this core component: it supports up to three motors and up to four sensors.

[3] This type of robot is produced by The Lego Group.

[Figure 2: The Mindstorms class robot. It is equipped with a color sensor, an ultrasonic sensor and two touch sensors.]

Chassis design. The chassis is the structure to which the central brick, the motors and the sensors are attached. To limit odometry errors, two fixed, independent driving wheels have been used instead of caterpillar tracks. A third, central fixed idle wheel allows the robot to keep its balance. The structure resembles a differential drive robot, with the exception that the third wheel cannot pivot; removing the tire from that wheel allows the robot to turn without too much difficulty.

Sensors. A robot can be equipped with several kinds of sensors, from a simple sound or light sensor to more complex ones like a compass or a webcam. Our design makes use of only the most basic sensors: a touch sensor activated by pressure, a color sensor capable of recognizing light intensity or six tonalities of color, and an ultrasonic sensor for distance measurements. The color sensor and the ultrasonic sensor are fixed to the front, while a touch sensor is placed on each side. Each touch sensor is linked to a structure that covers the front of the robot on that side. The touch sensors are not used during navigation but to stop the robot when it touches a wall.

Integrating Architecture and Robot

The first step towards a cognitive robot is to create a working interface between the ACT-R framework and the NXT platform. This interface allows an ACT-R model to control the robot and to receive sensor input from it. Due to the modular structure of ACT-R, this interface is itself composed of modules.

Low-level Lisp functions. At the basis of these modules lies a library called nxt-lsp (Hiraishi, 2007) that provides low-level Lisp functions for executing simple tasks such as interrogating a sensor or moving a motor. The capabilities of this library are the following: it can read information about the environment from the light sensor, the distance sensor and the touch sensors; it can make the robot perform actions, like moving its motors, turning the light on and off and playing sounds; and it can check the robot's internal state by querying the battery or the motors. The library did not support the color sensor, so it has been extended to support that sensor as well, which is necessary for our purposes.
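The sketch below illustrates the kind of perception-action step such a low-level layer makes possible. The function names read-distance-sensor, read-color-sensor and rotate-motor, as well as the motor ports :b and :c, are placeholders introduced here for illustration; they do not reproduce the actual nxt-lsp API.

    ;; Hypothetical wrappers: each stands in for one low-level call
    ;; (the real nxt-lsp function names and signatures are not shown here).
    (defun read-distance-sensor ()
      "Placeholder for the ultrasonic reading: free space ahead in cm."
      255)

    (defun read-color-sensor ()
      "Placeholder for the color reading of the floor below the robot."
      'white)

    (defun rotate-motor (port degrees)
      "Placeholder for turning the motor attached to PORT by DEGREES."
      (declare (ignore port degrees))
      t)

    ;; One perception-action step of the kind the interface modules perform:
    ;; poll the two front sensors, advance if there is free space, and hand
    ;; the percept on to the cognitive layer.
    (defun sense-and-step ()
      (let ((distance (read-distance-sensor))
            (color    (read-color-sensor)))
        (when (> distance 20)        ; drive only if the wall is far enough
          (rotate-motor :b 360)      ; left driving wheel, one full turn
          (rotate-motor :c 360))     ; right driving wheel, one full turn
        (list :distance distance :color color)))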
Extending ACT-R

The interface is composed of several modules with different functions; this design keeps the individual parts simple and follows the general design of ACT-R.

Evaluation: Navigation in a Labyrinth

We decided to test the cognitive robot on several self-built labyrinths. The environment is perceived by the robot.
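To make concrete how ACT-R's utility learning can serve as reinforcement learning over navigation decisions, as claimed above, the fragment below sketches two competing productions at a junction. The chunk-type, production names, state values and reward are illustrative assumptions, not the model actually run in the experiments.

    ;; Sketch: utility learning over competing navigation productions.
    ;; All names and values are illustrative only.
    (define-model nav-sketch

      (sgp :ul t :egs 0.5)          ; enable utility learning, add utility noise
      (chunk-type navigate state)   ; goal chunk: current navigation state

      ;; Two productions compete whenever the robot stands at a junction;
      ;; which one fires depends on their learned utilities (plus noise).
      (p turn-left-at-junction
         =goal> isa navigate state at-junction
       ==>
         =goal> state turning-left)

      (p turn-right-at-junction
         =goal> isa navigate state at-junction
       ==>
         =goal> state turning-right)

      ;; Firing this production delivers a reward that is propagated back
      ;; to recently fired productions, raising the utility of the choices
      ;; that led to the target.
      (p target-found
         =goal> isa navigate state on-target
       ==>
         -goal>)

      (spp target-found :reward 10))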