
Journal of Computing and Information Technology

Iterative Design of Advanced Mobile Robots

François Michaud¹∗, Dominic Létourneau¹, Éric Beaudry², Maxime Fréchette¹, Froduald Kabanza², and Michel Lauria¹

¹Department of Electrical Engineering and Computer Engineering, Université de Sherbrooke, Québec, Canada. ²Department of Computer Science, Université de Sherbrooke, Québec, Canada.

Integration of hardware, software and decisional components is fundamental in the design of advanced mobile robotic systems capable of performing challenging tasks in unstructured and unpredictable environments. We address such integration challenges following an iterative design strategy, centered on a decisional architecture based on the notion of motivated selection of behavior-producing modules. This architecture evolved over the years from the integration of obstacle avoidance, message reading and touch screen graphical interfaces, to localization and mapping, planning and scheduling, sound source localization, tracking and separation, and speech recognition and generation on a custom-made interactive robot. Designed to be a scientific robot reporter, the robot provides understandable and configurable interaction, intention and information in a conference setting, reporting its experiences for on-line and off-line analysis. This paper presents the integration of these capabilities on this robot, revealing interesting new issues in terms of planning and scheduling, coordination of audio/visual/graphical capabilities, and monitoring the uses and impacts of the robot's decisional capabilities in unconstrained operating conditions. This paper also outlines new design requirements for our next design iteration, adding compliance to the locomotion and manipulation capabilities of the platform, and natural interaction through gestures using arms, facial expressions and the robot's pose.

Keywords: Integration, Decisional architecture, Task planning and scheduling, Human-robot interaction, Iterative design.

∗F. Michaud holds the Canada Research Chair (CRC) in Mobile Robotics and Autonomous Intelligent Systems. Support for this work is provided by the Natural Sciences and Engineering Research Council of Canada, the Canada Research Chair program and the Canadian Foundation for Innovation.

1. Introduction

The dream of having autonomous machines performing useful tasks in everyday environments, as unstructured and unpredictable as they can be, has motivated research in robotics and artificial intelligence for many decades now. However, intelligent and autonomous mobile robotics is still in what can be identified as research phases, i.e., mostly basic research and concept formulation, and some preliminary uses of the technology with attempts at clarifying underlying ideas and generalizing the approach [Shaw 2002]. The challenge in making such a dream become a reality lies in the intrinsic complexities and interdependencies of the necessary components to be integrated in a robot. A mobile robot is a system with structural, locomotive, sensing, actuating, energizing and processing capabilities, which set the resources available to deal with the operating conditions of the environment. Such resources are exploited and coordinated based on decisional processes and higher cognitive functions, according to the intended tasks.

In his Presidential Address at the 2005 AAAI (American Association for Artificial Intelligence) Conference, Ronald Brachman argued that Artificial Intelligence (AI) is a system science that must target working in the wild, messy world [Brachman 2006]. This was the initial goal of AI, which proved too difficult to address with the technology of 50 years ago, leading to the creation of more specialized subfields. The same can be said about the robotics scientific community. The underlying complexities in these disciplines are difficult enough to resolve that a "divide-and-conquer" approach is justified. This explains why only a few initiatives around the world tackle this challenge directly. But those that do always identify new ways of solving the integration issues, outlining guidelines and specifications for new research questions or for new technologies aiming to improve robotic capabilities. Continuous technological progress makes it possible to push the state-of-the-art in robotics closer to its use in day-to-day applications, and it is by directly addressing the integration challenges that such an objective can and will be accomplished.

Designing a mobile robot that must operate in public settings probably addresses the most complete set of issues related to autonomous and interactive mobile robots, with system integration playing a fundamental role at all levels. A great variety of robots have been and are currently being designed to operate in natural settings, interacting with humans in the wild, messy world. As the most recent and pertinent related work to our project, we can identify non-mobile child-size humanoid robots (e.g., Kaspar, from the RobotCub project, to investigate the use of gestures, expressions, synchronization and imitation; Huggable [Stiehl & Breazeal 2005], equipped with full-body sensate skin and silent voice coil-type electromagnetic actuators with integrated position, velocity and force sensing means); adult-size humanoid torsos (e.g., Dexter [Brock et al. 2005], for grasping and manipulation using tactile load cells on the fingers, investigating decomposition-based planning, knowledge acquisition and programming by demonstration; Domo [Edsinger-Gonzales & Weber 2004], using stereo vision, force sensing compliant actuators (FSCA) and series elastic actuators (SEA) [Pratt & Williamson 1995] to investigate alternative approaches to robot manipulation in unstructured conditions); differential-drive platforms with manipulators (e.g., B21, PeopleBot and Care-O-Bot platforms, usually equipped with one robotic manipulator); differential-drive mobile torsos (e.g., Robonaut, from NASA, installed on a Segway Robotic Mobility Platform base and teleoperated; the Battlefield Extraction-Assist Robot BEAR from Vecna Robotics-US, using a leg-track/dynamic balancing platform to carry fully-weighted human casualties from a teleoperation base); and omnidirectional humanoid bases (e.g., ARMAR III [Asfour et al. 2006], using load cells, torque sensors, tactile skin, stereo vision and a six-microphone array in the head, with a three-tier (planning, coordination, execution) control architecture to accomplish service tasks in a kitchen; HERMES [Bischoff & Graefe 2004], combining stereo vision, kinesthetic, tactile array bumper and single omnidirectional auditory sensing with natural spoken language (input via voice, keyboard or e-mail) in a situation-oriented skill-based architecture).

Even though each of these robots presents interesting functionalities in terms of motion (omnidirectional, articulated torso) and interaction skills (gestures, expressions, tactile sensing, force-controlled actuators, stereo vision, natural language, manipulating objects and collaborating with humans in task-oriented applications), they do not go far in terms of cognition (e.g., autonomous decisions in open settings, adaptation to a variety of situations). They also make compromises on the advanced capabilities of the integrated elements: not all of them use state-of-the-art technologies. These platforms are however necessary in making significant contributions for moving toward higher levels of cognition (e.g., future research with Domo targets the implementation of a motivational system with homeostatic balance between basic drives and a cognitive layer, plus incremental learning of sensorimotor experiences [Edsinger-Gonzales & Weber 2004]).

In opposition, our work initiated from cognitive architectural considerations, and moved to software and hardware requirements of a mobile robot operating in real life settings. EMIB (Emotion and Motivation for Intentional selection and configuration of Behavior-producing modules) was our first computational architecture used to integrate all necessary decisional capabilities of an autonomous robot [Michaud 2002]. It was implemented on a Pioneer 2 robot entry in the 2000 AAAI Mobile Robot Challenge. A robot had to start at the entrance of the conference site, find the registration desk, register, perform volunteer duties (e.g., guard an area) and give a presentation [Maxwell et al. 2004]. Our team was the first and only one to attempt the event from start to end, using sonars as proximity sensors, navigating in the environment by reading written letters and symbols using a pan-tilt-zoom (PTZ) camera, interacting with people through a touch-screen interface, displaying a graphical face to express the robot's emotional state, determining what to do next using a finite-state machine (FSM), recharging itself when needed, and generating a HTML report of its experience [Michaud et al. 2001]. We then started to work on a more advanced platform that we first presented in the 2005 AAAI Mobile Robot Competition [Michaud et al. 2007, Michaud et al. 2005]. Our focus was on integrating perceptual and decisional components addressing all four issues outlined during the AAAI events over the years, i.e.: 1) the robust integration of different software packages and intelligent decision-making capabilities; 2) natural interaction modalities in open settings; 3) adaptation to environmental changes for localization; and 4) monitoring/reporting decisions made by the robot [Gockley et al. 2004, Smart et al. 2003, Maxwell et al. 2004, Simmons et al. 2003]. Trials conducted at the 2005 AAAI event helped establish new requirements for providing meaningful and appropriate modalities for reporting and monitoring the robot's states and experiences.

When operating in open settings, interactions are rapid, diverse and context-related. Having an autonomous robot determine on its own when and what it has to do based on a variety of modalities (time constraints, events occurring in the world, requests from users, etc.) also makes it difficult to understand the robot's behavior just by looking at it. Therefore, we concentrated our integration effort in 2006 on designing a robot that can interact and explain, through speech and graphical displays, its decisions and its experiences as they occur (for on-line and off-line diagnostics) in open settings. This paper first presents the software/hardware components of our 2006 AAAI robot entry, its software and decisional architectures, and the technical demonstration made at the conference, along with the interfaces developed for reporting the robot's experiences. This outlines the progress made so far in addressing the integration challenge of designing autonomous robots. The paper then describes the design specification of our new advanced mobile robot platform, derived from what we have identified as critical elements in making autonomous systems operate in everyday environments.

2. A scientific robot reporter

Our 2006 AAAI robot entry, shown in Figure 1, is a custom-built robot equipped with a SICK LMS200 laser range finder, a Sony SNC-RZ30N 25X pan-tilt-zoom color camera, a Crossbow IMU400CC inertial measurement unit, an array of eight microphones placed in the robot's body, a touch screen interface, an audio system, one on-board computer and two laptop computers. The robot is equipped with a business card dispenser, which is part of the robot's interaction modalities. A small camera (not shown in the picture) was added on the robot's base to allow the robot to detect the presence of electric outlets. The technique used is based on a cascade of boosted classifiers working with Haar-like features [Lienhart & Maydt 2002] and a rule-based analysis of the images.

Figure 1: Our 2006 AAAI robot entry (front view, back view).

Figure 2 illustrates the robot's software architecture. It integrates Player as the sensor and actuator abstraction layer [Vaughan, Gerkey, & Howard 2003] and CARMEN (Carnegie Mellon Robot Navigation Toolkit) for path planning and localization [Montemerlo, Roy, & Thrun 2003]. Also integrated but not shown in the figure are Stage/Gazebo for 2D and 3D simulation, and the Pmap library² for 2D mapping, all designed at the University of Southern California. RobotFlow and FlowDesigner (FD) [Cote et al. 2004] are also used to implement the behavior-producing modules, the vision modules for reading messages [Letourneau, Michaud, & Valin 2004] and SSLTS, our real-time sound source localization, tracking [Valin, Michaud, & Rouat 2006] and separation [Valin, Rouat, & Michaud 2004] system. For speech recognition and dialogue management, we interfaced the CSLU toolkit (http://cslu.cse.ogi.edu/toolkit/). One key feature of CSLU is that each grammar can be compiled at runtime, and it is possible to use dynamic grammars that can change on-the-fly before or while the dialogue system is running. Software integration of all these components is made possible using MARIE, a middleware framework oriented towards developing and integrating new and existing software for robotic systems [Cote et al. 2006b, Cote et al. 2006a].

²http://robotics.usc.edu/~ahoward/pmap/

Figure 2: Our robot's software architecture.
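To give an idea of how the outlet detection described above can be realized, here is a minimal sketch using OpenCV's cascade classifier. The cascade file name and the two filtering rules are our own illustrative assumptions, not the robot's published code:

```python
# Illustrative sketch of a cascade-plus-rules outlet detector; the file
# outlet_cascade.xml is a placeholder for a trained Haar cascade model.
import cv2

def detect_outlets(frame, cascade_path="outlet_cascade.xml"):
    cascade = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Cascade of boosted classifiers working with Haar-like features
    candidates = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=4, minSize=(16, 16))
    outlets = []
    for (x, y, w, h) in candidates:
        # Rule-based analysis of the image (simplified assumption): outlets
        # seen from the base camera sit low in the frame and are roughly square.
        if y + h > 0.6 * frame.shape[0] and 0.7 < w / h < 1.4:
            outlets.append((x, y, w, h))
    return outlets
```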

2.1. Decisional architecture and modules

For our participation in the AAAI 2006 Mobile Robot Competition, our robot is designed to be a scientific robot reporter, in the sense of a human-robot interaction research assistant. The objective is to have our robot provide understandable and configurable interaction, intention and information in unconstrained environmental conditions, reporting the robot's experiences for scientific data analysis. The robot is programmed to respond to requests from people, with only one intrinsic goal of wanting to recharge when its energy is getting low (by either going to an outlet identified on the map or searching for one, and then asking to be plugged in). Our robot's duty is to address these requests to the best of its capabilities and of what is 'robotically' possible. Such requests may be to deliver a written or a vocal message to a specific location or to a specific person, to meet at a specific time and place, to schmooze, etc. The robot may receive multiple requests at different periods, and has to autonomously manage what, when and how it can satisfy them.

With our robot, we assume that the most appropriate approach for making a robot navigate in the world and accomplish various tasks is through the use of behavior-producing modules (also known as behavior-based systems [Arkin 1998]). As a result, representation and abstract reasoning working on top of these modules become a requirement, with the objective of being able to anticipate the effects of decisions while still adapting to the inherently complex properties of open and unconstrained conditions of natural living environments.

The decisional architecture developed for our robot's decision-making capabilities is shown in Figure 3. It is based on the notion of motivated selection of behavior-producing modules. We refer to it as MBA, for Motivated Behavioral Architecture [Michaud et al. 2007, Michaud et al. 2005]. It is composed of three principal components:

1. Behavior-producing modules (BPMs) define how particular percepts and conditions influence the control of the robot's actuators. Their name represents their purpose. The actual use of a BPM is determined by an arbitration scheme, realized through the Arbitration and Behaviour Interfaces module (implementing in our case a priority-based / subsumption arbitration), and by the BPM activation conditions derived by the Behaviour Selection and Configuration module (explained later).

2. Motivational sources (or Motivations, serving to make the robot do certain tasks) recommend the use or the inhibition of tasks to be accomplished by the robot. Motivational sources are categorized as either instinctual, rational or emotional. This is similar to considering that the human mind is a "committee of the minds," with instinctual, rational, emotional, etc. minds competing and interacting [Werner 1999]. Instinctual motivations provide basic operation of the robot using simple rules. Rational motivations are more related to cognitive processes, such as navigation and planning. Emotional motivations monitor conflictual or transitional situations between tasks. The importance of each of these influences explains why manifested behavior can be more influenced by direct perception, by reasoning or by managing commitments and choices. By distributing motivational sources and distinguishing their roles, it is possible to exploit more efficiently the strengths of various influences regarding the tasks the robot must accomplish, while not having to rely on only one of them for the robot to work. This is similar in essence to the basic principles of behavior-based control, but at a higher abstraction level.

3. The Dynamic Task Workspace (DTW) serves as the interface between motivational sources and BPMs. Through the DTW, motivations exchange information asynchronously on how to activate, configure and monitor BPMs. The interface between the BPMs and the motivational sources is done through tasks in the DTW. Tasks are data structures that are associated with particular configurations and activations of one or multiple behaviors. The DTW organizes tasks in a tree-like structure according to their interdependencies, from high-level/abstract tasks (e.g., 'Deliver message') to primitive/BPM-related tasks (e.g., 'Avoid'). Motivations can add and modify tasks by submitting modification requests, queries or subscriptions to events regarding a task's status.

Figure 3: Our robot's decisional architecture.
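To make the interplay between these components concrete, here is a minimal sketch (our illustration in Python, not the actual MBA implementation) of motivations posting tasks into a shared workspace, recommendations being combined per behavior, and a priority-based / subsumption arbitration granting actuator control to a single BPM:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    behavior: str        # BPM associated with this task
    desirability: float  # recommendation in [-1, 1]

class DynamicTaskWorkspace:
    """Shared space where motivations asynchronously post and modify tasks."""
    def __init__(self):
        self.tasks = []

    def post(self, task):
        self.tasks.append(task)

    def active_behaviors(self):
        # Combine recommendations per behavior; keep positively recommended ones.
        votes = {}
        for t in self.tasks:
            votes[t.behavior] = votes.get(t.behavior, 0.0) + t.desirability
        return {b for b, v in votes.items() if v > 0.0}

def arbitrate(active, priority_order):
    # Subsumption-style: the highest-priority active behavior wins control.
    for behavior in priority_order:
        if behavior in active:
            return behavior
    return None

dtw = DynamicTaskWorkspace()
dtw.post(Task("Recharge", "LocateElectricalOutlet", 0.8))   # instinctual source
dtw.post(Task("DeliverMessage", "Goto", 0.5))               # rational source
print(arbitrate(dtw.active_behaviors(),
                ["Avoid", "LocateElectricalOutlet", "Goto"]))
```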

By exchanging information through the DTW, motivations are kept generic and independent from each other, allowing motivations to distributively come up with behavior configurations based on the capabilities available to the robot. For instance, one instinctual motivational source may monitor the robot's energy level to issue a recharging task in the Dynamic Task Workspace, which activates a 'Locate Electrical Outlet' behavior that would make the robot detect and dock in a charging station. Meanwhile, if the robot knows where it is and can determine a path to a nearby charging station, a path planning rational motivation can add a subtask of navigating to this position, using a 'Goto' behavior. Otherwise, the 'Locate Electrical Outlet' behavior will at least allow the robot to recharge opportunistically, when the robot perceives a charging station nearby.

Only instinctual and rational motivations are considered in this study, with rational motivations having greater priority over instinctual ones in case of conflicts. Survive makes the robot move safely in the world while monitoring its energy level. Planner determines which primitive tasks and which sequence of these tasks are necessary to accomplish higher-level tasks under temporal constraints and the robot's capabilities (as defined by BPMs). This motivational source is detailed in Section 2.1.1. Navigator determines the path to a specific location to accomplish tasks posted in the DTW. Request Manager handles tasks requested by the users via the touch screen interface or from vocal commands. Task requests are then processed by Planner and added to the actual plan if appropriate.

With multiple tasks being issued by the motivational sources, the Behavior Selection and Configuration module determines which behaviors are to be activated according to recommendations made by motivational sources, with or without particular configurations (e.g., a destination to go to). A recommendation can either be negative, neutral or positive, or take on real values within this range regarding the desirability of the robot to accomplish specific tasks. Activation values reflect the resulting robot's intentions derived from interactions between the motivational sources. Behavior use and other information coming from behaviors that can be useful for task representation and monitoring are also communicated through the Behavior Selection and Configuration module.

2.1.1. Task planning motivation

This motivational source invokes, when a new task is placed in the DTW, a planning algorithm that combines principles from the SHOP2 HTN planning algorithm [Nau et al. 2003] and SAPA [Do & Kambhampati 2003]. As in SHOP2, we specify a planning domain (e.g., the domain of attending a conference at AAAI) by describing the robot's primitive behaviors in terms of template tasks and methods for recursively decomposing template tasks down to primitive tasks, which are atomic actions. For instance, we can specify that the task of making a presentation at location px is from time t1 to time t2, and that it can be decomposed into the simpler subtasks of going to px and presenting at time t1. Decompositions of tasks into simpler ones are given with preconditions under which the decomposition is logically sound, and time constraints for ensuring its consistency. Given such a set of task specifications, the planning algorithm consists of searching through the space of tasks and world states.

More specifically, starting from a given set of initial tasks and a current state describing the robot state and environment situation, the planner explores a space of nodes where each node represents: the current list of subtasks; the current robot state and environment situation; and the current plan (initially empty). A current node is expanded into successors by decomposing one of the current tasks into simpler ones (using the specified task decomposition methods) or by validating a current primitive task against the current world state (this is done by checking its precondition and updating the current world state using the primitive task's effects) and adding it to the current plan. The search process stops when a node is reached with no more tasks to decompose. On a node, there can be different ways of decomposing a task and different orders in which to decompose them; backtracking is invoked to consider the different alternatives until obtaining a solution. This can be a source of exponential blow-up during the search process. By carefully engineering the decomposition methods to convey some search control strategy, it is possible to limit this state explosion [Nau et al. 2003].

Task Planning with Temporal Constraints. A normal HTN planning algorithm does not consider temporal constraints [Nau et al. 2003, Nau, Ghallab, & Traverso 2004]. To implement them, we modified the basic HTN planning algorithm by: (a) adding a current-time variable into the representation of the current state during search; (b) specifying time constraints based on this variable in the preconditions of task decomposition methods and of primitive tasks; and (c) allowing the specification of conditional effect updates for primitive tasks based on this variable. That way, we extend the HTN concept to support time-constrained tasks by adding temporal constraints at each decomposition level. The planner can use these temporal constraints to add partial orders on tasks. These partial orders reduce the search space and accelerate the plan generation process. These temporal constraints can also be transferred when a high-level task is decomposed into lower-level tasks.

When defining temporal intervals in the domain specification, care must be taken to establish a good compromise between safety and efficiency for task execution. Being too optimistic may cause plan failure because of lack of time. For instance, assuming that the robot can navigate at high speed from one location to the other will cause a plan to fail if unpredictable events slow down the robot. To solve this problem, the solution is to be conservative in the domain model and assume the worst case scenario. On the other hand, being too conservative may lead to no solution. Therefore, temporal intervals in the domain model are specified using an average speed (0.14 m/s) lower than the real average speed (0.19 m/s) of the robot.

Over the last years, efforts have been made in creating formal techniques for planning under temporal and resource uncertainty [Bresina et al. 2002]. To keep our approach simple and to avoid having to deal with complex models of action duration, we use the following heuristic: planning is initiated using a conservative action duration model and, when a possible opportunity (using the temporal information associated with the tasks) is detected, the planner is reinvoked to find a better plan. In other words, if the robot proceeds faster than what is planned (i.e., the end of the updated projected action is earlier than the end of the planned action), it is possible that a better plan exists, and replanning therefore occurs.

Time Windowing. Using a technique borrowed from SAPA [Do & Kambhampati 2003], our planner post-processes the plan generated by the search process to obtain a plan with time windows for actions. During search, each node is associated with a fixed time stamp (that is, the current-time variable), and at the end of the search process we obtain a plan consisting of a sequence of actions, each assigned a time stamp indicating when it should be executed. This sequence of actions is post-processed based on time constraints in the domain specification (i.e., constraints attached to task decomposition methods and primitive tasks) to derive time intervals within which actions can be executed without jeopardizing the correctness of the plan.

Task Filtering and Priority Handling. In a traditional planning setting, a list of initial tasks is considered as a conjunctive goal. If the planner fails to find a plan that achieves them all, it reports failure. In the context of the AAAI Challenge, as well as in many real life situations, we expect the robot to accomplish as many tasks as possible, with some given preferences among tasks. For instance, the robot may fail to deliver one message at the right place, but successfully deliver another one. This would be acceptable depending on the environment conditions.

We implement a robot mission as a list of tasks, each associated with a degree of preference or priority level. We then iteratively use the HTN planner to obtain a plan for a series of approximations of the mission, with decreasing levels of accuracy. Specifically, we initially call the planner with the entire mission; if it fails to compute a plan within a deadline set empirically in the robot architecture (typically 1 second), the lowest priority task is removed from the mission and the planner is called with the remaining mission. This is repeated until a solution plan is found. If the mission becomes empty before a solution is found, failure is returned (the entire mission is impossible). Clearly, this model can easily be configured to include vital tasks that cause failure whenever any of them is not achievable. We also use some straightforward static analysis of the mission to exclude, for instance, low priority tasks that apparently conflict with higher priority ones. A sketch of this filtering loop is given below.

Anytime Task Planning. Similarly to SAPA [Do & Kambhampati 2003] and SHOP2 [Nau et al. 2003], our planner has an anytime planning capability, which is important in mobile robotic applications. When a plan is found, the planner tries to optimize it by testing alternative plans for the same mission until the empirical maximum planning time is reached.

On-line Task Planning. The integration of planning capabilities into a robotic system must take into account real-time constraints and unexpected external events that conflict with previously generated plans. To reduce processing delays in the system, our planner is implemented as a library. The MARIE application adapter that links the planner to the rest of the computational architecture loads the planner library, its domain and world representation (e.g., operators, preconditions and effects) at initialization. The planner remains in memory and does not have to be loaded each time it is invoked. A navigation table (providing the distances between each pair of waypoints) also remains in memory and is dynamically updated during the mission. External events are accounted for by monitoring the execution of a plan and validating that each action is executed within the time intervals set in the plan. Violating conditions are handled by specific behavioral modules, including one that activates re-planning.
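The task filtering loop referred to above can be sketched as follows (our illustration; the plan_mission helper stands in for the HTN planner and is assumed to return None when it finds no plan within its timeout):

```python
def plan_with_priorities(tasks, plan_mission, deadline=1.0):
    # tasks: list of (priority, task) pairs; higher value = more important.
    mission = sorted(tasks, key=lambda pt: pt[0], reverse=True)
    while mission:
        plan = plan_mission([task for _, task in mission], timeout=deadline)
        if plan is not None:
            return plan      # plan found for the current mission approximation
        mission.pop()        # drop the lowest-priority task and retry
    return None              # the entire mission is impossible
```

Vital tasks, as mentioned in the text, could be modeled by refusing to pop them and returning failure instead.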

The robot can receive task requests at any time during a mission. This requires the robot to update its plan even if it is not at a specified waypoint in its world representation. When generating a task plan, the duration of going from one waypoint to another is instantiated dynamically based on the current environment. That is, the duration of a Goto(X) action is not taken from the navigation table but is rather estimated from the actual distance to the targeted waypoint X, as provided by an internal path-planning algorithm. Therefore, Planner only has to deal with high-level navigation tasks, and it is not necessary to plan intermediate waypoints between two waypoints that are not directly connected. To estimate the duration for navigating from one waypoint to another, the planner uses a navigation table of size n², where n is the number of defined waypoints. This table is initialized by computing the minimum distance between waypoints with a classic A* algorithm. Each time a new waypoint is added on the map, the distance table is updated dynamically.
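As a sketch of this table (our illustration; the waypoint graph, coordinates and helper names are assumed), the minimum distances can be computed with A* over the waypoint graph, and a Goto duration can then be estimated using the conservative 0.14 m/s speed mentioned earlier:

```python
import heapq, math

def a_star(graph, coords, start, goal):
    # graph: waypoint -> {neighbor: edge length}; straight-line heuristic.
    h = lambda n: math.dist(coords[n], coords[goal])
    frontier = [(h(start), 0.0, start)]
    best = {start: 0.0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for nxt, cost in graph[node].items():
            ng = g + cost
            if ng < best.get(nxt, math.inf):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return math.inf

def build_navigation_table(graph, coords):
    # n x n table of minimum distances between all defined waypoints.
    return {a: {b: a_star(graph, coords, a, b) for b in graph} for a in graph}

coords = {"Hall": (0, 0), "Alley": (10, 0), "Station": (10, 8)}
graph = {"Hall": {"Alley": 10}, "Alley": {"Hall": 10, "Station": 8},
         "Station": {"Alley": 8}}
table = build_navigation_table(graph, coords)
print(table["Hall"]["Station"] / 0.14)  # conservative Goto duration estimate (s)
```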
2.1.2. Interaction modalities

With a distributed cognitive architecture integrating a high number of capabilities to be used in open conditions, it is difficult to assess what is going on simply by looking at the robot's behavior, or by analyzing log files off-line. More dynamic and interactive tools are required. Here is the set of modalities designed to improve our understanding of the integrated modalities (vision, audition, graphical, navigation) on our robot.

• Visualization tools of decisional components. The underlying objective of such tools is to facilitate integration work by offering visualization of the decisional components added to the robot. This requires coherent and user-friendly representations of the integrated components, so that developers can concentrate on their combined usage instead of spending time extracting and interpreting data in log files. In 2005, we developed a log viewing application, allowing us to monitor off-line the complete set of states of the robot through log files created by the different software components [Michaud et al. 2007]. In 2006, we extended this tool to provide a dynamic and real-time view of what the robot is actually doing and planning to do, on-line. We call this updated version the WorkspaceViewer. With the WorkspaceViewer, it is now possible to display, directly on our robot's graphical interface, contextual information according to its current plan (e.g., behavioral configuration, map with dynamic places, active tasks). In addition, we can still use the basic off-line log viewing application to replay logs off-line.

Figure 4 shows a representation of the WorkspaceViewer's main window. The upper section contains a timeline view of DTW events (first line), planner events (second line), and behaviors' activations and exploitations (third line). The bottom section shows detailed information according to the position of the vertical marker on the timeline: a list of the DTW's tasks and properties (first window from the left), active motivations (second window), the current plan (third window), the map of the environment and the trajectory of the robot (fourth window), and the behaviors with their activations and exploitations (under the first window). The WorkspaceViewer is directly connected to the DTW and displays information as it becomes available in real-time. The WorkspaceViewer is also linked to the Request Manager motivation to handle user requests.

• Visualization of the SSLTS results. Audio data is difficult to monitor in dynamic and open conditions: it is not persistent as a visual feature can be, for instance. An interface representing where the sound sources around the robot are detected, and what was recorded by the robot, revealed to be necessary from what we experienced during our 2005 participation. Figure 5 illustrates the interface developed. The upper part shows the angle of the perceived sound sources around the robot, in relation to time. The interface shows in real-time the sound sources perceived, using dots of a distinct color. The sound sources are saved, and the user can select them and play back what the SSLTS generated. This interface proved to be very valuable for explaining what the robot can hear, for constructing a database of typical audio streams in a particular setting, and for analyzing the performance of the entire audio module (allowing us to diagnose potential difficulties that the speech recognition module has to face).

Figure 4: WorkspaceViewer’s main window.

For instance, more than 6600 audio streams were recorded over the twelve hours of operation of the robot in open settings at the AAAI 2006 conference, held at the Seaport Hotel and World Trade Center in Boston. From this set of audio streams, we observed that around 61% were audio content (people talking nearby the robot, sounds of different sorts) adequately processed by our SSLTS system (i.e., the audio streams were correctly separated and located, and are understandable by a human listener). However, they were not related to specific interactions with the robot, and therefore revealed to be useless for interacting with our robot. 3.5% of the audio streams were inaudible to a human listener, and an additional 34% were either very short (e.g., caused by a noise burst or laughter) or truncated (e.g., when the robot starts talking, to avoid making the robot hear itself talk; sound amplitude too low). Implementation problems caused these limitations, as we diagnosed after the event. This left around 1.5% of audio streams intended for specific interaction with the robot adequately processed by the SSLTS system. All were correctly processed using CSLU in specific interactions our robot had with human interlocutors. This clearly demonstrates the challenge of making an artificial audition system for operation in open conditions. Such auditory capability working in real-time in such conditions has not yet been demonstrated on other robots. The performance observed is impressive, but the challenges to overcome to make it more robust and efficient are still quite important. We will continue to improve SSLTS' performance by not conducting speech recognition on audio streams that are too short (i.e., < 1 sec) and by making it possible to concurrently run multiple speech recognition processes (currently only one stream can be processed at a time).

Figure 5: Track audio interface.

• Multimodal interaction interfaces. It is sometimes difficult to know exactly which task is prioritized by the robot at a given time, making the interlocutors unable to know what the robot is actually doing and to evaluate its performance. So we made the robot indicate its intention verbally and also graphically (in case, for whatever reason, the interlocutor is not able to understand the message). Figure 6 illustrates the graphical interface with which our robot is requesting assistance to find a location in the convention center.

Figure 6: Graphical display for an assistance request to find a location (in laboratory conditions).

The graphical interfaces of the robot were made so that users can indicate through push buttons (using the touch screen) what they want our robot to do. We also made it possible for the users to make these requests verbally, saying out loud the names of the buttons. To facilitate the recognition process, specific grammar and dialogue managers are loaded in CSLU for each graphical mode. The audio and graphical interactions are therefore tightly integrated together.
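As an illustration of this per-mode coupling (our sketch only; this is not the CSLU toolkit's API), a dialogue manager can rebuild its accepted vocabulary from the buttons currently displayed, so that a spoken button name and a touch event trigger the same request:

```python
class ModeGrammar:
    """Toy dynamic grammar: one set of valid utterances per graphical mode."""
    def __init__(self):
        self.grammars = {}

    def register_mode(self, mode, button_labels):
        # Rebuilt whenever the graphical mode (screen) changes.
        self.grammars[mode] = {label.lower() for label in button_labels}

    def recognize(self, mode, utterance):
        # Only the names of the visible buttons are accepted commands.
        return utterance.lower() in self.grammars.get(mode, set())

ui = ModeGrammar()
ui.register_mode("assistance", ["Waterfront Room", "Corridor", "Cancel"])
print(ui.recognize("assistance", "corridor"))  # True
```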

2.2. Demonstration and results

Our robot demonstrated its capabilities in the AAAI 2006 Human-Robot Interaction Event, evaluated in a technical and a public demonstration. Our robot entered five of the seven categories by doing the following:

• Natural Language Understanding and Action Execution: Following requests from humans, written or verbal, for making the robot do different tasks.

• Perceptual Learning Through Human Teaching: Learning of a location, specified by humans or identified by the robot (i.e., electrical outlet), and being able to remember it.

• Perception, Reasoning, and Action: Temporal reasoning, planning and scheduling of tasks and intents.

• Shared Attention, Common Workspace, Intent Detection: Directing the camera view in the direction of the speaker, communicating temporal constraints regarding tasks, and use of the dynamic (context-configured) viewer, on-line and off-line.

• Integration Challenge Categories 3 to 6: Robot Scientific Reporter/Assistant demo, with understandable interaction, decisions and situations experienced in unconstrained conditions.

Figure 7: Our robot’s intended trajectory for the technical presentation.

Our technical demonstration, programmed to last 25 minutes (setting an overall fixed time constraint for the mission), consisted of five phases done in the area represented in Figure 7:

1. INTRODUCTION. Presentation (max 4:30 min) at Alley of our robot's features: track audio viewer for separated sources; directing the camera view toward the speaker, with snapshots memorized every second; graphic displays for user input; context-based grammar; pre-mapping of the area; Gantt chart representation of the plan.

2. NAVIGATION. Go to Corridor to receive a message (voice attachment or text) to be delivered at WaterfrontRoom. No time constraints are specified for this task. A specific graphical interface is displayed by the WorkspaceViewer to get the message. The Planner inserts this task, if time permits, between existing tasks (which are time constrained).

3. PLANNED RECHARGE. Initiate a recharging task, asking somebody to plug the robot into an electrical outlet. This should occur no later than 25 minutes after the beginning of the presentation, at Station. However, Station is an unknown location when the demonstration starts. There are two ways the location can be determined: from user input with the touch screen interface, or automatically added if an electrical outlet is perceived.

4. PLAYBACK. Accelerated replay of what the robot experienced during the demonstration. The WorkspaceViewer program starts an automatic replay from the beginning of the demonstration, and stops at important events. Judges can then see a snapshot of the active behaviors, the plan, the trajectory of the robot and the active tasks. Playback occurs 15 minutes after the beginning of the demonstration, at Station (max 2:00 min).

5. QUESTIONS AND COMMENTS. Ask the judges to enter comments regarding the robot's performance and capabilities through a WorkspaceViewer contextual interface. Judges could enter textual or vocal comments if they wanted. This task occurs 20 minutes after the beginning of the presentation (max 4 min). One minute before the end, a 'Thank you' message is displayed on the screen, and our robot gives out one business card.

Figure 8 shows the dry run conducted minutes before the technical presentation, without the jury members. We started our robot in the Hall with a mission making it go through Alley, Corridor and Waterfront Room, to finally end its presentation at Station. Everything went smoothly and worked out fine.

Figure 8: Localization results with nobody around our robot.

Figure 9 shows what actually occurred during the technical demonstration. Compared to Figure 8, our robot had a lot of difficulty localizing its position when people and judges were around it. CARMEN, using laser range finder data, had difficulty finding reference points in its internal map. The estimated positions, represented with dots in Figure 9, are scattered and sometimes even outside the map. Additional fine-tuning of CARMEN to operate in crowded conditions would have been required. We had to manually reposition the robot on the map when this happened, and our robot arrived two minutes late at Alley. With our robot already following a tight schedule, it had a lot of trouble going through all of the planned tasks: they were dismissed in sequence to try to catch up with the schedule. Most of the time, we had to demonstrate the capabilities manually by modifying the initial plan and removing tasks so that the time constraints could fit in a new plan.

This went on until the end, even exceeding the 25-minute time constraint. It therefore turned out that localization performance had a huge impact on the overall performance of our robot. We knew that the time constraints were difficult to meet, but we did not expect such poor performance with localization. This revealed the influence of tightly coupling the planner and the localizer, and suggested that multiple localization capabilities may be required when navigating with too many people surrounding the robot, blocking the laser's field of view needed to get good position estimates.

Our public demonstration was made to be more interactive and entertaining. For instance, we made our robot interact with people by playing games (music or science trivia questions, fortune cookies). In very crowded conditions, positioning the camera in the direction of the speaker worked very well, as did the separation of sources, considering the very noisy conditions the robot was in. Looking at the log files, it is possible to clearly follow the conversations the robot had with people. With the event taking place outside the technical demonstration area, no map was made of the area, and our robot only wandered around, being stopped most of the time to interact with people. Considering the performance observed during the technical demonstration, having a map would not have helped the robot localize itself in the area (it was much more crowded than what our robot experienced during the technical demonstration).

Figure 9: Localization results during the technical presentation.

Overall, our robot was the winner of the event, finishing first in four categories (Perceptual Learning through Human Teaching; Perception, Reasoning and Action; Shared Attention, Common Workspace, Intent Detection; Integration Challenge), and second in one (Natural Language Understanding and Action Execution). It was also awarded the Public Choice Award.

3. Next iteration in designing advanced mobile robots

Our 2006 implementation showed increased reliability and capabilities of MBA (especially the DTW) in a stable and integrated implementation of the components described in Section 2. Our planner is capable of dealing with conditions (temporal constraint violations, opportunity detection, task cancellation, unknown waypoints) that would occur in real life settings, and within a computational architecture using distributed modules (compared to architectures with one centralized module for goal/task generation). Our robot is equipped with a dynamic viewer with voice and graphical interaction, contextualized based on the robot's active tasks.

However, this is only one step in coming up with the most efficient integration of these modalities. We now have the necessary tools for easy configuration of robot missions and for on- and off-line analysis of experienced situations (trajectories, requests, sound sources, pictures, internal states, etc.), useful in human-robot interaction experiments and for AI algorithms in unconstrained conditions. In future work, we want to use these tools to address different research issues, such as the influence of directed attention in vocal interaction, the preferred way of and time spent interacting with a robot, the most efficient strategy to navigate in a crowd or to find an unknown location, to schmooze and make acquaintance with people, and to compare different components for the robot's modalities (e.g., speech recognition software such as NUANCE and Sphinx, dialogue software such as CSLU and Collagen, SLAM algorithms such as CARMEN and vSLAM).

Still, we must remember that the intelligence manifested by a machine is limited by its physical, perceptual and reasoning capabilities. Using our robot entries to the AAAI 2000, 2005 and 2006 events, we were able to study the added value of integrating hardware and software components, energy versus weight versus power considerations, navigation capabilities, artificial audition and vision, task planning and scheduling, distributed decisional architectures, and visualization tools for evaluation in dynamic conditions. The software and decisional architectures can be ported and adapted to other platforms, but their strengths and limitations can only be adequately characterized when used on platforms with greater capabilities. We therefore need to add significant improvements to the platform to make even more progress. From what we observed in our trials with our robots, and in line with the current state-of-the-art in mobile robot development, we have identified and are working on two key innovations that would greatly benefit our objective of designing advanced mobile robots operating in everyday environments and interacting with humans.

• Compliance. A robot mechanically interacts with its environment when the dynamics of the coupled system (the robot and environment in contact) differ significantly from the dynamics of the robot alone [Buerger 2005, Hogan & Buerger 2005]. Not all robots mechanically interact with their environment. For instance, the vast majority of robots used for industrial applications do not interact significantly. In many cases, a limited amount of physical interaction can be tolerated without specific modeling or control. However, for complex robotic tasks (e.g., manipulation, locomotion), the lack of knowledge of precise interaction models, the difficulty of precisely measuring the task-associated physical quantities (e.g., position of contact points, interaction forces) in real-time, the finite sampling time of digital control loops and the non-collocation of sensors and transducers have negative effects on the performance and stability of robots using simple force or simple movement controllers. For robots to take a more significant role in human life, they must safely manage intentional and unintentional physical contact with humans, even while performing high-force tasks. These challenges occur when interacting with the environment produces coupled system dynamics that differ significantly from the dynamics of the robot system alone.

We are currently developing a new compliant actuator named the Differential Elastic Actuator (DEA), specifically designed to deal with the lack of knowledge of precise interaction models in natural settings and the difficulty of precisely measuring, in real-time, the forces and positions associated with complex robotic tasks like manipulation and locomotion. Compared to FSCA and SEA, the DEA [Lauria et al. 2008] uses a differential coupling instead of a serial coupling between a high impedance mechanical speed source and a low impedance mechanical spring. This results in a more compact and simpler solution, with similar performance. DEAs are high performance actuators providing a higher torque/encumbrance ratio, a higher torque/weight ratio, lower rotative inertia, lower rotation velocity of the flexible element, and improved effort transmission through the motor's chassis.

• Gesture in natural interaction. Natural interaction is mainly influenced by perception through senses like vision, audition and touch, and by actions like speech, graphics and also gesture.

Gesture, i.e., a motion made to express or help express thought, was not covered in our past iterations, and would greatly enhance the way the robot communicates with humans. Gestures can be made using arms and facial expressions, but also through the pose of the robot (raising itself up or moving down).

With these elements in mind, we are currently designing our third iteration of an interactive autonomous mobile robot platform. The platform is shown in Figure 10. The mobile base is a legged-tracked platform capable of changing the orientation of its four articulations. Each articulation has three degrees of freedom (DOF): it can rotate 360 degrees around its point of attachment to the chassis, can change its orientation over 180 degrees, and can rotate to propel the robot. This platform is inspired from the AZIMUT platform [Michaud et al. 2005], built using stiff transmission mechanisms, with motors and gearboxes placed directly at the attachment points of the articulations. Such design choices made the platform vulnerable to shocks when moving over rough terrain, especially with its articulations pointing down: an undetected obstacle touching the tip of an articulation would create an important reaction force at the articulation's attachment point to the chassis and inside its non-back-drivable gearbox. The new base uses DEA actuators for compliant locomotion control: 4 DEAs for the direction of its articulations (making it possible to guide the platform simply through touch), and 4 DEAs at the attachment points of the leg-tracks (for force control when the robot stands up). This should improve the locomotion capabilities of the platform, making it omnidirectional and intrinsically safe. The humanoid torso is capable of arm gestures and facial expressions. Its arms will use DEA actuators for safe manipulation of objects and interaction with people. All of these elements are currently being fabricated, assembled and tested, and will be integrated by the end of 2008.

Figure 10: Our third iteration in designing an advanced mobile robotic platform.

4. Conclusion

Mobile robotics is one of the greatest domains for systems engineering: it requires the integration of sensors, actuators, energy sources, embedded computing, decision-making algorithms, etc., all into one system. Only technologies and methodologies that work within the interdependent constraints of these elements, addressed in a holistic approach, can be useful. The integration work is influenced by technological factors as well as by the operating conditions of the robotic platform. With continuing technological progress, designs and tools must take into consideration possible extensions to the integration of the modular elements of the system. Since a robot is an embodied system, it has to deal with real world dynamics and issues related to the intended application. Therefore, integration directly influences scientific progress in mobile robotics, and it is important to address such challenges by designing new platforms and conducting real world field trials. The primary research contribution of this paper is to present what we have done so far with our AAAI 2006 entry, and what we are aiming to accomplish with our new design iteration. This puts in perspective what can be done and what remains to be solved.

The design of advanced mobile robotic platforms is still in a basic research phase [Shaw 2002], investigating ideas and concepts, putting initial structure on problems and framing critical research questions. Progress is accomplished through multiple iterations of increasing complexity, identifying new challenges and research issues as we move through these iterations. What we have identified with our AAAI 2006 robot entry at all levels, from structural to hardware and software components, has led to the design specifications of our new robot. It is by designing more advanced platforms that we will be able to measure progress at all these levels.

As a corollary, one of our objectives is to share our efforts by making our hardware and software components available. Advanced mobile robotics is a field so broad that it is not possible to be an expert in all related areas. Collaborative work is therefore a necessary element in our attempts, as a community, to significantly accelerate scientific progress of the field. More specifically, we plan to use our new robot to conduct comparative and integrated evaluations of robotic software capabilities and architectures on a common platform and under shared test conditions, thus eliminating biases in conducting such evaluations. This way, we should be able to move research activities from feasibility studies to comparative evaluations and human-robot interaction field studies, addressing an all new level with clear innovations in terms of motion, interaction and cognition capabilities, both individually and integrated.

References

[Arkin 1998] Arkin, R. C. 1998. Behavior-Based Robotics. The MIT Press.

[Asfour et al. 2006] Asfour, T.; Regenstein, K.; Azad, P.; Schroder, J.; Bierbaum, A.; Vahrenkamp, N.; and Dillmann, R. 2006. ARMAR-III: An integrated humanoid platform for sensory-motor control. In Proceedings IEEE/RAS International Conference on Humanoid Robotics, 169–175.

[Bischoff & Graefe 2004] Bischoff, R., and Graefe, V. 2004. HERMES: A versatile personal assistant robot. Proceedings of the IEEE, Special Issue on Human Interactive Robots for Psychological Enrichment, 1759–1779.

[Brachman 2006] Brachman, R. J. 2006. (AA)AI: more than the sum of its parts. AI Magazine 27(4):19–34.

[Bresina et al. 2002] Bresina, J.; Dearden, R.; Meuleau, N.; Ramkrishnan, S.; Smith, D.; and Washington, R. 2002. Planning under continuous time and resource uncertainty: A challenge for AI. In Proceedings 19th Conference on Uncertainty in AI, 77–84.

[Brock et al. 2005] Brock, O.; Fagg, A.; Grupen, R.; Platt, R.; Rosenstein, M.; and Sweeney, J. 2005. A framework for learning and control in intelligent humanoid robots. International Journal of Humanoid Robotics 2(3):301–336.

[Buerger 2005] Buerger, S. P. 2005. Stable, High-Force, Low-Impedance Robotic Actuators for Human-Interactive Machines. Ph.D. Dissertation, MIT.

[Cote et al. 2004] Cote, C.; Letourneau, D.; Michaud, F.; Valin, J.-M.; Brosseau, Y.; Raievsky, C.; Lemay, M.; and Tran, V. 2004. Code reusability tools for programming mobile robots. In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems, 1820–1825.

[Cote et al. 2006a] Cote, C.; Brosseau, Y.; Letourneau, D.; Raievsky, C.; and Michaud, F. 2006a. Using MARIE in software development and integration for autonomous mobile robotics. International Journal of Advanced Robotic Systems, Special Issue on Software Development and Integration in Robotics 3(1):55–60.

[Cote et al. 2006b] Cote, C.; Letourneau, D.; Raievsky, C.; Brosseau, Y.; and Michaud, F. 2006b. Using MARIE for mobile robot software development and integration. In Brugali, D., ed., Software Engineering for Experimental Robotics. Springer Tracts on Advanced Robotics.

[Do & Kambhampati 2003] Do, M., and Kambhampati, S. 2003. SAPA: A scalable multi-objective metric temporal planner. Journal of Artificial Intelligence Research 20:155–194.

[Edsinger-Gonzales & Weber 2004] Edsinger-Gonzales, A., and Weber, J. 2004. Domo: A force sensing humanoid robot for manipulation research. In Proceedings IEEE/RAS International Conference on Humanoid Robotics, 273–291.

[Gockley et al. 2004] Gockley, R.; Simmons, R.; Wang, J.; Busquets, D.; DiSalvo, C.; Caffrey, K.; Rosenthal, S.; Mink, J.; Thomas, S.; Adams, W.; Lauducci, T.; Bugajska, M.; Perzanowski, D.; and Schultz, A. 2004. Grace and George: Social robots at AAAI. Technical Report WS-04-11, AAAI Mobile Robot Competition Workshop, 15–20.

[Hogan & Buerger 2005] Hogan, N., and Buerger, S. P. 2005. Impedance and interaction control. In Robotics and Automation Handbook. CRC Press.

[Lauria et al. 2008] Lauria, M.; Legault, M.-A.; Lavoie, M.-A.; and Michaud, F. 2008. Differential elastic actuator for robotic interaction tasks. Submitted to IEEE International Conference on Robotics and Automation.

[Letourneau, Michaud, & Valin 2004] Letourneau, D.; Michaud, F.; and Valin, J.-M. 2004. Autonomous robot that can read. EURASIP Journal on Applied Signal Processing, Special Issue on Advances in Intelligent Vision Systems: Methods and Applications 17:1–14.

[Lienhart & Maydt 2002] Lienhart, R., and Maydt, J. 2002. An extended set of Haar-like features for rapid object detection. In Proceedings IEEE International Conference on Image Processing, volume 1, 900–903.

[Maxwell et al. 2004] Maxwell, B.; Smart, W.; Jacoff, A.; Casper, J.; Weiss, B.; Scholtz, J.; Yanco, H.; Micire, M.; Stroupe, A.; Stormont, D.; and Lauwers, T. 2004. 2003 AAAI robot competition and exhibition. AI Magazine 25(2):68–80.

[Michaud et al. 2001] Michaud, F.; Audet, J.; Letourneau, D.; Lussier, L.; Theberge-Turmel, C.; and Caron, S. 2001. Experiences with an autonomous robot attending the AAAI conference. IEEE Intelligent Systems 16(5):23–29.

[Michaud 2002] Michaud, F. 2002. EMIB: Computational architecture based on emotion and motivation for intentional selection and configuration of behaviour-producing modules. Cognitive Science Quarterly, Special Issue on Desires, Goals, Intentions, and Values: Computational Architectures 3-4:340–361.

[Michaud et al. 2005] Michaud, F.; Brosseau, Y.; Cote, C.; Letourneau, D.; Moisan, P.; Ponchon, A.; Raievsky, C.; Valin, J.-M.; Beaudry, E.; and Kabanza, F. 2005. Modularity and integration in the design of a socially interactive robot. In Proceedings IEEE International Workshop on Robot and Human Interactive Communication, 172–177.

[Michaud et al. 2005] Michaud, F.; Letourneau, D.; Arsenault, M.; Bergeron, Y.; Cadrin, R.; Gagnon, F.; Legault, M.-A.; Millette, M.; Pare, J.-F.; Tremblay, M.-C.; Lepage, P.; Morin, Y.; and Caron, S. 2005. Multi-modal locomotion robotic platform using leg-track-wheel articulations. Autonomous Robots, Special Issue on Unconventional Robotic Mobility 18(2):137–156.

[Michaud et al. 2007] Michaud, F.; Cote, C.; Letourneau, D.; Brosseau, Y.; Valin, J.-M.; Beaudry, E.; Raievsky, C.; Ponchon, A.; Moisan, P.; Lepage, P.; Morin, Y.; Gagnon, F.; Giguere, P.; Roux, M.-A.; Caron, S.; Frenette, P.; and Kabanza, F. 2007. Spartacus attending the 2005 AAAI conference. Autonomous Robots, Special Issue on AAAI Mobile Robot Competition 22(4):369–384.

[Montemerlo, Roy, & Thrun 2003] Montemerlo, M.; Roy, N.; and Thrun, S. 2003. Perspectives on standardization in mobile robot programming: The Carnegie Mellon navigation (CARMEN) toolkit. In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems, 2436–2441.

[Nau et al. 2003] Nau, D.; Au, T.; Ilghami, O.; Kuter, U.; Murdock, J.; Wu, D.; and Yaman, F. 2003. SHOP2: An HTN planning system. Journal of Artificial Intelligence Research 20:379–404.

[Nau, Ghallab, & Traverso 2004] Nau, D.; Ghallab, M.; and Traverso, P. 2004. Automated Planning: Theory & Practice. San Francisco, CA: Morgan Kaufmann Publishers Inc.

[Pratt & Williamson 1995] Pratt, G., and Williamson, M. 1995. Series elastic actuators. In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 1, 399–406.

[Shaw 2002] Shaw, M. 2002. What makes good research in software engineering? In Proceedings European Joint Conference on Theory and Practice of Software.

[Simmons et al. 2003] Simmons, R.; Goldberg, D.; Goode, A.; Montemerlo, M.; Roy, N.; Sellner, B.; Urmson, C.; Schultz, A.; Abramson, M.; Adams, W.; Atrash, A.; Bugajska, M.; Coblenz, M.; MacMahon, M.; Perzanowski, D.; Horswill, I.; Zubek, R.; Kortenkamp, D.; Wolfe, B.; Milam, T.; and Maxwell, B. 2003. GRACE: An autonomous robot for the AAAI robot challenge. AI Magazine 24(2):51–72.

[Smart et al. 2003] Smart, W. D.; Dixon, M.; Melchior, N.; Tucek, J.; and Srinivas, A. 2003. Lewis the graduate student: An entry in the AAAI robot challenge. Technical report, AAAI Workshop on Mobile Robot Competition, 46–51.

[Stiehl & Breazeal 2005] Stiehl, W., and Breazeal, C. 2005. Design of a therapeutic robotic companion for relational, affective touch. In Proceedings IEEE Workshop on Robot and Human Interactive Communication.

[Valin, Michaud, & Rouat 2006] Valin, J.-M.; Michaud, F.; and Rouat, J. 2006. Robust localization and tracking of simultaneous moving sound sources using beamforming and particle filtering. Robotics and Autonomous Systems 55:216–228.

[Valin, Rouat, & Michaud 2004] Valin, J.-M.; Rouat, J.; and Michaud, F. 2004. Enhanced robot audition based on microphone array source separation with post-filter. In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems, 2123–2128.

[Vaughan, Gerkey, & Howard 2003] Vaughan, R. T.; Gerkey, B. P.; and Howard, A. 2003. On device abstractions for portable, reusable robot code. In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems, 2421–2427.

[Werner 1999] Werner, M. 1999. Humanism and beyond the truth. Humanism Today, volume 13. http://www.humanismtoday.org/vol13/werner.html.