
Journal of Computing and Information Technology - CIT 17, 2009, 1, 1–16. doi:10.2498/cit.1001178

Iterative Design of Advanced Mobile Robots

François Michaud1, Dominic Létourneau1, Éric Beaudry2, Maxime Fréchette1, Froduald Kabanza2 and Michel Lauria1

1Department of Electrical Engineering and Computer Engineering, Université de Sherbrooke, Québec, Canada 2Department of Computer Science, Université de Sherbrooke, Québec, Canada

Integration of hardware, software and decisional components is fundamental in the design of advanced mobile robotic systems capable of performing challenging tasks in unstructured and unpredictable environments. We address such integration challenges following an iterative design strategy, centered on a decisional architecture based on the notion of motivated selection of behavior-producing modules. This architecture evolved over the years from the integration of obstacle avoidance, message reading and touch screen graphical interfaces, to localization and mapping, planning and scheduling, sound source localization, tracking and separation, speech recognition and generation on a custom-made interactive robot. Designed to be a scientific robot reporter, the robot provides understandable and configurable interaction, intention and information in a conference setting, reporting its experiences for on-line and off-line analysis. This paper presents the integration of these capabilities on this robot, revealing interesting new issues in terms of planning and scheduling, coordination of audio/visual/graphical capabilities, and monitoring the uses and impacts of the robot's decisional capabilities in unconstrained operating conditions. This paper also outlines new design requirements for our next design iteration, adding compliance to the locomotion and manipulation capabilities of the platform, and natural interaction through gestures using arms, facial expressions and the robot's pose.

Keywords: integration, decisional architecture, task planning and scheduling, human-robot interaction, iterative design

1. Introduction

The dream of having autonomous machines performing useful tasks in everyday environments, as unstructured and unpredictable as they can be, has motivated for many decades research in robotics and artificial intelligence. However, intelligent and autonomous mobile robots are still in what can be identified as research phases, i.e., mostly basic research and concept formulation, and some preliminary uses of the technology with attempts in clarifying underlying ideas and generalizing the approach [28]. The challenge in making such a dream become a reality lies in the intrinsic complexities and interdependencies of the necessary components to be integrated in a robot. A mobile robot is a system with structural, locomotive, sensing, actuating, energizing and processing capabilities, which set the resources available to deal with the operating conditions of the environment. Such resources are exploited and coordinated based on decisional processes and higher cognitive functions, according to the intended tasks.

In his Presidential Address at the 2005 AAAI (American Association for Artificial Intelligence) Conference, Ronald Brachman argued that Artificial Intelligence (AI) is a system science that must target working in the wild, messy world [4]. This was the initial goal of AI, which revealed to be too difficult to address with the technology of 50 years ago, leading to the creation of more specialized subfields. The same can be said about the scientific community. The underlying complexities in these disciplines are difficult enough to resolve that a "divide-and-conquer" approach is justified. This explains why only a few initiatives around the world tackle this challenge directly. But those that do always identify new ways of solving the integration issues, outlining guidelines and specifications for new research questions or for new technologies aiming to improve robotic capabilities.

Continuous technological progress makes it possible to push the state-of-the-art in robotics closer to its use in day-to-day applications, and it is by addressing the integration challenges directly that such an objective can and will be accomplished.

Designing a mobile robot that must operate in public settings probably addresses the most complete set of issues related to autonomous and interactive mobile robots, with system integration playing a fundamental role at all levels. A great variety of robots have been and are currently being designed to operate in natural settings, interacting with humans in the wild, messy world. For the most recent and pertinent related work to our project, we can identify: non-mobile child-size humanoid robots (e.g., Kaspar, from the RobotCub project, to investigate the use of gestures, expressions, synchronization and imitation; Huggable [31], equipped with full-body sensate skin and silent voice coil-type electromagnetic actuators with integrated position, velocity and force sensing means); adult-size humanoid torsi (e.g., Dexter [6], for grasping and manipulation using tactile load cells on the fingers, investigating decomposition-based planning, knowledge acquisition and programming by demonstration; Domo [12], using stereo vision, force sensing compliant actuators (FSCA) and series elastic actuators (SEA) [27] to investigate alternative approaches to robot manipulation in unstructured conditions); differential-drive platforms with manipulators (e.g., B21, PeopleBot and Care-O-Bot platforms, usually equipped with one robotic manipulator); differential-drive mobile torsi (e.g., Robonaut, from NASA, installed on a Segway Robotic Mobility Platform base and teleoperated; the Battlefield Extraction-Assist Robot BEAR from Vecna Robotics-US, using a leg-track/dynamic balancing platform to carry fully-weighted human casualties from a teleoperation base); and omnidirectional humanoid bases (e.g., ARMAR III [2], using load cells, torque sensors, tactile skin, stereo vision and a six-microphone array in the head, with a three-tier (planning, coordination, execution) control architecture to accomplish service tasks in a kitchen; HERMES [3], combining stereo vision, kinesthetic, tactile array bumper and single omnidirectional auditory sensing with natural spoken language (input via voice, keyboard or e-mail) in a situation-oriented skill-based architecture).

Even though each of these robots presents interesting functionalities in terms of motion (omnidirectional, articulated torso) and interaction skills (gestures, expressions, tactile sensing, force-controlled actuators, stereo vision, natural language, manipulating objects and collaborating with humans in task-oriented applications), they do not go far in terms of cognition (e.g., autonomous decisions in open settings, adaptation to a variety of situations). They also make compromises on the advanced capabilities of the integrated elements: not all of them use state-of-the-art technologies. These platforms are however necessary in making significant contributions for moving toward higher levels of cognition (e.g., future research with such platforms targets the implementation of a motivational system with homeostatic balance between basic drives and a cognitive layer, plus incremental learning of sensorimotor experiences [12]).

In opposition, our work initiated from cognitive architectural considerations, and moved to the software and hardware requirements of a mobile robot operating in real life settings. EMIB (Emotion and Motivation for Intentional selection and configuration of Behavior-producing modules) was our first computational architecture used to integrate all necessary decisional capabilities of an autonomous robot [22]. It was implemented on a Pioneer 2 robot entry in the 2000 AAAI Mobile Robot Challenge. A robot had to start at the entrance of the conference site, find the registration desk, register, perform volunteer duties (e.g., guard an area) and give a presentation [18]. Our team was the first and only one to attempt the event from start to end, using sonars as proximity sensors, navigating in the environment by reading written letters and symbols, using a pan-tilt-zoom (PTZ) camera, interacting with people through a touch-screen interface, displaying a graphical face to express the robot's emotional state, determining what to do next using a finite-state machine (FSM), recharging itself when needed, and generating a HTML report of its experience [19]. We then started to work on a more advanced platform that we first presented in the 2005 AAAI Mobile Robot Competition [21, 20]. Our focus was on integrating perceptual and decisional components addressing all four issues outlined during the AAAI events over the years, i.e.: 1) the robust integration of different software packages and intelligent decision-making capabilities; 2) natural interaction modalities in open settings; 3) adaptation to environmental changes for localization; and 4) monitoring/reporting decisions made by the robot [13, 30, 18, 29].

Trials conducted at the 2005 AAAI event helped establish new requirements for providing meaningful and appropriate modalities for reporting and monitoring the robot's states and experiences. When operating in open settings, interactions are rapid, diverse and context-related. Having an autonomous robot determine on its own when and what it has to do based on a variety of modalities (time constraints, events occurring in the world, requests from users, etc.) also makes it difficult to understand the robot's behavior just by looking at it. Therefore, we concentrated our integration effort in 2006 on designing a robot that can interact and explain, through speech and graphical displays, its decisions and its experiences as they occur (for on-line and off-line diagnostics) in open settings. This paper first presents the software/hardware components of our 2006 AAAI robot entry, its software and decisional architectures, and the technical demonstration made at the conference along with the interfaces developed for reporting the robot's experiences. This outlines the progress made so far in addressing the integration challenge of designing autonomous robots. The paper then describes the design specification of our new advanced mobile robot platform, derived from what we have identified as critical elements in making autonomous systems operate in everyday environments.

2. A Scientific Robot Reporter

Our 2006 AAAI robot entry, shown in Figure 1, is a custom-built robot equipped with a SICK LMS200 laser range finder, a Sony SNC-RZ30N 25X pan-tilt-zoom color camera, a Crossbow IMU400CC inertial measurement unit, an array of eight microphones placed in the robot's body, a touch screen interface, an audio system, one on-board computer and two laptop computers. The robot is equipped with a business card dispenser, which is part of the robot's interaction modalities. A small camera (not shown in the picture) was added on the robot's base to allow the robot to detect the presence of electric outlets. The technique used is based on a cascade of boosted classifiers working with Haar-like features [17] and a rule-based analysis of the images.

Figure 1. Our 2006 AAAI robot entry (front view, back view).

Figure 2 illustrates the robot's software architecture. It integrates Player for its sensor and actuator abstraction layer [34] and CARMEN (Carnegie Mellon Robot Navigation Toolkit) for path planning and localization [24]. Also integrated, but not shown in the figure, are Stage/Gazebo for 2D and 3D simulation, and the Pmap library¹ for 2D mapping, all designed at the University of Southern California. RobotFlow and FlowDesigner (FD) [8] are also used to implement the behavior-producing modules, the vision modules for reading messages [16] and SSLTS, our real-time sound source localization, tracking [32] and separation [33] system. For speech recognition and dialogue management, we interfaced the CSLU toolkit (http://cslu.cse.ogi.edu/toolkit/). One key feature of CSLU is that each grammar can be compiled at runtime, and it is possible to use dynamic grammars that change on-the-fly before or while the dialogue system is running. Software integration of all these components is made possible using MARIE, a middleware framework oriented towards developing and integrating new and existing software for robotic systems [10, 9].

Figure 2. Our robot's software architecture.

¹ http://robotics.usc.edu/ahoward/pmap/

2.1. Decisional Architecture and Modules

For our participation in the AAAI 2006 Mobile Robot Competition, our robot is designed to be a scientific robot reporter, in the sense of a human-robot interaction research assistant. The objective is to have our robot provide understandable and configurable interaction, intention and information in unconstrained environmental conditions, reporting the robot's experiences for scientific data analysis.

The robot is programmed to respond to requests from people, with only one intrinsic goal: wanting to recharge when its energy is getting low (by either going to an outlet identified on the map or by searching for one, and then asking to be plugged in). Our robot's duty is to address these requests to the best of its capabilities and what is 'robotically' possible. Such requests may be to deliver a written or a vocal message to a specific location or to a specific person, to meet at a specific time and place, to schmooze, etc. The robot may receive multiple requests at different periods, and has to autonomously manage what, when and how it can satisfy them.

With our robot, we assume that the most appropriate approach for making a robot navigate in the world and accomplish various tasks is through the use of behavior-producing modules (also known as behavior-based systems [1]). As a result, representation and abstract reasoning working on top of these modules become a requirement, with the objective of being able to anticipate the effects of decisions while still adapting to the inherently complex properties of open and unconstrained conditions of natural living environments.

The decisional architecture developed for our robot's decision-making capabilities is shown in Figure 3. It is based on the notion of motivated selection of behavior-producing modules. We refer to it as MBA, for Motivated Behavioral Architecture [21, 20]. It is composed of three principal components:

Figure 3. Our robot's decisional architecture.

1. Behavior-producing modules (BPMs) define how particular percepts and conditions influence the control of the robot's actuators. Their name represents their purpose. The actual use of a BPM is determined by an arbitration scheme realized through the Arbitration and Behaviour Interfaces module (implementing in our case a priority-based/subsumption arbitration), and by the BPM activation conditions derived by the Behaviour Selection and Configuration module (explained later).

2. Motivational sources (or Motivations, serving to make the robot do certain tasks) recommend the use or the inhibition of tasks to be accomplished by the robot. Motivational sources are categorized as either instinctual, rational or emotional. This is similar to considering that the human mind is a "committee of the minds," with instinctual, rational, emotional, etc. minds competing and interacting [35]. Instinctual motivations provide basic operation of the robot using simple rules. Rational motivations are more related to cognitive processes, such as navigation and planning. Emotional motivations monitor conflictual or transitional situations between tasks. The importance of each of these influences explains why manifested behavior can be more influenced by direct perception, by reasoning or by managing commitments and choices. By distributing motivational sources and distinguishing their roles, it is possible to exploit more efficiently the strengths of various influences regarding the tasks the robot must accomplish, while not having to rely on only one of them for the robot to work. This is similar in essence to the basic principles of behavior-based control, but at a higher abstraction level.

3. The Dynamic Task Workspace (DTW) serves as the interface between motivational sources and BPMs. Through the DTW, motivations exchange information asynchronously on how to activate, configure and monitor BPMs. The interface between the BPMs and the motivational sources is done through tasks in the DTW. Tasks are data structures that are associated with particular configurations and activations of one or multiple behaviors. The DTW organizes tasks in a tree-like structure according to their interdependencies, from high-level/abstract tasks (e.g., 'Deliver message') to primitive/BPM-related tasks (e.g., 'Avoid'). Motivations can add and modify tasks by submitting modification requests and queries, or subscribe to events regarding a task's status.

By exchanging information through the DTW, motivations are kept generic and independent from each other, allowing motivations to distributively come up with a behavior configuration based on the capabilities available to the robot. For instance, one instinctual motivational source may monitor the robot's energy level to issue a recharging task in the Dynamic Task Workspace, which activates a 'Locate Electrical Outlet' behavior that would make the robot detect and dock in a charging station. Meanwhile, if the robot knows where it is and can determine a path to a nearby charging station, a path planning rational motivation can add a subtask of navigating to this position, using a 'Goto' behavior. Otherwise, the 'Locate Electrical Outlet' behavior will at least allow the robot to recharge opportunistically, when the robot perceives a charging station nearby.

Only instinctual and rational motivations are considered in this study, with rational motivations having greater priority over instinctual ones in case of conflicts. Survive makes the robot move safely in the world while monitoring its energy level. Planner determines which primitive tasks, and in which sequence, are necessary to accomplish higher-level tasks under temporal constraints and the robot's capabilities (as defined by BPMs). This motivational source is detailed in Section 2.1.1. Navigator determines the path to a specific location to accomplish tasks posted in the DTW. Request Manager handles tasks requested by the users via the touch screen interface or from vocal commands. Task requests are then processed by Planner and added to the actual plan, if appropriate.

With multiple tasks being issued by the motivational sources, the Behaviour Selection and Configuration module determines which behaviors are to be activated according to recommendations made by motivational sources, with or without a particular configuration (e.g., a destination to go to). A recommendation can either be negative, neutral or positive, or take on real values within this range regarding the desirability of the robot to accomplish specific tasks. Activation values reflect the resulting robot's intentions derived from interactions between the motivational sources. Behavior use and other information coming from behaviors that can be useful for task representation and monitoring are also communicated through the Behaviour Selection and Configuration module.

2.1.1. Task Planning Motivation

This motivational source invokes, when a new task is placed in the DTW, a planning algorithm that combines principles from the SHOP2 HTN planning algorithm [25] and SAPA [11]. As in SHOP2, we specify a planning domain (e.g., the domain of attending a conference at AAAI) by describing the robot's primitive behaviors in terms of template tasks and methods for recursively decomposing template tasks down to primitive tasks, which are atomic actions. For instance, we can specify that the task of making a presentation at location px is from time t1 to time t2, and that it can be decomposed into the simpler subtasks of going to px and presenting at time t1. Decomposition of tasks into simpler ones is given with preconditions under which the decomposition is logically sound, and time constraints for ensuring its consistency.

Given such a set of task specifications, the planning algorithm consists of searching through the space of tasks and world states. More specifically, starting from a given set of initial tasks and a current state describing the robot state and environment situation, the planner explores a space of nodes where each node represents: the current list of subtasks; the current robot state and environment situation; and the current plan (initially empty). A current node is expanded into successors by decomposing one of the current tasks into simpler ones (using the specified task decomposition method), or by validating a current primitive task against the current world state (this is done by checking its precondition and updating the current world state using the primitive task's effects) and adding it to the current plan. The search process stops when a node is reached with no more tasks to decompose. On a node, there can be different ways of decomposing a task and different orders in which to decompose them; backtracking is invoked to consider the different alternatives until obtaining a solution. This can be a source of exponential blow-up during the search process. By carefully engineering the decomposition methods to convey some search control strategy, it is possible to limit this state explosion [25].

Task Planning with Temporal Constraints. A normal HTN planning algorithm does not consider temporal constraints [25, 26]. To implement this, we modified the basic HTN planning algorithm by: (a) adding a current-time variable into the representation of a current state during search; (b) specifying time constraints based on this variable in the preconditions of task decomposition methods and of primitive tasks; (c) allowing the specification of conditional effect updates for primitive tasks based on this variable. That way, we extend the HTN concept to support time-constrained tasks by adding temporal constraints at each decomposition level. The planner can use these temporal constraints to add partial orders on tasks. These partial orders reduce the search space and accelerate the plan generation process. These temporal constraints can also be transferred when a high-level task is decomposed into lower-level tasks.

When defining temporal intervals in the domain specification, care must be taken to establish a good compromise between safety and efficiency for task execution. Being too optimistic may cause plan failure because of lack of time. For instance, assuming that the robot can navigate at high speed from one location to the other will cause a plan to fail if unpredictable events slow down the robot. The solution is to be conservative in the domain model and assume the worst case scenario. On the other hand, being too conservative may lead to no solution. Therefore, temporal intervals in the domain model are specified using an average speed (0.14 m/s) lower than the real average speed (0.19 m/s) of the robot.

Over the last years, efforts have been made in creating formal techniques for planning under temporal and resource uncertainty [5]. To keep our approach simple and to avoid having to deal with complex models of action duration, we use the following heuristic: planning is initiated first using a conservative action duration model and, when a possible opportunity (using the temporal information associated with the tasks) is detected, the planner is reinvoked to find a better plan. In other words, if the robot proceeds faster than planned (i.e., the end of the updated projected action is earlier than the end of the planned action), it is possible that a better plan exists, and replanning therefore occurs.
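The interplay between the current-time variable, a temporal precondition and the conservative speed model can be sketched as follows. This is not the SHOP2/SAPA-based planner itself, only a toy decomposition method; the state layout, the distance value and the function names are our assumptions, while the two speeds come from the text above.

```python
# Sketch of one HTN decomposition method with a current-time variable
# (illustrative; the actual planner follows SHOP2 [25] and SAPA [11]).

CONSERVATIVE_SPEED = 0.14  # m/s, deliberately below the real 0.19 m/s average

def travel_duration(dist_m):
    """Worst-case travel time under the conservative domain model."""
    return dist_m / CONSERVATIVE_SPEED

def decompose_presentation(state, px, t1, t2):
    """Decompose 'present at px during [t1, t2]' into primitive tasks,
    checking the temporal precondition against the current-time variable."""
    now = state["current_time"]
    d = travel_duration(state["distance_to"][px])
    if now + d > t1:                 # too late to arrive on time:
        return None                  # this decomposition method fails
    return [
        ("Goto", px, now, now + d),  # primitive action with time stamps
        ("Present", px, t1, t2),
    ]

state = {"current_time": 0.0, "distance_to": {"Alley": 42.0}}
plan = decompose_presentation(state, "Alley", t1=600.0, t2=870.0)
```

If the robot later proceeds faster than the conservative estimate, the projected end of `Goto` moves earlier than planned, which is exactly the kind of opportunity that triggers the replanning heuristic described above.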

Time Windowing. Using a technique borrowed from SAPA [11], our planner post-processes the plan generated by the search process to obtain a plan with time windowing for actions. During search, each node is associated with a fixed time stamp (that is, the current-time variable), and at the end of the search process we obtain a plan consisting of a sequence of actions, each assigned a time stamp indicating when it should be executed. This sequence of actions is post-processed based on time constraints in the domain specification (i.e., constraints attached to task decomposition methods and primitive tasks) to derive time intervals within which actions can be executed without jeopardizing the correctness of the plan.

Task Filtering and Priority Handling. In a traditional planning setting, a list of initial tasks is considered as conjunctive goals. If the planner fails to find a plan that achieves them all, it reports failure. In the context of the AAAI Challenge, as well as in many real life situations, we expect the robot to accomplish as many tasks as possible, with some given preferences among the tasks. For instance, the robot may fail to deliver one message at the right place, but successfully deliver another one. This would be acceptable depending on the environment conditions.

We implement a robot mission as a list of tasks, each associated with a degree of preference or priority level. Then we iteratively use the HTN planner to obtain a plan for a series of approximations of the mission, with decreasing levels of accuracy. Specifically, we initially call the planner with the entire mission; if it fails to compute a plan within a deadline set empirically in the robot architecture (typically 1 second), a lowest-priority task is removed from the mission and the planner is called with the remaining mission. This is repeated until a solution plan is found. If the mission becomes empty before a solution is found, failure is returned (the entire mission is impossible). Clearly, this model can easily be configured to include vital tasks that cause failure whenever any of them is not achievable. We also use some straightforward static analysis of the mission to exclude, for instance, low-priority tasks that apparently conflict with higher-priority ones.

Anytime Task Planning. Similarly to SAPA [11] and SHOP2 [25], our planner has an anytime planning capability, which is important in mobile robotic applications. When a plan is found, the planner tries to optimize it by testing alternative plans for the same mission until the empirical maximum planning time is reached.

On-line Task Planning. The integration of planning capabilities into a robotic system must take into account real-time constraints and unexpected external events that conflict with previously generated plans. To reduce processing delays in the system, our planner is implemented as a library. The MARIE application adapter that links the planner to the rest of the computational architecture loads the planner library, its domain and its world representation (e.g., operators, preconditions and effects) at initialization. The planner remains in memory and does not have to be loaded each time it is invoked. A navigation table (providing the distances between each pair of waypoints) also remains in memory and is dynamically updated during the mission. External events are accounted for by monitoring the execution of a plan and by validating the preconditions and time constraints of remaining actions in the plan. Violated preconditions activate re-planning.

The robot can receive task requests at any time during a mission. This requires the robot to update its plan even if it is not at a specified waypoint in its world representation. When generating a task plan, the duration of going from one waypoint to another is instantiated dynamically based on the current environment. That is, the duration of a Goto(X) action is not taken from the navigation table, but is rather estimated from the actual distance to the targeted waypoint X, as provided by an internal path-planning algorithm. Therefore, Planner only has to deal with high-level navigation tasks, and it is not necessary to plan intermediate waypoints between two waypoints that are not directly connected. To estimate the duration for navigating from one waypoint to another, the planner uses a navigation table of size n², where n is the number of defined waypoints. This table is initialized by computing the minimum distance between waypoints with a classic A* algorithm. Each time a new waypoint is added on the map, the distance table is updated dynamically.
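The task-filtering loop described above (drop the lowest-priority task and retry until a plan is found, or the mission is empty) can be sketched as follows. The function and variable names are ours, the toy planner stands in for the HTN planner, and the deadline handling is reduced to a parameter the real system would enforce with a timeout.

```python
# Sketch of the task-filtering loop: drop the lowest-priority task until a
# plan is found (names, toy planner and deadline handling are assumptions).

def plan_mission(mission, plan_fn, deadline_s=1.0):
    """mission: list of (task, priority) pairs, higher priority = keep longer.
    plan_fn: callable returning a plan or None; the real system would abort
    it after deadline_s seconds (timeout enforcement omitted here)."""
    tasks = sorted(mission, key=lambda t: t[1], reverse=True)  # high first
    while tasks:
        plan = plan_fn([name for (name, _) in tasks])
        if plan is not None:
            return plan, [name for (name, _) in tasks]
        tasks.pop()            # remove the lowest-priority task and retry
    return None, []            # entire mission impossible

# Toy stand-in planner that can only fit two tasks in the schedule.
toy_planner = lambda names: names if len(names) <= 2 else None

plan, kept = plan_mission(
    [("DeliverMessage", 3), ("Schmooze", 1), ("Meet", 2)], toy_planner)
# The lowest-priority task ("Schmooze") is dropped; the other two are planned.
```

The "vital task" variant mentioned in the text would simply return failure as soon as a task flagged vital is about to be dropped.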

2.1.2. Interaction Modalities

With a distributed cognitive architecture integrating a high number of capabilities to be used in open conditions, it is difficult to assess what is going on simply by looking at the robot's behavior, or by analyzing log files off-line. More dynamic and interactive tools are required. Here is the set of modalities designed to improve our understanding of the integrated modalities (vision, audition, graphical, navigation) on our robot.

• Visualization tools of decisional components. The underlying objective of such tools is to facilitate integration work by offering visualization of the decisional components added to the robot. This requires a coherent and user-friendly representation of the integrated components, so that developers can concentrate on their combined usage instead of spending time extracting and interpreting data in log files. In 2005, we developed a log viewing application, allowing us to monitor off-line the complete set of states of the robot through log files created by the different software components [21]. In 2006, we extended this tool to provide a dynamic and real-time view of what the robot is actually doing and planning to do, on-line. We call this updated version the WorkspaceViewer. With the WorkspaceViewer, it is now possible to display, directly on our robot's graphical interface, contextual information according to its current plan (e.g., behavioral configuration, map with dynamic places, active tasks). In addition, we can still use the basic off-line log viewing application to replay logs off-line.

Figure 4 shows a representation of the WorkspaceViewer's main window. The upper section contains a timeline view of DTW events (first line), planner events (second line), and behaviors' activations and exploitations (third line). The bottom section shows detailed information according to the position of the vertical marker on the timeline: a list of DTW's tasks and properties (first window from the left), active motivations (second window), the current plan (third window), the map of the environment and the trajectory of the robot (fourth window), and the behaviors with their activations and exploitations (under the first window). The WorkspaceViewer is directly connected to the DTW and displays information as it becomes available, in real-time. The WorkspaceViewer is also linked to the Request Manager motivation to handle user requests.

• Visualization of the SSLTS results. Audio data is difficult to monitor in dynamic and open conditions: it is not persistent as a

Figure 4. WorkspaceViewer's main window.

visual feature can be, for instance. An interface representing where the sound sources around the robot are detected, and what was recorded by the robot, revealed to be necessary from what we experienced during our 2005 participation. Figure 5 illustrates the interface developed. The upper part shows the angle of the perceived sound sources around the robot, in relation to time. The interface shows the perceived sound sources in real-time, using dots of a distinct color. The sound sources are saved, and the user can select them and play back what the SSLTS generated. This interface proved to be very valuable for explaining what the robot can hear, for constructing a database of typical audio streams in a particular setting, and for analyzing the performance of the entire audio module (allowing us to diagnose potential difficulties that the speech recognition module has to face).

For instance, more than 6600 audio streams were recorded over the twelve hours of operation of the robot in open settings at the AAAI 2006 conference, held at the Seaport Hotel and World Trade Center in Boston. From this set of audio streams, we observed that around 61% were audio content (people talking near the robot, sounds of different sorts) adequately processed by our SSLTS system (i.e., the audio streams were correctly separated and located, and are understandable by a human listener). However, they were not related to specific interaction with the robot, and therefore revealed to be useless for interacting with our robot. 3.5% of the audio streams were inaudible to a human listener, and an additional 34% were either very short (e.g., caused by a noise burst or laughter) or truncated (e.g., when the robot starts talking, to avoid making the robot hear itself talk; sound amplitude too low). Implementation problems caused these limitations, as we diagnosed after the event. This left around 1.5% of audio streams intended for specific interaction with the robot adequately processed by the SSLTS system. All were correctly processed using CSLU in specific interactions our robot had with human interlocutors. This clearly demonstrates the challenge of making an artificial audition system for operation in open conditions. Such auditory capability working in real-time in such conditions has not yet been demonstrated on other robots. The performance observed is impressive, but the challenges to overcome to make it more robust and efficient are still quite important. We will continue to improve SSLTS' performance by not conducting speech recognition on audio streams that are too small (i.e., < 1 sec) and by making it possible to concurrently run multiple speech recognition processes (currently only one stream can be processed at a time).

Figure 5. Track audio interface.

• Multimodal interaction interfaces. It is sometimes difficult to know exactly which task is prioritized by the robot at a given time, making the interlocutors unable to know what the robot is actually doing and to evaluate its performance. So we made the robot indicate its intention verbally and also graphically (in case, for whatever reasons, the interlocutor is not able to understand the message). Figure 6 illustrates the graphical interface with which our robot requests assistance to find a location in the convention center.

Figure 6. Graphical display for an assistance request to find a location (in laboratory conditions).

The graphical interfaces of the robot were made so that users can indicate, through push buttons (using the touch screen), what they want our robot to do. We also made it possible for the users to make these requests verbally, saying out loud the names of the buttons. To facilitate the recognition process, specific grammar and dialogue managers are loaded in CSLU for each graphical mode. The audio and graphical interactions are therefore tightly integrated together.

2.2. Demonstration and Results

Our robot demonstrated its capabilities in the AAAI 2006 Human-Robot Interaction Event, evaluated in technical and public demonstrations. Our robot entered five of the seven categories by doing the following:

• Natural Language Understanding and Action Execution: following requests from humans, written or verbal, to make the robot do different tasks.

• Perceptual Learning Through Human Teaching: learning of a location, specified by humans or identified by the robot (i.e., an electrical outlet), and being able to remember it.

• Perception, Reasoning, and Action: temporal reasoning, planning and scheduling of tasks and intents.

• Shared Attention, Common Workspace, Intent Detection: directing the camera view in the direction of the speaker, communicating temporal constraints regarding tasks, and use of the dynamic context-configured viewer, on-line and off-line.

• Integration Challenge Categories 3 to 6: Robot Scientific Reporter/Assistant demo, with understandable interaction, decision and situations experienced in unconstrained conditions.

Figure 7. Our robot's intended trajectory for the technical presentation.

Our technical demonstration, programmed to last 25 minutes (setting an overall fixed time constraint for the mission), consisted of five phases done in the area represented in Figure 7:

1. INTRODUCTION. Presentation (max 4:30 min) at Alley of our robot's features: track audio viewer for separated sources; directing the camera view toward the speaker, with snapshots memorized every second; graphic displays for user input; context-based grammar; pre-mapping of the area; Gantt chart representation of the plan.

2. NAVIGATION. Go to Corridor to receive a message (voice attachment or text) to be delivered at Waterfront Room. No time constraints are specified for this task. A specific graphical interface is displayed by the WorkspaceViewer to get the message. The Planner inserts this task, if time permits, between existing tasks (which are time constrained).

3. PLANNED RECHARGE. Initiate a recharging task, asking somebody to plug the robot into an electrical outlet. This should occur no later than 25 minutes after the beginning of the presentation, at Station. However, Station is an unknown location when the demonstration starts. There are two ways the location can be determined: from user input with the touch screen interface, or automatically, if an electrical outlet is perceived.

4. PLAYBACK. Accelerated replay of what the robot experienced during the demonstration. The WorkspaceViewer program starts an automatic replay from the beginning of the demonstration, and stops at important events. Judges can then see a snapshot of the active behaviors, the plan, the trajectory

of the robot and active tasks. Playback occurs 15 minutes after the beginning of the demonstration, at Station (max 2:00 min).

5. QUESTIONS AND COMMENTS. Ask judges to enter comments regarding the robot's performance and capabilities through a WorkspaceViewer contextual interface. Judges could enter textual or vocal comments if they want. This task occurs 20 minutes after the beginning of the presentation (max 4 min). One minute before the end, a 'Thank you' message is displayed on the screen, and our robot gives one business card.

Figure 8 shows the dry run conducted minutes before the technical presentation, without the jury members. We started our robot in the Hall with a mission making it go through Alley, Corridor, Waterfront Room, to finally end its presentation at Station. Everything went smoothly and worked out fine.

Figure 8. Localization results with nobody around our robot.

Figure 9 shows what actually occurred during the technical demonstration. Compared to Figure 8, our robot had a lot of difficulty localizing its position when people and judges were around it. CARMEN, using laser range finder data, had difficulty finding reference points on its internal map. The estimated positions, represented with dots in Figure 9, are scattered and sometimes even outside the map. Additional fine-tuning of the parameters of the probabilistic odometry model in the CARMEN configuration file would have been required to improve localization performance in crowded conditions. We had to manually reposition the robot on the map when this happened, and our robot arrived two minutes late at Alley. With our robot already following a tight schedule, it had a lot of trouble going through all of the planned tasks: they were dismissed in sequence to try to catch up with the schedule. Most of the time, we had to manually demonstrate the capabilities by modifying the initial plan and removing tasks so that the time constraints could fit in a new plan. This went on until the end, even exceeding the 25 minute time constraint. It turned out that localization performance had a huge impact on the overall performance of our robot. We knew that the time constraints were difficult to meet, but we did not expect such poor performance with localization. This revealed the influence of tightly coupling the planner and the localizer, and suggests that multiple localization capabilities may be required when navigating with many people surrounding the robot, blocking the laser's field of view needed to get good position estimates.

Figure 9. Localization results during the technical presentation.

Our public demonstration was made to be more interactive and entertaining. For instance, we made our robot interact with people by playing games (music or science trivia questions, fortune cookies). In very crowded conditions, positioning the camera in the direction of the speaker worked very well, as did the separation of sources, considering the very noisy conditions in which the robot was. Looking at the log files, it was possible to clearly follow conversations the robot had with people. With the event taking place outside the technical demonstration area, no map was made of the area, and our robot only wandered around, being stopped most of the time to interact with people. Given the performance observed during the technical demonstration, having a map would not have helped the robot localize itself in the area (it was much more crowded than our robot had experienced during the technical demonstration).

Still, we must remember that the intelligence manifested by a machine is limited by its physical, perceptual and reasoning capabilities.
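The schedule repair described above, dismissing tasks in sequence so that the remaining time constraints could still be met, can be sketched as a simple greedy procedure over tasks with durations, deadlines and priorities. This is only an illustrative sketch, not the planner actually used on the robot; the task names, durations, deadlines and priorities below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: float   # minutes
    deadline: float   # latest allowed finish time, minutes after mission start
    priority: int     # higher = more important to keep in the plan

def feasible(tasks, start=0.0):
    """Check that executing the tasks back-to-back, in earliest-deadline-first
    order, meets every deadline."""
    t = start
    for task in sorted(tasks, key=lambda k: k.deadline):
        t += task.duration
        if t > task.deadline:
            return False
    return True

def repair(tasks, start=0.0):
    """Drop the lowest-priority tasks until the remaining schedule is feasible."""
    kept = sorted(tasks, key=lambda k: k.priority, reverse=True)
    while kept and not feasible(kept, start):
        kept.pop()  # discard the least important remaining task
    return kept

# Hypothetical mission: feasible when started on time (start=0.0), but a
# 4-minute delay forces the repair to dismiss the lowest-priority task.
mission = [
    Task("introduction", 4.5, 9.0, 3),
    Task("navigation", 8.0, 15.0, 1),
    Task("planned-recharge", 10.0, 25.0, 2),
]
```

With these invented numbers, a delayed start (`repair(mission, start=4.0)`) keeps the introduction and recharge tasks and drops the navigation task, mirroring how tasks had to be removed so that the time constraints could fit in a new plan.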
In the end, our robot was the overall winner of the event, finishing first in four categories (Perceptual Learning through Human Teaching; Perception, Reasoning and Action; Shared Attention, Common Workspace, Intent Detection; Integration Challenge), and second in one (Natural Language Understanding and Action Execution). It was also awarded the Public Choice Award.

3. Next Iteration in Designing Advanced Mobile Robots

Our 2006 implementation showed increased reliability and capabilities of MBA (especially the DTW) in a stable and integrated implementation of the components described in Section 2. Our planner is capable of dealing with conditions that would occur in real-life settings (temporal constraint violations, opportunity detection, task cancellation, unknown waypoints), within a computational architecture using distributed modules (compared to architectures with one centralized module for goal/task generation). Our robot is equipped with a dynamic viewer with voice and graphical interaction, contextualized based on the robot's active tasks.

However, it is only one step in coming up with the most efficient integration of these modalities. We now have the necessary tools for easy configuration of robot missions and for on- and off-line analysis of experienced situations (trajectories, requests, sound sources, pictures, internal states, etc.), useful in human-robot interaction experiments and AI algorithms in unconstrained conditions. In future work, we want to use these tools to address different research issues, such as the influence of directed attention in vocal interaction, the preferred way and time spent interacting with a robot, the most efficient strategy to navigate in a crowd or to find an unknown location, how to schmooze and make acquaintance with people, and how to compare different components for the robot's modalities (e.g., speech recognition software such as NUANCE and Sphinx, dialogue software such as CSLU and Collagen, SLAM algorithms such as CARMEN and vSLAM).

Using our robot entries to the AAAI 2000, 2005 and 2006 events, we were able to study the added value of integrating hardware and software components, energy versus weight versus power considerations, navigation capabilities, artificial audition and vision, task planning and scheduling, distributed decisional architecture, and visualization tools for evaluation in dynamic conditions. The software and decisional architectures can be ported and adapted to another platform, but their strengths and limitations can only be adequately characterized when used on platforms with greater capabilities. We need to add significant improvements to the platform to make even more progress. Therefore, from what we observed from our trials with our robots, and in line with the current state-of-the-art in mobile robotic development, we have identified and are working on two key innovations that would greatly benefit our objective of designing advanced mobile robots operating in everyday environments and interacting with humans.

• Compliance. A robot mechanically interacts with its environment when the dynamics of the coupled system (the robot and environment in contact) differ significantly from the dynamics of the robot alone [7, 14]. Not all robots mechanically interact with their environment. For instance, the vast majority of robots used for industrial applications do not interact significantly. In many cases, a limited amount of physical interaction can be tolerated without specific modeling or control. However, for complex robotic tasks (e.g., manipulation, locomotion), the lack of knowledge of precise interaction models, the difficulty of precisely measuring the physical quantities associated with the task (e.g., position of contact points, interaction forces) in real-time, the finite sampling time of digital control loops and the non-collocation of sensors and transducers have negative effects on the performance and stability of robots using simple force or simple movement controllers. For robots to take a more significant role in human life, they must safely manage intentional and unintentional physical contact with humans, even while performing high-force tasks. These

challenges occur when interacting with the environment produces coupled system dynamics that differ significantly from the dynamics of the robot system alone.

We are currently developing a new compliant actuator named the Differential Elastic Actuator (DEA), specifically designed to deal with the lack of knowledge on precise interaction models in natural settings, and with the difficulty of precisely measuring, in real-time, the forces and positions associated with complex robotic tasks like manipulation and locomotion. Compared to FSCA and SEA, the DEA [15] uses a differential coupling instead of a serial coupling between a high-impedance mechanical speed source and a low-impedance mechanical spring. This results in a more compact and simpler solution, with similar performance. DEAs are high-performance actuators providing a higher torque/encumbrance ratio, a higher torque/weight ratio, lower rotative inertia, lower rotation velocity of the flexible element, and improved effort transmission through the motor's chassis.

• Gesture in natural interaction. Natural interaction is mainly influenced by perception through senses like vision, audition and touch, and by actions like speech, graphics and also gesture. Gesture, i.e., a motion made to express or help express thought, was not covered in our past iterations, and would greatly enhance the way the robot communicates with humans. Gestures can be made using arms and facial expressions, but also through the pose of the robot (raising itself up or moving down).

With these elements in mind, we are currently designing our third iteration of an interactive autonomous mobile robot platform. The platform is shown in Figure 10. The mobile base is a legged-tracked platform capable of changing the orientation of its four articulations. Each articulation has three degrees of freedom (DOF): it can rotate 360 degrees around its point of attachment to the chassis, can change its orientation over 180 degrees, and can rotate to propel the robot. This platform is inspired by the AZIMUT platform [23], built using stiff transmission mechanisms, with motors and gearboxes placed directly at the attachment points of the articulations. Such design choices made the platform vulnerable to shocks when moving over rough terrain, especially with its articulations pointing down: an undetected obstacle touching the tip of an articulation would create an important reaction force at the articulation's attachment point to the chassis and inside its non-backdrivable gearbox. The new base uses DEA actuators for compliant locomotion control: four DEAs for the direction of its articulations (making it possible to guide the platform simply through touch), and four DEAs at the attachment points of the leg-tracks (for force control when the robot stands up). This should improve the locomotion capabilities of the platform, making it omnidirectional and intrinsically safe. The humanoid torso is capable of arm gestures and facial expressions. Its arms will use DEA actuators for safe manipulation of objects and interaction with people. All of these elements are currently being fabricated, assembled and tested, and will be integrated by the end of 2008.

Figure 10. Our third iteration in designing advanced mobile robotic platform.

4. Conclusion

Mobile robotics is one of the greatest domains for systems engineering: it requires the integration of sensors, actuators, energy sources, embedded computing, decision-making algorithms, etc., all into one system. Only technologies and methodologies that work within the interdependent constraints of these elements, addressed in a holistic approach, can be useful.

The integration work is influenced by technological factors as well as by the operating conditions of the robotic platform. With continuing technological progress, designs and tools must take into consideration possible extensions to the integration of modular elements of the system. Since a robot is an embodied system, it has to deal with real-world dynamics and issues related to the intended application. Therefore, integration directly influences scientific progress in mobile robots, and it is important to address such challenges by designing new platforms and conducting real-world field trials. The primary research contribution of this paper is to present what we have done so far with our AAAI 2006 entry, and what we are aiming to accomplish with our new design iteration. This puts in perspective what can be done and what remains to be solved.

The design of advanced mobile robotic platforms is still in a basic research phase [28], investigating ideas and concepts, putting initial structure on problems and framing critical research questions. Progress is accomplished through multiple iterations of increasing complexity, identifying new challenges and research issues as we move through these iterations. What we have identified with our AAAI 2006 robot entry at all levels, from structural to hardware and software components, has led to the design specifications of our new robot. It is by designing more advanced platforms that we will be able to measure progress at all these levels.

As a corollary, one of our objectives is to share our efforts by making available our hardware and software components. Advanced mobile robotics is a field so broad that it is not possible to be an expert in all related areas. Collaborative work is therefore a necessary element in our attempts, as a community, to significantly accelerate scientific progress of the field. More specifically, we plan to use our new robot to conduct comparative and integrated evaluations of robotic software capabilities and architectures on a common platform and shared test conditions, thus eliminating any bias in conducting such evaluations. This way, we should be able to move research activities from feasibility studies to comparative evaluations and human-robot interaction field studies, addressing an all new level with clear innovations in terms of motion, interaction and cognition capabilities, both individually and integrated.

5. Acknowledgment

F. Michaud holds the Canada Research Chair (CRC) in Mobile Robotics and Autonomous Intelligent Systems. Support for this work is provided by the Natural Sciences and Engineering Research Council of Canada, the Canada Research Chair program and the Canadian Foundation for Innovation.

References

[1] ARKIN, R. C. 1998. Behavior-based Robotics. The MIT Press.

[2] ASFOUR, T.; REGENSTEIN, K.; AZAD, P.; SCHRODER, J.; BIERBAUM, A.; VAHRENKAMP, N.; AND DILLMANN, R. 2006. Armar-III: An integrated humanoid platform for sensory-motor control. In Proceedings of the IEEE/RAS International Conference on Humanoid Robotics, 169–175.

[3] BISCHOFF, R., AND GRAEFE, V. 2004. Hermes: A versatile personal assistant robot. Proceedings of the IEEE, Special Issue on Human Interactive Robots for Psychological Enrichment, 1759–1779.

[4] BRACHMAN, R. J. 2006. (AA)AI more than the sum of its parts. AI Magazine 27(4):19–34.

[5] BRESINA, J.; DEARDEN, R.; MEULEAU, N.; RAMKRISHNAN, S.; SMITH, D.; AND WASHINGTON, R. 2002. Planning under continuous time and resource uncertainty: A challenge for AI. In Proceedings of the 19th Conference on Uncertainty in AI, 77–84.

[6] BROCK, O.; FAGG, A.; GRUPEN, R.; PLATT, R.; ROSENSTEIN, M.; AND SWEENEY, J. 2005. A framework for learning and control in intelligent humanoid robots. International Journal of Humanoid Robotics 2(3):301–336.

[7] BUERGER, S. P. 2005. Stable, High-force, Low Impedance Robotic Actuators for Human-interactive Machines. Ph.D. Dissertation, MIT.

[8] COTE, C.; LETOURNEAU, D.; MICHAUD, F.; VALIN, J.-M.; BROSSEAU, Y.; RAIEVSKY, C.; LEMAY, M.; AND TRAN, V. 2004. Code reusability tools for programming mobile robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 1820–1825.

[9] COTE, C.; BROSSEAU, Y.; LETOURNEAU, D.; RAIEVSKY, C.; AND MICHAUD, F. 2006a. Using MARIE in software development and integration for autonomous mobile robotics. International Journal of Advanced Robotic Systems, Special Issue on Software Development and Integration in Robotics 3(1):55–60.

[10] COTE, C.; LETOURNEAU, D.; RAIEVSKY, C.; BROSSEAU, Y.; AND MICHAUD, F. 2006b. Using MARIE for mobile robot software development and integration. In Brugali, D., ed., Software Engineering for Experimental Robotics. Springer Tracts on Advanced Robotics.

[11] DO, M., AND KAMBHAMPATI, S. 2003. SAPA: A scalable multi-objective metric temporal planner. Journal of Artificial Intelligence Research 20:155–194.

[12] EDSINGER-GONZALES, A., AND WEBER, J. 2004. Domo: A force sensing humanoid robot for manipulation research. In Proceedings of the IEEE/RAS International Conference on Humanoid Robotics, 273–291.

[13] GOCKLEY, R.; SIMMONS, R.; WANG, J.; BUSQUETS, D.; DISALVO, C.; CAFFREY, K.; ROSENTHAL, S.; MINK, J.; THOMAS, S.; ADAMS, W.; LAUDUCCI, T.; BUGAJSKA, M.; PERZANOWSKI, D.; AND SCHULTZ, A. 2004. Grace and George: Social robots at AAAI. Technical Report WS-04-11, AAAI Mobile Robot Competition Workshop, 15–20.

[14] HOGAN, N., AND BUERGER, S. P. 2005. Impedance and interaction control. In Robotics and Automation Handbook. CRC Press.

[15] LAURIA, M.; LEGAULT, M.-A.; LAVOIE, M.-A.; AND MICHAUD, F. 2008. Differential elastic actuator for robotic interaction tasks. Submitted to the IEEE International Conference on Robotics and Automation.

[16] LETOURNEAU, D.; MICHAUD, F.; AND VALIN, J.-M. 2004. Autonomous robot that can read. EURASIP Journal on Applied Signal Processing, Special Issue on Advances in Intelligent Vision Systems: Methods and Applications 17:1–14.

[17] LIENHART, R., AND MAYDT, J. 2002. An extended set of Haar-like features for rapid object detection. In Proceedings of the IEEE International Conference on Image Processing, volume 1, 900–903.

[18] MAXWELL, B.; SMART, W.; JACOFF, A.; CASPER, J.; WEISS, B.; SCHOLTZ, J.; YANCO, H.; MICIRE, M.; STROUPE, A.; STORMONT, D.; AND LAUWERS, T. 2004. 2003 AAAI robot competition and exhibition. AI Magazine 25(2):68–80.

[19] MICHAUD, F.; AUDET, J.; LETOURNEAU, D.; LUSSIER, L.; THEBERGE-TURMEL, C.; AND CARON, S. 2001. Experiences with an autonomous robot attending the AAAI conference. IEEE Intelligent Systems 16(5):23–29.

[20] MICHAUD, F.; BROSSEAU, Y.; COTE, C.; LETOURNEAU, D.; MOISAN, P.; PONCHON, A.; RAIEVSKY, C.; VALIN, J.-M.; BEAUDRY, E.; AND KABANZA, F. 2005. Modularity and integration in the design of a socially interactive robot. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, 172–177.

[21] MICHAUD, F.; COTE, C.; LETOURNEAU, D.; BROSSEAU, Y.; VALIN, J.-M.; BEAUDRY, E.; RAIEVSKY, C.; PONCHON, A.; MOISAN, P.; LEPAGE, P.; MORIN, Y.; GAGNON, F.; GIGUERE, P.; ROUX, M.-A.; CARON, S.; FRENETTE, P.; AND KABANZA, F. 2007. Spartacus attending the 2005 AAAI conference. Autonomous Robots, Special Issue on AAAI Mobile Robot Competition 22(4):369–384.

[22] MICHAUD, F. 2002. EMIB – Computational architecture based on emotion and motivation for intentional selection and configuration of behaviour-producing modules. Cognitive Science Quarterly, Special Issue on Desires, Goals, Intentions, and Values: Computational Architectures 3-4:340–361.

[23] MICHAUD, F.; LETOURNEAU, D.; ARSENAULT, M.; BERGERON, Y.; CADRIN, R.; GAGNON, F.; LEGAULT, M.-A.; MILLETTE, M.; PARE, J.-F.; TREMBLAY, M.-C.; LEPAGE, P.; MORIN, Y.; AND CARON, S. 2005. Multi-modal locomotion robotic platform using leg-track-wheel articulations. Autonomous Robots, Special Issue on Unconventional Robotic Mobility 18(2):137–156.

[24] MONTEMERLO, M.; ROY, N.; AND THRUN, S. 2003. Perspectives on standardization in mobile robot programming: The Carnegie Mellon navigation (CARMEN) toolkit. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2436–2441.

[25] NAU, D.; AU, T.; ILGHAMI, O.; KUTER, U.; MURDOCK, J.; WU, D.; AND YAMAN, F. 2003. SHOP2: An HTN planning system. Journal of Artificial Intelligence Research 20:379–404.

[26] NAU, D.; GHALLAB, M.; AND TRAVERSO, P. 2004. Automated Planning: Theory & Practice. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.

[27] PRATT, G., AND WILLIAMSON, M. 1995. Series elastic actuators. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 1, 399–406.

[28] SHAW, M. 2002. What makes good research in software engineering? In Proceedings of the European Joint Conference on Theory and Practice of Software.

[29] SIMMONS, R.; GOLDBERG, D.; GOODE, A.; MONTEMERLO, M.; ROY, N.; SELLNER, B.; URMSON, C.; SCHULTZ, A.; ABRAMSON, M.; ADAMS, W.; ATRASH, A.; BUGAJSKA, M.; COBLENZ, M.; MACMAHON, M.; PERZANOWSKI, D.; HORSWILL, I.; ZUBEK, R.; KORTENKAMP, D.; WOLFE, B.; MILAM, T.; AND MAXWELL, B. 2003. Grace: An autonomous robot for the AAAI robot challenge. AI Magazine 24(2):51–72.

[30] SMART, W. D.; DIXON, M.; MELCHIOR, N.; TUCEK, J.; AND SRINIVAS, A. 2003. Lewis the graduate student: An entry in the AAAI robot challenge. Technical report, AAAI Workshop on Mobile Robot Competition, 46–51.

[31] STIEHL, W., AND BREAZEAL, C. 2005. Design of a therapeutic robotic companion for relational, affective touch. In Proceedings of the IEEE Workshop on Robot and Human Interactive Communication.

[32] VALIN, J.-M.; MICHAUD, F.; AND ROUAT, J. 2006. Robust localization and tracking of simultaneous moving sound sources using beamforming and particle filtering. Robotics and Autonomous Systems Journal 55:216–228.

[33] VALIN, J.-M.; ROUAT, J.; AND MICHAUD, F. 2004. Enhanced robot audition based on microphone array source separation with post-filter. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2123–2128.

[34] VAUGHAN, R. T.; GERKEY, B. P.; AND HOWARD, A. 2003. On device abstractions for portable, reusable robot code. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2421–2427.

[35] WERNER, M. 1999. Humanism and beyond the truth, volume 13. Humanism Today. http://www.humanismtoday.org/vol13/werner.html.

FRANÇOIS MICHAUD is a Professor at the Department of Electrical Engineering and Computer Engineering of the Université de Sherbrooke, Québec, Canada. He is the Canada Research Chairholder in Mobile Robotics and Autonomous Intelligent Systems. His research interests are in architectural methodologies for intelligent decision-making, design of autonomous mobile robotics, social robotics, robots for children with autism, robot learning and intelligent systems.

DOMINIC LÉTOURNEAU is a Research Engineer at the Department of Electrical Engineering and Computer Engineering of the Université de Sherbrooke, Québec, Canada. His research interests cover the combination of systems and intelligent capabilities to increase the usability of autonomous mobile robots in the real world. His expertise lies in artificial vision, mobile robotics, robot programming and integrated design.

ÉRIC BEAUDRY is a Ph.D. student at the Department of Computer Science of the Université de Sherbrooke, Québec, Canada. His expertise is in artificial intelligence, planning and scheduling, and autonomous robotics.

Received: November, 2007
Accepted: April, 2008

Contact addresses:
François Michaud
Dominic Létourneau
Maxime Fréchette
Michel Lauria
Department of Electrical Engineering and Computer Engineering
Université de Sherbrooke
Québec, Canada
e-mail: [email protected]

Éric Beaudry
Froduald Kabanza
Department of Computer Science
Université de Sherbrooke
Québec, Canada

MAXIME FRÉCHETTE is a Master's student at the Department of Electrical Engineering and Computer Engineering of the Université de Sherbrooke, Québec, Canada. His expertise lies in dialogue management and human-robot interaction.

FRODUALD KABANZA is a Professor at the Department of Computer Science at the Université de Sherbrooke. His research interests include intelligent decision-making, AI and planning, AI and education, and user-modelling.

MICHEL LAURIA is a Professor at the Department of Electrical Engineering and Computer Engineering of the Université de Sherbrooke, Québec, Canada. His research interests are in mobile robot design and control, mechatronics and high-performance actuators.