ROBOCUP 2005 TEAMCHAOS DOCUMENTATION
Vicente Matellán Olivera, Humberto Martínez Barberá, Francisco Martín Rico, David Herrero Pérez, Carlos Enrique Agüero Durán, Víctor Manuel Gómez Gómez
University of Murcia, Department of Information and Communications Engineering, E-30100 Murcia, Spain
Rey Juan Carlos University, Systems and Communications Group, E-28933 Madrid, Spain
Miguel Ángel Cazorla Quevedo, María Isabel Alfonso Galipienso, Alessandro Saffiotti, Antonio Botía Martínez, Kevin LeBlanc, Boyan Ivanov Bonev
University of Alicante, Department of Computer Science and Artificial Intelligence, E-03690 Alicante, Spain
Örebro University, Center for Applied Autonomous Sensor Systems, S-70182 Örebro, Sweden
Acknowledgements
For the elaboration of this document we have used material from past team members, whether from the original Team Sweden or its follow-up Team Chaos. The list has become rather large, so we warmly thank all of them for their contributions and effort, which have allowed us to continue the work. In particular, this document is based on previous team description papers and some conference papers, from which we have borrowed a significant amount of information. Although they appear in the references, we want to explicitly cite the following papers and give extra thanks to their authors:
• A. Saffiotti and K. LeBlanc. Active Perceptual Anchoring of Robot Behavior in a Dynamic Environment. Proc. of the Int. Conf. on Robotics and Automation (ICRA), San Francisco, CA, 2000, pp. 3796-3802.
• P. Buschka, A. Saffiotti, and Z. Wasik. Fuzzy Landmark-Based Localization for a Legged Robot. Proc. of the Int. Conf. on Intelligent Robots and Systems (IROS), Takamatsu, Japan, 2000.
• Z. Wasik and A. Saffiotti. Robust Color Segmentation for the RoboCup Domain. Proc. of the Int. Conf. on Pattern Recognition (ICPR), Quebec City, Canada, 2002.
• A. Saffiotti and Z. Wasik. Using Hierarchical Fuzzy Behaviors in the RoboCup Domain. In: C. Zhou, D. Maravall and D. Ruan, eds., Autonomous Robotic Systems, Springer, DE, 2003.
• J.P. Cánovas, K. LeBlanc and A. Saffiotti. Multi-Robot Object Localization by Fuzzy Logic. Proc. of the Int. RoboCup Symposium, Lisbon, Portugal, 2004.
Contents
1 Introduction
  1.1 History
  1.2 Team Members
    1.2.1 University of Murcia, Spain
    1.2.2 Rey Juan Carlos University, Spain
    1.2.3 University of Alicante, Spain
    1.2.4 Örebro University, Sweden

2 Architecture
  2.1 Introduction
  2.2 ThinkingCap Architecture

3 Locomotion
  3.1 CMD: Commander Module
  3.2 Walk
    3.2.1 Implementation
  3.3 Kicks
    3.3.1 Implementation
  3.4 Optimisation of Walking
    3.4.1 The Optimisation Problem
    3.4.2 Learning algorithm
    3.4.3 Implementation

4 Perception
  4.1 PAM: Perceptual Anchoring Module
    4.1.1 Standard vision pipeline
    4.1.2 Experimental vision pipeline
  4.2 Color Segmentation
    4.2.1 Seed Region Growing
    4.2.2 Experiments
  4.3 Corner Detection and Classification
  4.4 Active Perception
    4.4.1 Perception-based Behaviour
    4.4.2 Active Perceptual Anchoring
    4.4.3 Implementation
    4.4.4 Experiments

5 Self-Localization
  5.1 Uncertainty Representation
    5.1.1 Fuzzy locations
    5.1.2 Representing the robot's pose
    5.1.3 Representing the observations
  5.2 Fuzzy Self-Localization
  5.3 Experimental results

6 Information Sharing
  6.1 TCM: Team Communication Management
    6.1.1 Architecture
    6.1.2 Communicating with TCM
    6.1.3 Implementation
  6.2 Sharing Global Data
  6.3 Ball Fusion

7 Behaviours
  7.1 Low-level Behaviours
    7.1.1 Basic Behaviours
    7.1.2 Fuzzy Arbitration
    7.1.3 The Behaviour Hierarchy
  7.2 High-level Behaviours
    7.2.1 Hierarchical Finite State Machines
  7.3 Team Coordination
    7.3.1 Ball Booking
    7.3.2 Implementation
  7.4 The Players
    7.4.1 GoalKeeper
    7.4.2 Soccer Player

A ChaosManager Tools
  A.1 Introduction
  A.2 Vision
    A.2.1 Color Calibration
    A.2.2 Camera Configuration
    A.2.3 General Configuration
  A.3 Kicks
  A.4 Walk
  A.5 MemoryStick Manager
  A.6 Game Monitor
  A.7 Game Controller
    A.7.1 Implementation

B HFSM Builder
  B.1 Using the Application
    B.1.1 Buttons and Menus
    B.1.2 Code Generation
  B.2 Building a Sample Automata
  B.3 Implementation
    B.3.1 File Format
    B.3.2 Class Diagrams
Chapter 1
Introduction
1.1 History
Team Chaos is a cooperative effort involving universities from different countries (currently Sweden and Spain). Team Chaos is a follow-up of Team Sweden, which was created in 1999 and has participated in the RoboCup 4-legged league ever since. The distributed nature of the team has made the project organization demanding, but has resulted in a rewarding scientific and human cooperation experience. The scientific results of the team during these years have been very fruitful, although the competition results have not matched them, particularly in the last two years.

Because of the complexity of the software system, the code for the 2003 competition became very messy and contained a significant number of bugs. In addition, it did not include tools to help with configuration or calibration (except for a very simple tool for color segmentation). For the 2004 competition the team decided to go through a major code debugging and rewriting process, which included a number of configuration and calibration tools. Unfortunately, this effort was not finished in time for the 2004 competition. A major problem in 2003 and 2004 was the lack of appropriate funding, including funding for travel. This, among other problems, prevented us from attending local competitions, which are very important for getting the system up and running before the main event.

In December 2004 an important development led to a major reshaping of the team: the Spanish Ministry of Education and Science awarded three Spanish universities a grant to work in the RoboCup domain for three years. Thus, funds became available not only for RoboCup itself but also for local competitions. Team Chaos 2005 consisted of the following members:
University                   Country   Coordinator
University of Murcia         Spain     Humberto Martínez Barberá ([email protected])
University of Alicante      Spain     Miguel Ángel Cazorla Quevedo ([email protected])
Rey Juan Carlos University   Spain     Vicente Matellán Olivera ([email protected])
Örebro University            Sweden    Alessandro Saffiotti ([email protected])
We had two main requirements in mind when we joined RoboCup. First, we wanted our entry to effectively address the challenges involved in managing uncertainty in the domain, where perception and execution are affected by errors and imprecision. Second, we wanted our entry to illustrate our research in autonomous robotics, and incorporate general techniques that could be reused on different robots and in different domains. While the first requirement could have been met by writing ad hoc competition software, the second one led us to develop principled solutions that drew upon our current research in robotics, and pushed it further ahead.
1.2 Team Members
1.2.1 University of Murcia, Spain
The Laboratorio de Robótica Móvil (Mobile Robotics Lab) is part of the Intelligent Systems Group, University of Murcia, Spain. The lab is headed by Dr. Humberto Martínez and currently counts 3 docents, 4 PhD students and several undergraduate students. There are three different research activities in the group: fundamental mobile robotics research, industrial mobile robots, and field robots. In the first case, the members of the group work on control architectures for multi-robot systems, navigation techniques (mapping and localisation), and sensor fusion. These techniques are applied both to standard indoor mobile robots and to the RoboCup domain (Sony Four-Legged League), in which the group actively participates in the competition. In the second case, the members of the group work on integrating basic research techniques into industrial AGVs in order to achieve real autonomy. The resulting system, called iFork, for which the group designed and built both the hardware and the software, has been deployed at an agricultural company and is currently in use. In the third case, the group has been working on the development of different autonomous outdoor vehicles: an autonomous platoon of real cars, an autonomous airplane (UAV) for search and rescue operations, and an autonomous boat (ASC) for autonomous bathymetry and sampling.
1.2.2 Rey Juan Carlos University, Spain
Rey Juan Carlos University is the youngest public university in the Madrid autonomous region. Founded in 1997, it currently has about 17,000 students in four campuses around Madrid. The Laboratorio de Robótica (Robotics Lab) was established in 2001 and is part of the Systems and Communications Group, which belongs to the Computer Science Department. The lab is headed by Dr. Vicente Matellán and currently counts 3 full-time teaching staff, 4 PhD students, and several undergraduate students. The group's main research interest is how to create autonomous mobile robots. This is a wide area with many challenging issues involved: environment estimation through sensor readings, robust control, action selection, etc. The group focuses on perception and control architectures, both for single robots and for groups of robots. These architectures should let autonomous robots exhibit intelligent behaviors, reacting to different stimuli and accomplishing their goals in dynamic indoor environments. Another related research issue the group is involved in is the integration of vision into robot architectures, which involves work on attention, abductive perception, etc. We work with both wheeled and legged robots. In both cases the group is primarily a robotics software group, which means that we try to use standard robotic platforms (currently Pioneer from ActivMedia and Aibo from Sony). We are also a libre software group, meaning that we support open source approaches to robotics.
1.2.3 University of Alicante, Spain
People from the University of Alicante participate through the Robot Vision Group (RVG) at the Department of Computer Science and Artificial Intelligence. The group was formed in 2001, but almost all of its members have more than 10 years of research experience. We are 5 PhD docents, 5 docents and 3 PhD students, only 3 of whom work in TeamChaos. The main research interest is robotics and computer vision. On the one hand, we are interested in how to exploit vision to perform tasks with mobile robots. The relative orientation of the robot may be computed through the analysis of geometric structures. Obstacle detection and avoidance may rely on 3D information estimated with stereo vision. Stereo vision is also key for building 3D maps, which may be sampled by particle-filter algorithms that localize the robot in the environment. The effectiveness of such an approach increases significantly with the aid of visual appearance, which in turn helps to identify discriminative places or landmarks. We are interested in estimating these landmarks, incorporating them into topological maps, and inferring optimal exploration behaviors through reinforcement learning. Our previous expertise in control architectures and planning will contribute to successfully embedding visual modules in robot architectures.

On the other hand, we are interested in implementing effective and efficient computer vision algorithms for: feature extraction and grouping (obtaining a geometric structure by grouping junctions through connecting edges), clustering and segmentation (automatic selection of the most effective set of filters for texture segmentation), recognition (exploiting visual appearance, like PCA and ICA approaches, 3D information with matching strategies, or shape information through deformable templates), and stereo and motion estimation (dense algorithms for disparity estimation, motion registration and tracking).
1.2.4 Orebro¨ University, Sweden
Örebro University is one of the newest universities in Sweden, with about 12,000 students. The Center for Applied Autonomous Sensor Systems (AASS) was created in 1998 and has seen rapid growth, today counting about 25 members (10 PhD students, 10 senior researchers, and 5 visiting researchers). The mission of AASS is to explore the applicability of new paradigms in the analysis and design of autonomous sensor systems. These are mobile and immobile systems which employ a vast array of sensors in order to analyse and influence a dynamic and uncertain world. They must operate in real time under both expected and unexpected situations, and with only a limited amount of human intervention. Since the world these systems inhabit is complex, they need to employ advanced techniques for data processing, including embedded artificial intelligence, intelligent control, and neuromorphic systems. These areas are represented by the three interacting laboratories that compose the AASS Center: Sensor and Measurement Technology, Intelligent Control, and Mobile Robotics.

The Mobile Robotics lab is devoted to the development of techniques for autonomous operation of mobile robots in natural environments. By natural we mean real-world environments that have not been specifically modified to accommodate the robots. The lab is headed by Dr. Alessandro Saffiotti and currently counts 10 people (4 PhD students, 3 senior researchers, and 2 visiting researchers).
Chapter 2
Architecture
2.1 Introduction
Creating a robotic football team is a very difficult and challenging problem. Several fields are involved (low-level locomotion, perception, localisation, behaviour development, communications, etc.), all of which must be developed to build a fully functional team. In addition, debugging and monitoring tools are needed. In practical terms, this means that the software project can be very large. In fact, if we run sloccount [51] on our current robot code (that is, excluding the off-board tools), we get:
[...] Total Physical Source Lines of Code (SLOC) = 60,987 [...]
There are more than 60,000 lines of code, which shows that without organisation and structure the project could become very difficult to manage. We use the Eclipse IDE for programming and debugging, and CVS for sharing code and documentation. TeamChaos development is organised in three Eclipse and CVS projects: TeamChaos, ChaosDocs, and ChaosManager. TeamChaos contains all code related to the robot; ChaosManager is a suite of tools for calibrating, preparing memory sticks, and monitoring different aspects of the robots and the games; and ChaosDocs contains all the available documentation, including team reports, team description papers, and RoboCup applications.

The robot code is organised as Open-R [48] objects. Each Open-R object is single-threaded, and a single object containing everything would be very difficult to develop and debug. Therefore, we have decomposed our code into three Open-R objects: ORRobot, ORTcm and ORGCtrl. ORRobot is the main object, which implements the full cognitive architecture and all the robot-dependent tasks. ORTcm is the communication manager, in charge of sending and receiving data. ORGCtrl manages all instructions sent by the GameController.
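Since all robot code hangs off these three objects, it may help to recall the skeleton that every Open-R object follows. The sketch below is illustrative and uses our own member names, not the actual ORRobot sources; only the four entry points and the subject/observer tables are mandated by Open-R.

#include <OPENR/OObject.h>
#include <OPENR/OSubject.h>
#include <OPENR/OObserver.h>

// Illustrative skeleton of an Open-R object such as ORRobot.
class ORRobot : public OObject {
public:
    ORRobot();
    virtual ~ORRobot() {}

    // Entry points called by the Open-R system on every object:
    virtual OStatus DoInit(const OSystemEvent& event);    // register services
    virtual OStatus DoStart(const OSystemEvent& event);   // enable message flow
    virtual OStatus DoStop(const OSystemEvent& event);    // disable message flow
    virtual OStatus DoDestroy(const OSystemEvent& event); // clean up

    // Handler invoked when a connected object (e.g. ORTcm) sends data.
    void NotifyTcm(const ONotifyEvent& event);

private:
    OSubject*  subject[1];   // outgoing connections, one per service
    OObserver* observer[1];  // incoming connections
};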
Communication is an important aspect because it lets us use external tools to carry out laborious tasks easily. Using ChaosManager we receive images from the robots, refine the camera parameters, and reconfigure the robot's camera while the robot is running. We can also teleoperate the robot to test kicks or locomotion, using the communication between the robots and ChaosManager.
2.2 ThinkingCap Architecture
Each robot uses the layered architecture shown in Figure 2.1. This is a variant of the ThinkingCap architecture, a framework for building autonomous robots jointly developed by Örebro University and the University of Murcia [30]. We outline below the main elements of this architecture:
Figure 2.1: The variant of the ThinkingCap architecture.
• The lower layer (commander module, or CMD) provides an abstract interface to the sensori-motor functionalities of the robot. The CMD accepts abstract commands from the upper layer, and implements them in terms of actual motion of the robot effectors. In particular, CMD receives set-points for the desired displacement velocity <vx, vy, vθ>, where vx and vy are the forward and lateral velocities and vθ is the angular velocity, and translates them into an appropriate walking style by controlling the individual leg joints (a sketch of this interface is given after this list).
• The middle layer maintains a consistent representation of the space around the robot (Perceptual Anchoring Module, or PAM), and implements a set of robust tactical behaviours (Hierarchical Behaviour Module, or HBM). The PAM acts as a short-term memory of the location of the objects around the robot: at every moment, the PAM contains an estimate of the position of these objects based on a combination of current and past observations with self-motion information. For reference, objects are named Ball, Net1 (own net), Net2 (opponent net), and LM1-LM6 (the six possible landmarks, although only four of them are currently used). The PAM is also in charge of camera control, by selecting the fixation point according to the
current perceptual needs [39]. The HBM realizes a set of navigation and ball control behaviours, and it is described in greater detail in the following sections.
• The higher layer maintains a global map of the field (GM) and makes real-time strategic decisions based on the current situation (situation assessment and role selection are performed in the HFSM, or Hierarchical Finite State Machine). Self-localization in the GM is based on fuzzy logic [12][22]. The HFSM implements a behaviour selection scheme based on finite state machines [25].
• Radio communication is used to exchange position and coordination information with other robots (via the TCM, or Team Communication Module).
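To make the CMD interface of the lower layer concrete, here is a minimal sketch of the velocity set-point handling; the type and function names, as well as the limit values, are illustrative assumptions rather than the real Cmd code.

// Hypothetical velocity set-point as sent by the upper layer to CMD.
struct VelocitySetPoint {
    double vx;      // forward velocity (mm/s)
    double vy;      // lateral velocity (mm/s)
    double vtheta;  // angular velocity (rad/s)
};

// Gait generator interface (implemented by the Walk module, Chapter 3).
struct GaitGenerator {
    void setTargetVelocity(double vx, double vy, double vtheta);
};

// Illustrative limits of the walking style (not the real values).
const double MAX_VX = 300.0, MAX_VY = 200.0, MAX_VTH = 1.0;

double clamp(double v, double lo, double hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

// Essence of CMD's velocity handling: clamp the request to what the
// walking style can achieve and hand it to the gait generator, which
// computes the individual leg-joint trajectories.
void setVelocity(GaitGenerator& gait, const VelocitySetPoint& sp) {
    gait.setTargetVelocity(clamp(sp.vx,     -MAX_VX,  MAX_VX),
                           clamp(sp.vy,     -MAX_VY,  MAX_VY),
                           clamp(sp.vtheta, -MAX_VTH, MAX_VTH));
}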
This architecture provides effective modularisation as well as clean interfaces, making it easy to develop the different parts. Furthermore, its distributed implementation allows each module to be executed on a computer or on the robot indifferently. For instance, the low-level modules can be executed on board the robot while the high-level modules are executed off board, where debugging tools are available. However, a distributed implementation introduces serious synchronisation problems, which cause delays in decisions, so robots cannot react fast enough to dynamic changes in the environment. For this reason we have favoured a mixed-mode architecture: at compilation time it is decided whether to build a distributed version (each module is a thread, Figure 2.1) or a monolithic one (the whole architecture is one thread and the communication module is another, Figure 2.2).
Figure 2.2: Implementation of the ThinkingCap architecture.
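The following sketch illustrates the idea of the compile-time switch. The macro, types and function names are ours, for illustration only; they are not the actual TeamChaos build flags.

// Illustrative compile-time selection between the two builds.
#ifdef DISTRIBUTED_BUILD
// Each module (PAM, HBM, GM, HFSM, CMD) is built as its own Open-R
// object with its own thread; modules exchange data via Open-R
// messages, so they may even run on different machines.
#else
// Monolithic build: one thread steps all the modules in order; only
// the communication module (TCM) keeps a thread of its own.
struct Pam  { void update();  };  // perceptual anchoring
struct GM   { void update();  };  // global map / self-localization
struct Hfsm { void step();    };  // situation assessment, role selection
struct Hbm  { void execute(); };  // active behaviours
struct Cmd  { void execute(); };  // joint-level motion

void controlLoop(Pam& pam, GM& gm, Hfsm& hfsm, Hbm& hbm, Cmd& cmd)
{
    pam.update();    // middle layer: anchor percepts
    gm.update();     // higher layer: update global state
    hfsm.step();     // higher layer: select behaviours
    hbm.execute();   // middle layer: run behaviours
    cmd.execute();   // lower layer: produce motor commands
}
#endif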
Chapter 3
Locomotion
3.1 CMD: Commander Module
The Commander Module (CMD) is an interface to the physical functions of the Aibo robot. Each robot model (ERS-210 and ERS-7) has its own particular features, so the way CMD interacts with the robot hardware depends on the robot model. Even though the newer ERS-7 model offers better performance (faster processor, more memory, etc.), we still have a significant number of ERS-210 robots in our labs, and thus we decided it was worth the extra effort to make them work; they are currently used for educational purposes. Basically the programs are quite compatible, except for the PAM and especially the CMD. The main differences between the two models are in the initialization part of the code, where the primitives for accessing the joints have changed, and in the walking algorithms, which take into account physical aspects of the robot such as the size of the limbs. We do not want to maintain two versions of the CMD code, since that would make every change or improvement very difficult to manage; that is why we focus on a generic code base for both models.

The most important feature of our CMD implementation is that the module provides a model-independent way of interacting with the different Aibo robot models: an abstract architecture controls all access to the robot hardware. The CMD implementation is divided into nine submodules, so that each CMD aspect is treated independently. A brief explanation of each submodule follows.
Ear: handles the movement of the robot's ears.

Gravity: monitors the robot's position relative to the ground; when the robot has fallen on the carpet, it launches a kick to stand the robot up.

Kick: contains the structures needed for kicks, and controls the kick movements.

LedControl: manages the robot's LEDs, including the face LEDs.
Look: contains the structures needed for robot head scans, and controls the scan movements.

SensorObserver: controls the robot sensors (switches, accelerometers, etc.). This submodule is in charge of generating the corresponding events.

StateMachine: manages the robot state machine (walking, kicking, etc.) depending on the robot events.

Tail: handles the movement of the robot's tail.

Walk: implements the walking style algorithm. It generates the step sequence needed to move the robot's legs, according to the implemented walking style (based on the UNSW trapezoidal walk).

include: contains common header files.

Cmd: main class of the module, without the Open-R inter-module communication.

ORCmd: main class of the module, including the Open-R inter-module communication.
Figures 3.1 and 3.2 show class diagrams of the most significant classes of the CMD module, especially those that contribute to the abstract design of this module. Each of these class structures starts from an abstract class capturing the common aspects of the structure, while model-specific aspects are treated in the corresponding subclass. The three most significant operations of the CMD are: sensor data reception from the robot (Fig. 3.3), head command execution demanded by the PAM module (Fig. 3.4), and velocity command execution demanded by the HBM module (Fig. 3.5).
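The pattern can be summarised in a few lines of C++; the class and method names below are illustrative, not the actual CMD classes shown in the figures.

// Illustrative version of the abstraction: an abstract base class holds
// the model-independent interface, one subclass per robot model holds
// the model-specific details (limb sizes, joint primitives).
class WalkBase {
public:
    virtual ~WalkBase() {}
    virtual void setTargetVelocity(double vx, double vy, double vth) = 0;
    virtual void step() = 0;  // compute the next joint set-points
};

class WalkERS210 : public WalkBase {
public:
    void setTargetVelocity(double vx, double vy, double vth);
    void step();  // uses ERS-210 limb dimensions and joint locators
};

class WalkERS7 : public WalkBase {
public:
    void setTargetVelocity(double vx, double vy, double vth);
    void step();  // uses ERS-7 limb dimensions and joint locators
};

// The rest of CMD selects the concrete model once, then sees only WalkBase.
WalkBase* makeWalk(bool isErs7) {
    if (isErs7) return new WalkERS7();
    return new WalkERS210();
}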
3.2 Walk
For the 2004 competition our walking style was based on the parametric walk code created by the University of New South Wales (UNSW) [20]. We are currently also using the walk created by the Université de Versailles [24].
3.2.1 Implementation
The files for the Walk class are located in the Cmd directory:
/src/source/Cmd/Walk/
The directory structure of the Walk class is shown in Figure 3.6. The organization of the classes is as follows: Cmd creates a Walk object (here the word object refers to a "C++ object"; otherwise we explicitly say "Open-R object"). The Walk object creates a Locomotion object and calls its functions. See Figure 3.7.
Figure 3.1: Class diagrams. (a) LedControl (b) SensorObserver
Similarly, the Locomotion object creates and uses a Trajectory Planning object, which is completely transparent to the Cmd and Walk objects (see Figure 3.8). The initialization of the objects is organized as shown in Figure 3.9. It is important to keep the initialization divided into Cmd::init() and Cmd::start(); these functions are executed in the appropriate order by an Open-R object. The Open-R object can be ORRobot, when compiling the monolithic version of the TeamChaos system, or ORCmd, when compiling the distributed version. The waking up of the robot happens during the subsequent calls to the MoveJoint() function (Figure 3.10), as the Open-R object receives the ReadyLegs messages. Requests to Walk operate as represented in Figure 3.10. Finally, Figure 3.11 represents another frequent scenario: the real speed is obtained from the Walk module because it differs from the speeds stored by the Cmd module. For example, when asking the robot to stop, at the first instant the Cmd velocity values are vlin = 0, vlat = 0, and vrot = 0, while the robot is only beginning to decelerate; the real values have to be obtained from the Trajectory Planning.

Some primitives are used by different objects. The responsibility for opening them is divided among these objects as follows:
• The SensorObserver opens all 36 primitives related to sensors. These are listed in the file: src/source/Cmd/SensorObserver/SensorObserver7cfg.h
Figure 3.2: Class diagrams. (a) PWalk (b) Look
• The Walk object opens just the 12 primitives related to the leg effectors, listed in the file: src/source/Cmd/Walk/Locomotion/ers7.h. The gains associated with the joints are also set by Locomotion. The leg sensors, on the other hand, are already opened by SensorObserver.
• The rest of the effector primitives are opened by other objects, e.g. Mouth, Ear, Look. These objects also set gains, when necessary.
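As an illustration, this is roughly what opening one joint primitive looks like with the standard Open-R calls. The locator string and gain values are placeholders; the real ones are listed in ers7.h and set by Locomotion, as noted above.

#include <OPENR/OPENRAPI.h>
#include <OPENR/OSyslog.h>

// Placeholder locator; the actual ERS-7 leg locators are in ers7.h.
static const char* const JOINT_LOCATOR = "PRM:/r2/c1-Joint2:21";

void openLegJoint()
{
    OPrimitiveID jointID;
    OStatus result = OPENR::OpenPrimitive(JOINT_LOCATOR, &jointID);
    if (result != oSUCCESS) {
        OSYSLOG1((osyslogERROR, "OpenPrimitive() failed: %d", result));
        return;
    }

    // Enable the servo and set its PID gains (placeholder values).
    OPENR::EnableJointGain(jointID);
    OPENR::SetJointGain(jointID, 0x16, 0x04, 0x08, 0x0E, 0x02, 0x0F);
}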
The OPENR::SetMotorPower(opowerON) call is made by the Cmd::init() function. There are still some pending tasks, the most important ones being:
• Task: integrate support for Position Control. Priority: medium. Difficulty: medium. Dependencies: behaviours implementation. Background: the system behaviours currently use Velocity Control; more sophisticated behaviours and control could be developed using Position Control.
• Task: make the initial waking-up sequence smooth. Priority: low. Difficulty: easy. Dependencies: none. Background: during the Locomotion integration some piece of code was possibly deleted or changed, and the initial waking up is no longer smooth.
3.3 Kicks
Kicking is a fundamental ability for soccer playing. The robot uses it to score goals, to pass to a teammate, and, in our architecture, even to block an opponent's shot. Basically, we define a kick as a movement sequence that the robot performs when the behavior module (HBM) activates it and a given set of preconditions is satisfied. The working schema is as follows:
1. The behavior module (HBM) sends a set of kicks, selected from all the available ones, to the motion module (CMD).
Figure 3.3: Sensor data reception sequence diagram.
2. The CMD module activates these kicks, but it does not perform any of them until the preconditions are satisfied.
3. Each kick is configured with a set of preconditions regarding the position of the ball relative to the robot. The CMD module checks the preconditions for each kick; if the positions of the robot and the ball satisfy the kick conditions, the kick is performed. A sketch of this check is shown below.
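A minimal sketch of this check, assuming robot-relative ball coordinates (x forward, y left); the field names mirror the activation zones described later in this section but are otherwise illustrative.

// Illustrative precondition check performed by CMD for each kick.
struct KickPrecondition {
    bool  active;                  // set when HBM selects this kick
    float xMin, xMax, yMin, yMax;  // activation zone around the robot
};

bool kickPreconditionsHold(const KickPrecondition& k,
                           float ballX, float ballY)
{
    return k.active &&
           ballX >= k.xMin && ballX <= k.xMax &&
           ballY >= k.yMin && ballY <= k.yMax;
}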
Figure 3.12 shows the precondition (in blue) associated with a particular kick. If this kick is activated by HBM and the ball is in the blue area, the kick will be performed. We have classified our kicks into two categories:
• Open loop. This kind of kick is used for short kicks, executed in open-loop mode. These kicks cannot be interrupted and they do not use any sensory information. The open-loop kicks we have currently developed are:
– LeftLegKick: the ball is kicked with the left leg.
– RightLegKick: the ball is kicked with the right leg.
– TwoLegKick: the ball is kicked with both legs.
– BlockKick: used for blocking the ball; the robot extends both legs, enlarging the blocking area.
Figure 3.4: Head command execution sequence diagram, invoked by PAM.
Figure 3.5: Velocity command execution sequence diagram, invoked by HBM.
– LeftHeadKick: the robot uses its head to kick the ball to its left.
– RightHeadKick: the robot uses its head to kick the ball to its right.
– FrontHeadKick: the robot kicks the ball with its head to push it forward. This kick is a very effective one.
– GetUp: performed when the robot detects it has fallen down.
• Closed loop. In this kind of kick, the robot gets information from its sensors while it performs the kick. If a condition is not satisfied during the execution (e.g. the ball is not close enough), the kick is aborted. These kicks can be divided into several phases, which start and stop depending on the sensory information. Currently we have just one such kick:
– GrabAndTurn: this kick makes the robot grab the ball and turn. First, the robot orients towards the target with the ball grabbed; then it pushes the ball. If the robot loses the ball while it turns, the kick is aborted; this situation is detected by the infrared sensor situated between the legs.

Figure 3.6: Directory structure of the Walk class.
The kicks module is part of the CMD object and executes kicks from a kick list. Each kick is stored as a matrix: each column stores the angular position of one joint, and each row is one movement in the sequence making up the complete kick, as shown in Fig. 3.13. The kick execution ends when the movement sequence (the rows) is completed. Furthermore, each kick has a name (a string), an ID (an integer), activation zones (x min, x max, y min, y max) and odometry values. A special case is the parametric kick, which accepts a parameter: the number of degrees the robot must spin when executing a left-turn or right-turn kick while simultaneously grabbing the ball. Parametric kicks finish when the turn is completed or when the robot loses the ball.
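A hedged sketch of such a kick entry; the joint count, field names and helper function are assumptions for illustration, not the actual TeamChaos structures.

#include <string>
#include <vector>

const int NUM_JOINTS = 15;  // assumption: 12 leg joints plus 3 head joints

// One row of the kick matrix: a target angle for every joint.
struct KickStep {
    double jointAngle[NUM_JOINTS];
};

// One kick from the kick list, as described above.
struct KickDescription {
    std::string name;                    // e.g. "LeftLegKick"
    int         id;
    double      xMin, xMax, yMin, yMax;  // activation zone for the ball
    double      odomX, odomY, odomTh;    // odometry produced by the kick
    std::vector<KickStep> sequence;      // the rows of the movement matrix
};

void moveJointsTo(const double target[NUM_JOINTS]);  // hypothetical CMD helper

// Playback: one row is sent to the joints per motion frame; the kick
// ends when the last row of the sequence has been executed.
void playKick(const KickDescription& kick)
{
    for (size_t row = 0; row < kick.sequence.size(); ++row) {
        moveJointsTo(kick.sequence[row].jointAngle);
    }
}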
Figure 3.7: Class diagram of the Cmd class.