
Sensor-Based Behavior Control for an Autonomous Underwater Vehicle

Gregory Dudek, Philippe Giguere and Junaed Sattar
Centre for Intelligent Machines, McGill University
{dudek, philg, junaed}@cim.mcgill.ca
http://www.cim.mcgill.ca/~mrl

Abstract. In this paper, we present behaviors and interaction modes for a small underwater robot. In particular, we address several challenging issues arising from the underwater environment: visual processing, interactive communication with an underwater crew, and finally orientation and motion of the vehicle through a hovering mode. The visual processing consists of target tracking using various techniques (color blob, color histogram and mean shift). Underwater communication is achieved through printed cards bearing fiducial markers (ARTag). Finally, the hovering "gait" developed for this vehicle relies on the planned motion of six flippers to generate the appropriate forces.

1 Motivation and Problem Statement

This paper considers non-contact guidance for an amphibious robot, based on visual cues. In particular, this work is motivated by the challenges of communication underwater. Teleoperation for underwater vehicles is complicated by several factors: wireless communication is problematic since conventional radio communications are infeasible; the use of a tether is awkward on land and even worse underwater, in the face of buoyancy issues and 6-DOF motion; and other communication mechanisms have their own deficiencies. Notably, even human scuba divers commonly resort to simple sign language and similar short-range visual communications for underwater task coordination. In a similar manner, our underwater vehicle is being developed to combine behaviors and operating modes based on visual cues.

Our target application is the visual surveillance of reef environments to assess the health and behavior of marine life. This task, like many related inspection tasks, can be decomposed into two canonical behaviors: transiting between way points, and station keeping at a fixed location. In our application, each of these is modulated by visual cues from a diver or in response to environmental stimuli (such as recognized visual landmarks). These cues take two forms: symbolic tokens used to select specific behavior classes or tune the behaviors, and target motions used for visual servoing with respect to either a diver or an environmental stimulus. Motion between the way points is performed by executing one of several swimming gaits. Station keeping, however, entails the use of a hovering gait which is both unique and challenging.

Fig. 1. The AQUA robot being deployed during open-sea experiments (left) and operated in untethered mode (right).

Our vehicle, a descendant of the RHex hexapod robot [1], has well-developed "kicking" gaits for forward locomotion that permit limited amounts of pitch, roll and yaw. These gaits are based on simple oscillatory motions of the flippers with various phase and amplitude offsets, akin to the standard up-and-down kick of a human swimmer. In this standard mode of motion, however, rotational motion is coupled to forward motion; the robot can only turn if it is moving forward. Furthermore, thrust can only be "instantaneously" applied in the forward direction (or backward if the flipper orientation is reversed).
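As a minimal illustration of such a kicking gait, the sketch below generates per-flipper angle commands from a shared sine oscillation with individual amplitude and phase offsets. The six-flipper layout, frequency and parameter values are assumptions made for illustration only, not the robot's actual gait tables.

```python
import math

def flipper_angles(t, freq_hz, amplitudes, phases, offsets):
    """Oscillatory 'kicking' gait: each flipper follows a shared sine
    oscillation with its own amplitude, phase and mean-angle offset.
    All parameter values used here are illustrative assumptions."""
    omega = 2.0 * math.pi * freq_hz
    return [a * math.sin(omega * t + p) + o
            for a, p, o in zip(amplitudes, phases, offsets)]

# Hypothetical straight-swimming configuration for six flippers:
# identical amplitude, alternating phase between left and right sides.
amps    = [0.6] * 6              # oscillation amplitude (radians)
phases  = [0.0, math.pi] * 3     # left/right phase alternation
offsets = [0.0] * 6              # no mean deflection, hence no turning

t = 0.25                         # time (seconds)
print(flipper_angles(t, freq_hz=1.0, amplitudes=amps,
                     phases=phases, offsets=offsets))
```

Biasing the offsets or amplitudes between flipper groups would then introduce yaw or pitch, which is why rotation in this mode remains coupled to the forward thrust the oscillation produces.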
The hovering gaits were conceived with several requirements in mind. First, those gaits had to be able to move the robot in five degrees of freedom: pitch, roll, yaw, heave and surge. They also had to be able to combine several commands at the same time, for example a simultaneous pitch and heave command. Furthermore, to ensure good controllability, the reaction time of the robot had to be kept to a minimum, particularly in the case of command reversal. Finally, cross-talk between the degrees of freedom had to be minimized: in order to hover in place effectively, the robot needs to be able to apply rotational moments in any direction with very limited application of net forward thrust. The intrinsically non-holonomic behavior of the flippers presented a significant challenge in the design of the hovering gaits.

2 Technical Approach

In order to adapt to natural environments and compensate for unforeseen forces in the environment, a key aspect of our work is the sensing of environmental conditions and the recognition of the current context. To do this we use a combination of internal sensors akin to biological proprioception as well as computer vision. This adaptation process therefore falls into two distinct categories: visual processing and interpretation, and gait synthesis, selection and control. The visual processing is further subdivided into learning-based visual servoing, and symbol detection and interpretation (i.e. a visual language akin to a simplified sign language).

2.1 Visual Processing and Interpretation

A primary interaction mechanism for controlling the robot is via visual servoing using a chromatic target. By detecting a target object of a pre-specified color, the robot is able to track the target in image space up to a practical operational maximum distance of approximately two meters [6]. We currently use color blob, color histogram [7] and mean-shift [2] based tracking algorithms for target tracking. A proportional-integral-derivative controller takes the tracker outputs and generates yaw and pitch (but not roll) commands which determine the robot trajectory and make the target-following behavior possible. For the surveillance of a motionless target, the hovering gait of the robot is used.

Fig. 2. ARTag markers.

The servoing mechanism is configurable through a number of parameters that include target color properties, robot speed, gait selection and yaw/pitch gains. We use symbolic markers provided by the ARTag toolkit [3] to visually communicate with the robot and effect changes in robot behavior. An example of an ARTag marker is shown in Figure 2. These markers include both symbolic and geometric content, and are constructed using an error-correcting code to enhance robustness. Switching in and out of the hovering gait, for example, is performed by detecting a particular ARTag marker.

2.2 Gait Control Overview

The gait design and control issues we consider are for a swimming robot that uses paddles (i.e. legs) for locomotion underwater. By using legs for locomotion, our vehicle is able to swim underwater and walk on land. Many underwater tasks entail holding a fixed position while some task, either surveillance or manipulation, is accomplished. Our robot is able to use its legs to land on the sea bottom with limited disturbance and perform certain types of surveillance tasks. A large class of activities is facilitated, however, by being able to hold a fixed position at middle depths, for example to monitor sea life on a coral reef, a key application of our robot.
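To make the hovering requirements above concrete, here is a minimal sketch of one plausible way to blend simultaneous five-DOF commands into per-flipper thrust demands using the flippers' lever arms about the body center. The geometry, sign conventions and saturation limits are hypothetical assumptions; this is not AQUA's actual hovering gait.

```python
import numpy as np

# Hypothetical flipper lever arms (meters) in the body frame: three
# flippers per side, X positive forward, Y positive to port.
X = np.array([+0.3, 0.0, -0.3, +0.3, 0.0, -0.3])   # fore/aft positions
Y = np.array([+0.2, +0.2, +0.2, -0.2, -0.2, -0.2]) # left/right positions

def hover_mix(pitch, roll, yaw, heave, surge):
    """Blend five normalized DOF commands (each in [-1, 1]) into per-flipper
    vertical and fore-aft thrust demands. Vertical thrust differentials
    across X and Y produce pitch and roll moments; fore-aft differentials
    across Y produce yaw. All coefficients are illustrative."""
    vertical = heave + pitch * X + roll * Y   # shared heave plus moments
    fore_aft = surge + yaw * Y                # shared surge plus yaw moment
    return np.clip(vertical, -1.0, 1.0), np.clip(fore_aft, -1.0, 1.0)

# Example: a combined pitch-up and heave command, as required above.
print(hover_mix(pitch=0.3, roll=0.0, yaw=0.0, heave=0.5, surge=0.0))
```

Because the assumed lever arms are symmetric about the body center, a pure yaw command sums to zero net forward thrust and a pure pitch command to zero net heave, which is one way to keep the cross-talk between degrees of freedom small, as the requirements demand.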
3 Methodology and Results

Our experimental methodology comprises three sequential phases: numerical simulation and validation on recorded video data, pool trials in a controlled environment, and open-water tests on a live coral reef. The latter scenario is the real test of performance, but tends to provide qualitative results and video data. The deployment of the robot at sea can take place from shore or from a boat.

3.1 Visual Servoing and Tag-based Control

The integrated monocular vision system in the AQUA robot currently serves two purposes, namely visual servoing and tag-based robot control. The visual servoing behavior is used to track and follow an object of interest (e.g. a fish, a diver, etc.) underwater. The tag-based navigation mode is based on the ARTag toolkit [3], and is used to send basic motion control commands to the robot. Both these modes run in parallel; the visual servoing mode can be preempted by motion control commands sent by the tag-based motion control subsystem. We discuss both these systems briefly in the two subsections that follow.

Visual Servoing

The visual servoing subsystem is made up of two functional components: the visual tracker and the proportional-integral-derivative (PID) controller. The visual tracker tracks objects of interest, or targets, based on the color features of the targets. The tracking system localizes a target in image space, in Cartesian coordinates, with location (0,0) being the center of the image frame. To track an object, we use its color properties; both single-color and multicolor objects can be tracked. Currently, our system comprises two different approaches to visual tracking. The first is a naive, brute-force approach to target localization. An example of this is the color segmentation tracker, which uses simple threshold-based color segmentation to detect the target in the image frame. We utilize statistical approaches to visual tracking as well. The histogram tracker and the mean-shift tracker use statistical similarity measures between color histograms to detect probable target locations between successive image frames. These trackers differ in their methods of locating the target in consecutive frames: the histogram tracker performs a global search over the entire image to locate the target, whereas the mean-shift tracker uses the mean-shift vector to detect the shift in target location and thereby reacquire the target in the next frames.

Fig. 3. Visual servoing architecture in AQUA.

We use a PID controller that takes as input the image coordinates (and size) of the target in image space, and emits yaw and pitch commands (and optionally speed) as outputs that are sent to an auxiliary gait-control computer to modify the robot's behavior and thus its pose. The controller also embodies a low-pass filter, smoothing out random changes in the yaw and pitch commands that are sent from the visual tracker.
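As a sketch of this control path, the code below implements a generic PID controller with exponential low-pass filtering of the tracker error, mapping the target's image-space coordinates to yaw and pitch commands. The gains, filter constant, frame period and saturation limit are illustrative assumptions, not the values tuned on the robot.

```python
class FilteredPID:
    """Generic PID controller with exponential low-pass filtering of the
    input error. Gains and filter coefficient are hypothetical values
    for illustration, not the robot's tuned parameters."""

    def __init__(self, kp, ki, kd, alpha=0.3, limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.alpha = alpha      # low-pass coefficient in (0, 1]
        self.limit = limit      # saturation on the output command
        self.filtered = 0.0
        self.integral = 0.0
        self.prev = 0.0

    def update(self, error, dt):
        # Low-pass filter the raw tracker error to suppress jitter.
        self.filtered += self.alpha * (error - self.filtered)
        self.integral += self.filtered * dt
        derivative = (self.filtered - self.prev) / dt
        self.prev = self.filtered
        out = (self.kp * self.filtered
               + self.ki * self.integral
               + self.kd * derivative)
        return max(-self.limit, min(self.limit, out))

# Image-space target position drives yaw (horizontal) and pitch (vertical);
# (0, 0) is the center of the image frame, as described above.
yaw_pid   = FilteredPID(kp=0.8, ki=0.05, kd=0.1)
pitch_pid = FilteredPID(kp=0.8, ki=0.05, kd=0.1)

target_x, target_y = 0.15, -0.05   # normalized image coordinates
dt = 1.0 / 15.0                    # assumed tracker frame period (s)
yaw_cmd   = yaw_pid.update(target_x, dt)
pitch_cmd = pitch_pid.update(target_y, dt)
```

The target size reported by the tracker could drive an analogous speed command, keeping the robot at a roughly constant following distance.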