Towards a Generic Diver-Following Algorithm: Balancing Robustness and Efficiency in Deep Visual Detection

Md Jahidul Islam1, Michael Fulton2 and Junaed Sattar3∗

Preprint (v1) of ICRA 2019 submission. arXiv:1809.06849v1 [cs.RO] 18 Sep 2018

∗The authors are with the Interactive Robotics and Vision Laboratory, Department of Computer Science and Engineering, University of Minnesota Twin Cities, US. {1islam034, 2fulto081, 3junaed}@umn.edu

Abstract

This paper explores the design and development of a class of robust diver-following algorithms for autonomous underwater robots. By considering the operational challenges for underwater visual tracking in diverse real-world settings, we formulate a set of desired features of a generic diver-following algorithm. We attempt to accommodate these features and maximize general tracking performance by exploiting state-of-the-art deep object detection models. We fine-tune the building blocks of these models with the goal of balancing the trade-off between robustness and efficiency in an on-board setting under real-time constraints. Subsequently, we design an architecturally simple Convolutional Neural Network (CNN)-based diver-detection model that is much faster than the state-of-the-art deep models yet provides comparable detection performance. In addition, we validate the performance and effectiveness of the proposed diver-following modules through a number of field experiments in closed-water and open-water environments.

1 INTRODUCTION

Underwater applications of autonomous underwater robots range from inspection and surveillance to data collection and mapping tasks. Such missions often require a team of divers and robots to collaborate for successful completion. Without sacrificing the generality of such applications, we can consider a single-robot setting where a human diver leads the task and interacts with the robot, which follows the diver at certain stages of the mission. Such situations arise in numerous important applications such as submarine pipeline and shipwreck inspection, marine life and seabed monitoring, and many other underwater exploration activities [1]. Although following the diver is not the primary objective in these applications, it significantly simplifies the operational loop and reduces the associated overhead by eliminating the necessity of tele-operation.

Robust underwater visual perception is generally challenging due to marine artifacts [2] such as poor visibility, variations in illumination, suspended particles, etc. Additionally, color distortion and the scarcity of salient visual features make it harder to robustly detect and accurately follow a diver in arbitrary directions. Moreover, divers' appearances to the robot vary greatly based on their swimming styles, choices of wearables, and relative orientations with respect to the robot. These problems are exacerbated underwater since both the robot and the diver are suspended in a six-degrees-of-freedom (6DOF) environment. Consequently, classical model-based detection algorithms fail to achieve good generalization performance [3, 4]. On the other hand, model-free algorithms incur significant target drift [5] under such noisy conditions.

[Figure 1: Snapshots of a set of diverse first-person views of the robot from different diver-following scenarios. Notice the variation in appearances of the divers and possible noise or disturbances in the scene over different scenarios. The rectangles and text (e.g., "diver: 97%") overlaid on the figures are the outputs generated by our model at test time.]

In this paper, we address the inherent difficulties of underwater visual detection by designing a class of diver-following algorithms that are: a) invariant to color (of divers' bodies/wearables [6]), b) invariant to divers' relative motion and orientation, c) robust to noise and image distortions [4], and d) reasonably efficient for real-time deployment. We exploit the current state-of-the-art object detectors to accommodate these features and maximize the generalization performance for diver detection using RGB images as input. Specifically, we use the following four models: Faster R-CNN [7] with Inception V2 [8] as a feature extractor, Single Shot MultiBox Detector (SSD) [9] with MobileNet V2 [10, 11] as a feature extractor, You Only Look Once (YOLO) V2 [12], and Tiny YOLO [13]. These are the fastest (in terms of processing time of a single frame) among the family of current state-of-the-art models [14] for general object detection. We train these models using a rigorously prepared dataset containing sufficient training instances to capture the variabilities of underwater visual sensing.
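As a point of reference, the following is a minimal sketch of how a single RGB frame is typically pushed through one such pre-trained detector. It assumes a TensorFlow 1.x frozen inference graph exported in the style of the TensorFlow Object Detection API; the graph path, class label, and confidence threshold are illustrative assumptions, not the authors' released code.

```python
# Minimal inference sketch for a frozen detection graph (TensorFlow 1.x assumed).
import numpy as np
import tensorflow as tf
import cv2

GRAPH_PATH = "ssd_mobilenet_v2_diver/frozen_inference_graph.pb"  # hypothetical path
CONF_THRESHOLD = 0.6  # assumed confidence cutoff

# Load the frozen graph once at startup.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    # The detectors consume RGB frames; OpenCV reads BGR, so convert first.
    frame = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
    boxes, scores = sess.run(
        ["detection_boxes:0", "detection_scores:0"],
        feed_dict={"image_tensor:0": frame[np.newaxis, ...]})
    h, w = frame.shape[:2]
    for box, score in zip(boxes[0], scores[0]):
        if score < CONF_THRESHOLD:
            continue
        ymin, xmin, ymax, xmax = box  # normalized [0, 1] coordinates
        print("diver: %.2f" % score,
              int(xmin * w), int(ymin * h), int(xmax * w), int(ymax * h))
```

In a diver-following loop, the centroid and apparent size of such a bounding box are the natural quantities for a visual servoing controller to consume in order to keep the diver centered in the robot's view.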
Subsequently, we design an architecturally simple (i.e., sparse) deep model that is computationally much faster than the state-of-the-art diver detection models. The faster running time ensures real-time tracking performance with limited on-board computational resources. We also demonstrate its effectiveness in terms of detection performance relative to the state-of-the-art models through extensive quantitative experiments. We then validate these results with a series of field experiments. Based on our design, implementation, and experimental findings, we make the following contributions in this paper:

• We attempt to overcome the limitations of existing model-based diver-following algorithms using state-of-the-art deep object detection models. These models are trained on comprehensive datasets to deal with the challenges involved in underwater visual perception.

• In addition, we design a CNN-based diver detection model to balance the trade-offs between robustness and efficiency. The proposed model provides considerably faster running time, in addition to achieving detection performance comparable to the state-of-the-art models.

• Finally, we validate the effectiveness of the proposed diver-following methodologies through extensive experimental evaluations. A number of field experiments are performed both in open-water and closed-water (i.e., oceans and pools, respectively) environments in order to demonstrate their real-time tracking performance.

Furthermore, we demonstrate that the proposed models can be extended to a wide range of other applications such as human-robot communication [15], robot convoying [5], cooperative localization [16, 17], etc. The state-of-the-art detection performance, fast running time, and architectural portability are the key features of these models, which make them suitable for underwater human-robot collaborative applications.

2 RELATED WORK

A categorization of the vision-based diver-following algorithms is illustrated in Figure 2. Based on algorithmic usage of the input features, they can be grouped into feature-based tracking, feature-based learning, and feature or representation learning algorithms. On the other hand, they can be categorized into model-based and model-free techniques, based on whether a prior model of the target is used.

[Figure 2: An algorithmic categorization of the visual perception techniques used for diver-following [18]. From the model perspective, algorithms are either model-based or model-free; from the feature perspective, they rely on feature-based tracking, feature-based learning, or feature/representation learning.]

2.1 Model Perspective

In model-free algorithms, no prior information about the target (e.g., the diver's motion model, color of wearables, etc.) is used for tracking. These algorithms are initialized arbitrarily and then iteratively learn to track the target in a semi-supervised fashion [19]. TLD ("tracking-learning-detection") trackers [20] and optical flow-based trackers [21] are the most commonly used model-free algorithms for general object tracking. The TLD trackers train a detector using positive and negative feedback obtained from image-based features. In contrast, the optical flow-based methods estimate the motion of each pixel by solving the Horn and Schunck formulation [22]. Although model-free techniques work reasonably well in practice for general object tracking, they often suffer from tracking drift caused by the accumulation of detection errors over time.
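For concreteness, below is a compact numpy/OpenCV sketch of the classic Horn and Schunck iteration referenced above. The derivative filters, smoothness weight alpha, and iteration count are simplified, illustrative choices rather than the exact formulation of [22].

```python
# Compact sketch of Horn-Schunck dense optical flow (illustrative parameters).
import numpy as np
import cv2

def horn_schunck(prev_gray, next_gray, alpha=1.0, n_iters=100):
    """Estimate a dense flow field (u, v) between two grayscale frames."""
    I1 = prev_gray.astype(np.float32) / 255.0
    I2 = next_gray.astype(np.float32) / 255.0
    # Spatial and temporal derivatives (simplified: Sobel + frame difference).
    Ix = cv2.Sobel(I1, cv2.CV_32F, 1, 0, ksize=3)
    Iy = cv2.Sobel(I1, cv2.CV_32F, 0, 1, ksize=3)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    # Weighted neighborhood average used by the original formulation.
    kernel = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], np.float32) / 12.0
    for _ in range(n_iters):
        u_avg = cv2.filter2D(u, -1, kernel)
        v_avg = cv2.filter2D(v, -1, kernel)
        # Jacobi update minimizing brightness-constancy + smoothness energy.
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v
```

Ready-made model-free trackers are also available off the shelf; for instance, recent opencv-contrib builds expose a TLD tracker via cv2.legacy.TrackerTLD_create().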
On the other hand, model-based algorithms use prior knowledge about the divers' motion and appearances in order to formulate a model in the input feature-space. Iterative search methods are then applied to find the target model in the feature-space [3]. Machine learning techniques are also widely used to learn diver-specific features [18, 23] and to predict the target location in the feature-space. The performance of model-based algorithms depends on the comprehensiveness of the model descriptors and the underlying input feature-space. Hence, they require careful design and thorough training processes to ensure good tracking performance.

2.2 Feature Perspective

Simple feature-based trackers [2, 24] are often practical choices for autonomous diver-following due to their operational simplicity and computational efficiency. For instance, color-based trackers perform binary image thresholding based on the color of a diver's flippers or suit. The thresholded binary image is then refined to track the centroid of the target (diver) using algorithms such as mean-shift, particle filters, etc. (a sketch of this pipeline follows below). Optical flow-based methods can also be utilized to track divers' motion in the spatio-temporal volume [18, 22].

Since color distortions and low visibility issues are common in underwater settings, frequency-domain signatures of divers' periodic swimming motion are often used for more reliable detection.
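As referenced above, here is a minimal sketch of the color-thresholding and mean-shift tracking pipeline. The HSV range, initial window, and video source are assumed values for illustration (e.g., a diver wearing brightly colored flippers), not parameters from the paper.

```python
# Minimal sketch: color thresholding + mean-shift centroid tracking.
import numpy as np
import cv2

LOWER = np.array([5, 120, 120], np.uint8)    # assumed lower HSV bound
UPPER = np.array([25, 255, 255], np.uint8)   # assumed upper HSV bound
track_window = (200, 150, 80, 60)            # initial (x, y, w, h) guess

cap = cv2.VideoCapture("dive_sequence.mp4")  # hypothetical input video
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)          # binary thresholding
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,  # suppress speckle noise
                            np.ones((5, 5), np.uint8))
    # Shift the window toward the densest thresholded region (the target).
    _, track_window = cv2.meanShift(mask, track_window, term)
    x, y, w, h = track_window
    cx, cy = x + w // 2, y + h // 2  # target centroid fed to the controller
```

The appeal of this family of trackers is exactly what the paragraph above notes: the whole loop is a few fixed-cost image operations per frame, which is trivially real-time on embedded hardware, at the cost of brittleness to color distortion.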
