SVIn2: An Underwater SLAM System using Sonar, Visual, Inertial, and Depth Sensor

Sharmin Rahman1, Alberto Quattrini Li2, and Ioannis Rekleitis1

Abstract— This paper presents a novel tightly-coupled keyframe-based Simultaneous Localization and Mapping (SLAM) system with loop-closing and relocalization capabilities targeted for the underwater domain. Our previous work, SVIn, augmented the state-of-the-art visual-inertial state estimation package OKVIS to accommodate acoustic data from sonar in a non-linear optimization-based framework. This paper addresses drift and loss of localization – one of the main problems affecting other packages in the underwater domain – by providing the following main contributions: a robust initialization method to refine scale using depth measurements, a fast preprocessing step to enhance image quality, and a real-time loop-closing and relocalization method using bag of words (BoW). An additional contribution is the addition of depth measurements from a pressure sensor to the tightly-coupled optimization formulation. Experimental results on datasets collected with a custom-made underwater sensor suite and an autonomous underwater vehicle from challenging underwater environments with poor visibility demonstrate a performance never achieved before in terms of accuracy and robustness.

Fig. 1. Underwater cave in Ginnie Springs, FL, where data have been collected using an underwater stereo rig.

I. INTRODUCTION

Exploring and mapping underwater environments such as caves, bridges, dams, and shipwrecks are extremely important tasks for the economy, conservation, and scientific discoveries [1]. Currently, most of these efforts are performed by divers who need to take measurements manually using a grid and measuring tape, or using hand-held sensors [2], with the data post-processed afterwards. Autonomous Underwater Vehicles (AUVs) present unique opportunities to automate this process; however, several open problems still need to be addressed for reliable deployments, including robust Simultaneous Localization and Mapping (SLAM), the focus of this paper.

Most underwater navigation algorithms [3], [4], [5], [6], [7] are based on acoustic sensors, such as Doppler velocity log (DVL), Ultra-short Baseline (USBL), and sonar. However, data collection with these sensors is expensive and sometimes not suitable due to the highly unstructured underwater environments.

In recent years, many vision-based state estimation algorithms have been developed using monocular, stereo, or multi-camera systems, mostly for indoor and outdoor environments. Vision is often combined with an Inertial Measurement Unit (IMU) for improved estimation of pose in challenging environments, termed Visual-Inertial Odometry (VIO) [8], [9], [10], [11], [12]. However, the underwater environment – e.g., see Fig. 1 – presents unique challenges to vision-based state estimation. As shown in a previous study [13], it is not straightforward to deploy the available vision-based state estimation packages underwater. In particular, suspended particulates, blurriness, and light and color attenuation result in features that are not as clearly defined as above water. Consequently, results from different vision-based state estimation packages show a significant number of outliers, resulting in inaccurate estimates or even complete tracking loss.

In this paper, we propose SVIn2, a novel SLAM system specifically targeted for underwater environments – e.g., wrecks and underwater caves – and easily adaptable to different sensor configurations: acoustic (mechanical scanning profiling sonar), visual (stereo camera), inertial (linear accelerations and angular velocities), and depth data. This makes our system versatile and applicable on-board different sensor suites and underwater vehicles.

In our recent work, SVIn [14], acoustic, visual, and inertial data are fused together to map different underwater structures by augmenting the visual-inertial state estimation package OKVIS [9]. This improves the trajectory estimate, especially when there is varying visibility underwater, as sonar provides robust information about the presence of obstacles with accurate scale. However, in long trajectories, drift can accumulate, resulting in an erroneous trajectory.

In this paper, we extend our work by including an image enhancement technique targeted to the underwater domain, introducing depth measurements in the optimization process, loop-closure capabilities, and a more robust initialization. These additions enable the proposed approach to robustly and accurately estimate the sensor's trajectory, where every other approach has shown incorrect trajectories or loss of localization.

To validate our proposed approach, first, we assess the performance of the proposed loop-closing method by comparing it to other state-of-the-art systems on the EuRoC micro-aerial vehicle public dataset [15], disabling the fusion of sonar and depth measurements in our system. Second, we test the proposed full system on several different underwater datasets in a diverse set of conditions. More specifically, underwater data – consisting of visual, inertial, depth, and acoustic measurements – have been collected using a custom-made sensor suite [16] at different locales; furthermore, data collected by an Aqua2 underwater vehicle [17] include visual, inertial, and depth measurements. The results on the underwater datasets illustrate the loss of tracking and/or failure to maintain consistent scale for other state-of-the-art systems, while our proposed method maintains correct scale without diverging.

The paper is structured as follows. The next section discusses related work. Section III presents the mathematical formulation of the proposed system and describes the approach developed for image preprocessing, pose initialization, loop-closure, and relocalization. Section IV presents results from a publicly available aerial dataset and a diverse set of challenging underwater environments. We conclude this paper with a discussion on lessons learned and directions of future work.

1 S. Rahman and I. Rekleitis are with the Computer Science and Engineering Department, University of South Carolina, Columbia, SC, USA [email protected], [email protected]
2 A. Quattrini Li is with the Department of Computer Science, Dartmouth College, Hanover, NH, USA [email protected]

II. RELATED WORK

Sonar-based underwater SLAM and navigation systems have been exploited for many years. Folkesson et al. [18] used a blazed array sonar for real-time feature tracking. A feature reacquisition system with a low-cost sonar and navigation sensors was described in [19]. More recently, Sunfish [20] – an underwater SLAM system using a multi-beam sonar, an underwater dead-reckoning system based on a fiber-optic gyroscope (FOG) IMU, acoustic DVL, and pressure-depth sensors – has been developed for autonomous cave exploration. Vision and visual-inertial based SLAM systems have also been developed in [21], [22], [23] for underwater reconstruction and navigation. Corke et al. [24] compared acoustic and visual methods for underwater localization, showing the viability of using visual methods underwater in some scenarios.

The literature presents many vision-based state estimation […] within a window, formulating the problem as a graph optimization problem. For feature-based visual-inertial systems, as in OKVIS [9] and Visual-Inertial ORB-SLAM [8], the optimization function includes the IMU error term and the reprojection error. The frontend tracking mechanism maintains a local map of features in a marginalization window, which are never used again once out of the window. VINS-Mono [10] uses a similar approach, maintaining a minimum number of features for each image; existing features are tracked by the Kanade-Lucas-Tomasi (KLT) sparse optical flow algorithm in a local window. Delmerico and Scaramuzza [32] provided a comprehensive comparison, specifically monitoring resource usage by the different methods. While KLT sparse features allow VINS-Mono to run in real-time on low-cost embedded systems, they often result in tracking failure in challenging environments, e.g., underwater environments with low visibility. In addition, for loop detection, additional features and their descriptors need to be computed for keyframes.

Loop closure – the capability of recognizing a place that was seen before – is an important component to mitigate the drift of the state estimate. FAB-MAP [33], [34] is an appearance-based method to recognize places in a probabilistic framework. ORB-SLAM [27] and its extension with IMU [8] use a bag-of-words (BoW) approach for loop closure and relocalization. VINS-Mono also uses a BoW approach.

Note that all visual-inertial state estimation systems require a proper initialization. VINS-Mono uses a loosely-coupled sensor fusion method to align monocular vision with inertial measurements for estimator initialization. ORB-SLAM with IMU [8] performs initialization by first running monocular SLAM to observe the pose, and then IMU biases are estimated.

Given the modularity of OKVIS for adding new sensors and its robustness in tracking in underwater environments – we fused sonar data in previous work [14] – we extend OKVIS to also include depth estimates, loop-closure capabilities, and a more robust initialization to specifically target underwater environments.

III. PROPOSED METHOD

This section describes the proposed system, SVIn2, depicted in Fig. 2. The full proposed state estimation system can operate
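The tightly-coupled formulation referred to above – an OKVIS-style cost combining reprojection and IMU error terms, extended with sonar and depth residuals – can be sketched schematically as follows. The symbols and weight (information) matrices here are generic placeholders, not the paper's exact notation:

```latex
\min_{\mathbf{x}} \;
\sum_{k} \Bigg(
  \sum_{j \in \mathcal{J}(k)} \big\| \mathbf{e}_{\mathrm{reproj}}^{j,k} \big\|^{2}_{\mathbf{W}_{r}}
  \;+\; \big\| \mathbf{e}_{\mathrm{IMU}}^{k} \big\|^{2}_{\mathbf{W}_{s}}
  \;+\; \big\| \mathbf{e}_{\mathrm{sonar}}^{k} \big\|^{2}_{\mathbf{W}_{t}}
  \;+\; \big\| \mathbf{e}_{\mathrm{depth}}^{k} \big\|^{2}_{\mathbf{W}_{u}}
\Bigg)
```

Each term penalizes the disagreement between a sensor measurement and its prediction from the current state estimate over the keyframes $k$ (and observed landmarks $\mathcal{J}(k)$) in the optimization window.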
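The depth measurements fused into the optimization come from a pressure sensor. The paper excerpt does not spell out the pressure-to-depth conversion, but a standard hydrostatic sketch looks like the following; the density and atmospheric-pressure constants are assumptions, not values from the paper:

```python
# Hydrostatic model: absolute pressure p = p_atm + rho * g * d,
# hence depth d = (p - p_atm) / (rho * g).
RHO_WATER = 997.0   # kg/m^3 -- assumed fresh water (caves/springs); seawater ~1025
G = 9.80665         # m/s^2, standard gravity

def pressure_to_depth(p_pa, p_atm_pa=101325.0, rho=RHO_WATER):
    """Convert an absolute pressure reading (Pa) to depth below the surface (m)."""
    return (p_pa - p_atm_pa) / (rho * G)
```

In practice the surface pressure offset would be calibrated at deployment time rather than fixed to a nominal atmosphere.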
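The BoW-based loop detection discussed above can be illustrated with a minimal retrieval sketch: score a query keyframe's visual-word histogram against all past keyframes and accept the best match above a threshold. The function names, cosine scoring, and threshold value are illustrative, not SVIn2's implementation:

```python
import numpy as np

def bow_similarity(h1, h2):
    """Cosine similarity between two bag-of-words histograms."""
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(h1 @ h2 / denom) if denom else 0.0

def detect_loop(query, past_keyframes, min_score=0.6):
    """Return the index of the best-matching past keyframe, or None.

    `min_score` is an illustrative threshold, not a value from the paper.
    """
    best_idx, best_score = None, min_score
    for i, hist in enumerate(past_keyframes):
        score = bow_similarity(query, hist)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

A real system would additionally verify a candidate geometrically (e.g., by feature matching and pose estimation) before adding a loop-closure constraint.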
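The image-enhancement preprocessing step is described only at a high level above. As one illustration of the idea – not the paper's actual filter – a percentile-based contrast stretch can recover dynamic range in dim, low-contrast underwater frames:

```python
import numpy as np

def stretch_contrast(img, low_pct=1.0, high_pct=99.0):
    """Percentile-based contrast stretch; maps a grayscale frame onto [0, 255].

    Generic enhancement sketch; the percentile cutoffs are illustrative.
    """
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:  # flat image: nothing to stretch
        return img.astype(np.uint8)
    out = (img.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)
```

Percentile cutoffs make the stretch robust to a few extreme pixels (e.g., specular highlights from artificial lights).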
