Analysis of ROS-Based Visual and Lidar Odometry for a Teleoperated Crawler-Type Robot in Indoor Environment

Maxim Sokolov, Oleg Bulichev and Ilya Afanasyev
Institute of Robotics, Innopolis University, Universitetskaya str. 1, 420500 Innopolis, Russia

Keywords: Monocular SLAM, ROS, Visual Odometry, Lidar Odometry, Crawler Robot, ORB-SLAM, LSD-SLAM.

Abstract: This article presents a comparative analysis of ROS-based monocular visual odometry, lidar odometry and ground truth-related path estimation for a crawler-type robot in an indoor environment. We tested these methods with the crawler robot "Engineer", which was teleoperated in a small indoor workspace with an office-style environment. Since the robot's onboard computer cannot run the ROS packages for lidar odometry and visual SLAM simultaneously, we computed lidar odometry online, while video data from the onboard camera was processed offline by the ORB-SLAM and LSD-SLAM algorithms. Because crawler robot motion is accompanied by significant vibrations, we faced problems with these visual SLAM methods, which decreased the accuracy of robot trajectory evaluation or even caused visual odometry failures, in spite of using a video stabilization filter. The comparative analysis showed that lidar odometry is close to the ground truth, whereas visual odometry can demonstrate significant trajectory deviations.

Published in Proceedings of the 14th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2017) - Volume 2, pages 316-321. DOI: 10.5220/0006420603160321. Copyright © 2017 by SCITEPRESS – Science and Technology Publications, Lda.

1 INTRODUCTION

Over the last decade, visual odometry has become a valuable tool for estimating a vehicle's pose, orientation and trajectory through analysis of the corresponding onboard camera images recorded during vehicle motion. However, visual odometry methods are sensitive to illumination conditions and can fail when visual features are insufficient, since a scene requires enough texture for explicit motion to be estimated. This makes it essential to combine visual odometry with other measurements such as wheel odometry, lidar odometry, global positioning system (GPS), inertial measurement units (IMUs), etc. (Scaramuzza and Fraundorfer, 2011), (Zhang and Singh, 2015), (Sarvrood et al., 2016). Nevertheless, the other types of onboard odometry have their own drawbacks. For instance, lidar odometry during robot motion includes fluctuations in the positions of point clouds over time. Therefore, onboard sensor odometry should be provided with ground truth ("ground truth" is defined as a reference tool or a set of measurements that are known to be much more accurate than measurements from the system under investigation).

Since we utilize a single onboard camera, we are interested in monocular visual odometry, which computes both relative vehicle motion and 3D scene structure from 2D video data. Researchers often define three categories of monocular visual odometry: (1) feature-based, (2) appearance-based (also known as direct), and (3) hybrid methods (Scaramuzza and Fraundorfer, 2011). Appearance-based methods estimate the intensity of all image pixels with subsequent direct image-to-image alignment, whereas feature-based methods extract salient and repeatable features that can be tracked through the frames. Finally, hybrid methods combine both previous approaches. We need to emphasize that in our investigation we exploit visual simultaneous localization and mapping (V-SLAM) methods, which are originally focused on calculating a global map and robot path while tracking onboard camera position and orientation. However, since we study robot motion in a small indoor workspace, we extend the results of V-SLAM to visual odometry, neglecting the difference between these definitions and considering the workspace map as a global map. In our research we use both feature-based and appearance-based methods for robot path recovery by choosing two monocular V-SLAM systems recognized as advanced and robust: ORB-SLAM (Mur-Artal et al., 2015) and LSD-SLAM (Engel et al., 2014). Both of these methods have demonstrated good results in tracking, mapping, and camera localization for outdoor applications, but there are some uncertainties about their robustness and feasibility for indoor robot navigation in a typical office-style environment with monotone-painted walls (Buyval et al., 2017). Besides V-SLAM, we also ran ROS-based onboard lidar odometry online, obtaining more accurate information about the robot motion trajectory. Since crawler robot motion suffers from significant shakes, vibrations and sharp turns, deviations appear in the onboard sensor outputs, decreasing the accuracy of trajectory evaluation or even causing visual odometry to fail. Therefore, robot path verification is provided by an external camera-based monitoring system, which is not affected by vibrations.

The main goal of this study is to compute a crawler-type robot trajectory via different odometry methods realized in ROS (Robot Operating System, a set of software libraries and tools for robot applications, www.ros.org), providing: (1) visual odometry acquisition for robot motion within the workspace by using the onboard camera and two V-SLAM packages: feature-based ORB-SLAM and direct LSD-SLAM; (2) comparison of onboard visual and lidar odometry; (3) verification of both visual and lidar odometry against the real robot trajectory measured by the external camera-based monitoring system as the ground truth. To summarize, this paper analyzes and compares different monocular visual odometry methods with lidar odometry and ground truth-based robot path estimation. Since the onboard computer of our robot did not permit a simultaneous run of the ROS packages for lidar odometry and the two visual SLAM systems, we launched ROS-based lidar odometry online and recorded the onboard and external cameras' video in synchronous mode. Then we processed the onboard video offline with ORB-SLAM and LSD-SLAM, comparing visual and lidar odometry with the ground truth-based robot trajectories.

The rest of the paper is organized as follows. Section 2 introduces the system setup, Section 3 presents ROS-based visual and lidar odometry, and Section 4 describes the indoor tests and robot trajectory evaluation. In Section 5 we analyze the visual and lidar odometry estimation. Finally, we conclude in Section 6.

2 SYSTEM SETUP

2.1 Robot System Configuration

The crawler-type robot "Engineer" (Fig. 1) is designed and manufactured by the "Servosila" company (www.servosila.com/en/) to overcome complex terrain while navigating in confined spaces and solving a number of challenging tasks for search and rescue missions. The robot has tracks, flippers, and a robotic arm, which support capabilities for climbing stairs, traversing doorways and narrow passages, negotiating obstacles, leveling itself from sideways/upside-down positions, etc. Moreover, the robotic arm is equipped with a head carrying sensors, controllers and grippers, which allows flexible remote control for lifting heavy loads, opening different doors, plus grasping, pushing or pulling objects. The robot sensor pack can contain a laser scanner, an optical zoom camera, a thermal vision camera, a pair of stereo cameras, an IMU and GPS navigation tools. Detailed information about our vision system is presented in Table 1.

Figure 1: The Servosila "Engineer" crawler robot. Courtesy of the Servosila company.

Since our robot's sensor pack does not have a built-in laser scanner, we mounted the scanning laser rangefinder Hokuyo URG-04LX-UG01 (Hokuyo Automatic Co. 2D lidar, www.hokuyo-aut.jp) on the robot head (see Fig. 2). This laser is often used for autonomous robots, as it has light weight (160 g), low power consumption (2.5 W), a scanning range from 2 cm to 5.6 m that is convenient for indoor applications, a wide field of view of up to 240°, an angular resolution of 0.36°, a refresh rate of 10 Hz, and accuracy up to 3% relative to the measured distance.

Figure 2: The "Engineer" robot head's sensor system.

2.2 Robot Vision System

Table 1: The Servosila "Engineer" robot vision system.

| Type        | Quantity | Direction | Resolution | Palette | Night vision | Zoom | Rate  |
| Mono        | one      | front     | 1280*720   | RGB     | No           | Yes  | 60fps |
| Stereo pair | two      | front     | 640*480    | YUYV    | Yes          | No   | 30fps |
| Mono        | one      | rear      | 640*480    | YUYV    | No           | No   | 30fps |

To validate the onboard visual and lidar data, we used an external ground truth system based on a Basler acA2000-50gc camera (see its characteristics in Table 2), which was hung above the workspace at a height of about 3 m and connected to a PC (Fig. 3). Before the tests, the camera was calibrated against a chessboard with the "camera_calibration" ROS package.

Figure 3: Workspace with the ground truth camera above.

3 ROS-BASED VISUAL AND LIDAR ODOMETRY

3.1 ORB-SLAM

[...] which form ORB. ORB-SLAM (Mur-Artal et al., 2015) is a feature-based real-time SLAM library for monocular, stereo and RGB-D cameras. It is able to build a sparse 3D scene and to compute the onboard camera trajectory, thereby recovering vision-based robot odometry. In heterogeneous environments, ORB-SLAM demonstrates robustness to complex motion clutter, performing wide-baseline loop detection and real-time automatic camera relocalization. To process live monocular streams, a library with a ROS node is used.

ORB-SLAM adapts the main ideas of previous SLAM works, such as Parallel Tracking and Mapping (PTAM, (Klein and Murray, 2007)) and Scale Drift-Aware Large Scale Monocular SLAM (Strasdat et al., 2010). Thus, it uses advanced approaches to localization, loop closing, bundle adjustment, keyframe selection, feature matching, point triangulation, camera localization for every frame, and relocalization.
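The paper does not spell out how the overhead camera's image is turned into metric ground-truth positions, but for a calibrated camera looking straight down from a known height the pinhole model gives the mapping directly. The sketch below illustrates that geometry; the focal length and principal point are illustrative values, not the actual calibration of the Basler acA2000-50gc, and `pixel_to_floor` is our name, not a function from the paper.

```python
# Minimal sketch: mapping an overhead ground-truth camera's pixel
# coordinates to metric floor coordinates with a pinhole model.
# fx, fy, cx, cy below are assumed values for illustration only.

def pixel_to_floor(u, v, fx, fy, cx, cy, height_m):
    """Project pixel (u, v) onto the floor plane for a camera looking
    straight down from height_m metres."""
    x = (u - cx) * height_m / fx
    y = (v - cy) * height_m / fy
    return x, y

fx = fy = 2000.0          # assumed focal length in pixels
cx, cy = 1024.0, 542.0    # assumed principal point
# The principal point maps to the spot directly under the camera.
print(pixel_to_floor(1024.0, 542.0, fx, fy, cx, cy, 3.0))  # (0.0, 0.0)
```

With the camera at about 3 m, a 200-pixel offset at this assumed focal length corresponds to 0.3 m on the floor, which gives a feel for the resolution of such a ground-truth system.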
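The paper mentions a video stabilization filter without specifying it. A common approach is to track per-frame camera shifts, smooth the accumulated motion path, and correct each frame by the difference between the raw and smoothed paths. The sketch below shows that smoothing step for one axis; the per-frame shifts are made-up numbers, and in a real pipeline they would come from feature tracking.

```python
# Sketch of the trajectory-smoothing idea behind simple video
# stabilization (an assumed technique, not the paper's exact filter).

def moving_average(path, radius):
    """Smooth a 1-D trajectory with a centred moving-average window."""
    smoothed = []
    for i in range(len(path)):
        lo, hi = max(0, i - radius), min(len(path), i + radius + 1)
        smoothed.append(sum(path[lo:hi]) / (hi - lo))
    return smoothed

def stabilizing_corrections(dx_per_frame, radius=2):
    """Return the x-correction to apply to each frame."""
    traj, x = [], 0.0
    for dx in dx_per_frame:        # integrate shifts into a path
        x += dx
        traj.append(x)
    smooth = moving_average(traj, radius)
    return [s - t for s, t in zip(smooth, traj)]

shifts = [0.0, 4.0, -3.0, 5.0, -4.0, 2.0]   # jittery per-frame x-shifts
corr = stabilizing_corrections(shifts)
print([round(c, 2) for c in corr])  # [1.67, -1.25, 1.6, -2.6, 1.25, 0.0]
```

Large corrections on the jumpy frames and near-zero corrections where the path is already smooth are exactly the behaviour a stabilizer needs; strong crawler vibrations, however, produce shifts too large to remove without cropping, which is consistent with the residual SLAM failures the paper reports.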
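Comparing monocular V-SLAM output against lidar odometry or ground truth requires first rigidly aligning the time-synchronized paths, since each estimator reports poses in its own frame. A standard metric is the root-mean-square error after optimal 2-D alignment (centroid shift plus closed-form rotation, the 2-D Kabsch solution). This is a generic sketch of that computation, not the paper's evaluation code; `align_rmse` is our name.

```python
import math

# Rigidly align two time-synchronized 2-D paths (lists of (x, y)
# tuples of equal length), then report the RMS position error.

def align_rmse(est, ref):
    n = len(est)
    ex = sum(p[0] for p in est) / n; ey = sum(p[1] for p in est) / n
    rx = sum(p[0] for p in ref) / n; ry = sum(p[1] for p in ref) / n
    # Closed-form optimal rotation for centred 2-D point sets.
    s_cos = s_sin = 0.0
    for (ax, ay), (bx, by) in zip(est, ref):
        ax -= ex; ay -= ey; bx -= rx; by -= ry
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    th = math.atan2(s_sin, s_cos)
    c, s = math.cos(th), math.sin(th)
    sq = 0.0
    for (ax, ay), (bx, by) in zip(est, ref):
        ax -= ex; ay -= ey; bx -= rx; by -= ry
        dx = c * ax - s * ay - bx
        dy = s * ax + c * ay - by
        sq += dx * dx + dy * dy
    return math.sqrt(sq / n)

# A path that is merely rotated and shifted aligns perfectly: RMSE ~ 0.
ref = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
est = [(5.0 - y, 3.0 + x) for x, y in ref]   # rotate 90 deg, translate
print(round(align_rmse(est, ref), 6))  # 0.0
```

Note that monocular SLAM trajectories are also scale-ambiguous, so a full comparison would additionally estimate a similarity scale before computing the residual; the rigid version above suffices when scale has already been fixed, e.g. from the known workspace size.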
