Minimalistic and Learning-Enabled Navigation Algorithms

MINIMALISTIC AND LEARNING-ENABLED NAVIGATION ALGORITHMS FOR UNMANNED GROUND VEHICLES

by

Tixiao Shan

A DISSERTATION

Submitted to the Faculty of the Stevens Institute of Technology in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Tixiao Shan, Candidate

ADVISORY COMMITTEE

Brendan Englot, Chairman
Sven Esche
Steven Hoffenson
Long Wang

STEVENS INSTITUTE OF TECHNOLOGY
Castle Point on Hudson
Hoboken, NJ 07030

2019

©2019, Tixiao Shan. All rights reserved.

ABSTRACT

Limited by the on-board computational resources of most unmanned mobile robots, autonomous navigation becomes a challenging task, as it requires real-time planning, robust localization, and accurate mapping simultaneously. In this dissertation, we present several minimalistic and learning-enabled navigation algorithms that achieve real-time performance on the hardware of a lidar-equipped unmanned ground vehicle (UGV). First, we introduce three sampling-based multi-objective path planning algorithms designed for relevant tasks such as planning under risk and planning under uncertainty. These planning algorithms are suitable for both single-query and multi-query planning problems. Second, we present a lightweight and ground-optimized lidar odometry and mapping system (LeGO-LOAM) that provides real-time ego-estimation and is applicable in a wide range of indoor and outdoor environments. The system is lightweight in that it is able to run in real time on an embedded system. It is also ground-optimized in that it leverages the presence of the ground plane for ego-estimation (a minimal sketch of this ground-separation idea appears after the front matter below). Third, we propose two learning-aided algorithms, Bayesian generalized kernel (BGK) terrain mapping and lidar super-resolution, to address the sparse-data problem encountered during mapping. BGK terrain mapping is a back-end approach that infers the traversability of gaps in a robot's terrain map. Lidar super-resolution is a front-end approach that uses deep learning to enhance range sensor resolution directly.

The motivating application of this work has been real-time autonomous navigation of GPS-denied ground robots in complex indoor-outdoor environments. Along with making optimal decisions for path planning, knowing the robot's position during operation and reflecting the surrounding world accurately in the map are also essential. The presence of slopes, vegetation, curbs, and moving obstacles poses a challenging navigation problem. Deploying the proposed algorithms on a ground robot, we give results for autonomous navigation in a variety of unstructured environments, where our UGV achieves high-performance path planning, low-drift localization, and accurate mapping.

Author: Tixiao Shan
Advisor: Brendan Englot
Date: August 16, 2019
Department: Mechanical Engineering
Degree: DOCTOR OF PHILOSOPHY

To my family.

Acknowledgments

I would like to thank my advisor, Dr. Brendan Englot, for all the support and guidance during my time in the Robust Field Autonomy Lab (RFAL). Dr. Englot is a wonderful mentor, teacher, leader, and friend to me. I have had a fantastic time working under his supervision. Dr. Englot and I would hold a one-on-one meeting every week to discuss research-related problems. He always shows a great positive attitude, which is not only encouraging but also inspiring. Dr. Englot also created a relaxing environment for us; I always felt comfortable while working in the lab.
Besides that, Dr. Englot always tries his best to satisfy all our needs, such as lab supplies and research equipment. I want to thank Dr. Sven Esche, Dr. Steven Hoffenson, and Dr. Long Wang for serving as my committee members; I also want to thank them for their brilliant comments and suggestions, and for making my defense a memorable and enjoyable moment. I also would like to thank all my labmates in the RFAL: Shi Bai, Fanfei Chen, Kevin Doherty, John Martin, Jake McConnell, Sumukh Patil, Erik Pearson, Paul Szenher, and Jinkun Wang. Without their help and advice, much of my work would have been impossible. It has been a great pleasure working with them. I want to thank the visiting scholars of our lab, Dong Cui and Dengwei Gao, for being good friends during their stay. I want to thank Chenhui Zhao for being the most entertaining lunch buddy. I want to thank Dr. Souran Manoochehri for his support of my application for the Fernando L. Fernandez Robotics and Automation Fellowship. I want to thank Dr. Mishah Salman for all his advice regarding my research career; it has been a great pleasure to be his teaching assistant. I want to thank Ton Duong and Jennifer Field for their help with my teaching assistantship duties. Last but not least, I want to thank all the friends, teachers, and staff I met at Stevens. Finally, I want to thank my parents for their constant support while I pursued my doctoral degree. Especially, I want to thank Mr. Jeffrey Shore for being the most supportive companion. None of the above would have happened without you.

Table of Contents

Abstract
Dedication
Acknowledgments
List of Tables
List of Figures

1 Introduction
  1.1 Motivation and Problem Statement
  1.2 Overview and Contributions

2 Background
  2.1 Multi-Objective Motion Planning
    2.1.1 Weighted Sum Method
    2.1.2 Constraint-based Methods
    2.1.3 Lexicographic Method
    2.1.4 Applications
  2.2 Lidar-based Localization
  2.3 Traversability Mapping
  2.4 Lidar Super-resolution

3 Efficient Multi-Objective Planning
  3.1 Minimum-Risk Planning
    3.1.1 Problem Definition
    3.1.2 Algorithm Description
    3.1.3 Algorithm Analysis
    3.1.4 Experiments
    3.1.5 Conclusions
  3.2 Min-Max Uncertainty Planning
    3.2.1 Problem Definition
    3.2.2 Algorithm Description
    3.2.3 Algorithm Analysis
    3.2.4 Experiments
    3.2.5 Conclusions
  3.3 Belief Roadmap Search
    3.3.1 Problem Definition
    3.3.2 The Belief Roadmap Search
    3.3.3 Algorithm Analysis
    3.3.4 Experiments
    3.3.5 Conclusions

4 Lightweight Lidar Odometry
  4.1 Introduction
  4.2 LeGO-LOAM
    4.2.1 Segmentation
    4.2.2 Feature Extraction
    4.2.3 Lidar Odometry
    4.2.4 Lidar Mapping
  4.3 Experiments
    4.3.1 Small-Scale UGV Test
    4.3.2 Large-Scale UGV Tests
    4.3.3 Benchmarking Results
    4.3.4 Loop Closure Test using KITTI Dataset
  4.4 Conclusions

5 Learning-Enhanced Perception
  5.1 BGK Inference for Terrain Traversability Mapping
    5.1.1 Introduction
    5.1.2 Technical Approach
    5.1.3 Experiments
    5.1.4 Conclusions
  5.2 Lidar Super-resolution
    5.2.1 Introduction
    5.2.2 Technical Approach
    5.2.3 Experiments
    5.2.4 Conclusions

6 Conclusions and Future Work
  6.1 Conclusions
  6.2 Future Work
    6.2.1 Deep Learning-Accelerated Planning Under Uncertainty
    6.2.2 Aggressive Navigation for UGVs

Bibliography

Vita

List of Tables

3.1 Quantitative results for MM-RRT* and additive approach
4.1 Information of three outdoor datasets
4.2 Average feature content of a scan after feature extraction
4.3 Iteration number comparison for LeGO-LOAM
4.4 Runtime of modules for processing one scan (ms)
4.5 Relative pose estimation error when returning to start
5.1 Traversability mapping quantitative results of two datasets
5.2 Quantitative results for various super-resolution methods

List of Figures

1.1 Clearpath Jackal, an unmanned ground vehicle
2.1 RRT* tree demonstration
2.2 T-RRT tree demonstration
2.3 BRM and FIRM demonstration
2.4 Optimal substructure planning example
2.5 Collar line registration process
2.6 Segmented point cloud for loop closure detection
2.7 Edge and planar features from LOAM
2.8 Traversability values calculated using an elevation map
2.9 Traversability assessment using point cloud
3.1 MM-RRT* trees under different values of threshold
3.2 A comparison of the RRT*, T-RRT* and MR-RRT* algorithms
3.3 MR-RRT* tree in a terrain map
3.4 Paths produced by RRT*, T-RRT* and MR-RRT*
3.5 Mean accumulated distance cost and mean risk cost 1
3.6 Mean accumulated distance cost and mean risk cost 2
3.7 MM-RRT* rewiring example
3.8 Tree comparison of RRT* and MM-RRT*
3.9 Tree comparison of RRT* and MM-RRT* in a hallway
3.10 Benchmarking results of MM-RRT*
3.11 Tree comparison in Willow Garage map
3.12 Real-world test using MM-RRT*
3.13 Optimal substructure planning example 1
3.14 Optimal substructure planning example 2
3.15 Breadth-first search example
3.16 Search process of breadth-first search
3.17 Search process of BRMS
3.18 Paths returned from BFS and BRMS
3.19 Dubins paths planned by BFS and BRMS
3.20 Benchmarking results of BFS and BRMS
3.21 UAV planning example using BFS and BRMS
3.22 Real-world planning example using BRMS
4.1 System overview of LeGO-LOAM
4.2 Demonstration of point cloud ground separation
4.3 Demonstration of point cloud segmentation in an urban environment
4.4 Demonstration of point cloud segmentation in a noisy environment
4.5 Demonstration of feature extraction
4.6 Lidar odometry module overview
4.7 Lidar mapping feature matching
4.8 Demonstration of feature extraction of LOAM and LeGO-LOAM
4.9 Local maps from both LOAM and LeGO-LOAM on rough terrain
4.10 Global maps from both LOAM and LeGO-LOAM on rough terrain
4.11 Maps of LeGO-LOAM using two outdoor datasets
4.12 Map comparison between LOAM and LeGO-LOAM
4.13 Experiment 3 LeGO-LOAM mapping result
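To make the ground-optimized idea referenced in the abstract concrete, below is a minimal Python sketch of range-image ground separation in the spirit of LeGO-LOAM's segmentation step. It is an illustrative approximation rather than the dissertation's implementation: the function name, array layout, row count, and 10-degree flatness threshold are assumptions chosen for clarity.

```python
import numpy as np

def label_ground(points, num_lower_rows=8, flat_thresh_deg=10.0):
    """Mark likely ground cells in an organized lidar range image.

    points: (rows, cols, 3) array of Cartesian points, with row 0 the
    lowest-pointing beam. Two vertically adjacent points are labeled
    ground when the segment connecting them is nearly horizontal,
    echoing the ground-separation idea used by LeGO-LOAM.
    """
    rows, cols, _ = points.shape
    ground = np.zeros((rows, cols), dtype=bool)
    for r in range(min(num_lower_rows, rows - 1)):
        d = points[r + 1] - points[r]           # (cols, 3) differences
        horiz = np.hypot(d[:, 0], d[:, 1])      # horizontal separation
        angle = np.degrees(np.arctan2(d[:, 2], horiz + 1e-9))
        flat = np.abs(angle) < flat_thresh_deg  # near-horizontal segment
        ground[r, flat] = True
        ground[r + 1, flat] = True
    return ground

# Example: a synthetic 16 x 1800 scan (VLP-16-like resolution).
scan = np.random.rand(16, 1800, 3)
mask = label_ground(scan)
print(mask.sum(), "points labeled as ground")
```

In LeGO-LOAM, points separated out this way serve as planar ground features during odometry, which is what makes its ego-estimation "ground-optimized."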
