Vision-Aided Planning for Robust Autonomous Navigation of Small-Scale Quadruped Robots

by Thomas Dudzik

S.B., Computer Science and Engineering, and Mathematics, Massachusetts Institute of Technology (2019)

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Engineering in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology

September 2020

© Massachusetts Institute of Technology 2020. All rights reserved.

Author: Department of Electrical Engineering and Computer Science, August 14, 2020
Certified by: Sangbae Kim, Professor, Thesis Supervisor
Accepted by: Katrina LaCurts, Chair, Master of Engineering Thesis Committee

Abstract

Submitted to the Department of Electrical Engineering and Computer Science on August 14, 2020, in partial fulfillment of the requirements for the degree of Master of Engineering in Electrical Engineering and Computer Science.

Robust path planning in non-flat and non-rigid terrain poses a significant challenge for small-scale legged robots. For a quadruped robot to operate autonomously and reliably in complex environments, it must be able to continuously determine sequences of feasible body positions that lead it toward a goal while maintaining balance and avoiding obstacles. Current solutions to the motion planning problem have several shortcomings, such as not exploiting the full flexibility of legged robots and not scaling well with environment size or complexity. In this thesis, we address the problem of quadruped robot navigation by proposing and implementing a vision-aided planning framework, built on top of existing motion controllers, that combines terrain awareness with graph-based search techniques. In particular, the proposed approach exploits the distinctive obstacle-negotiation capabilities of legged robots while keeping the computational complexity low enough to enable planning over considerable distances in real time. We showcase the effectiveness of our approach both in simulated environments and on actual hardware using the MIT Mini-Cheetah Vision robotic platform.

Thesis Supervisor: Sangbae Kim
Title: Professor

Acknowledgments

Many people helped make this work possible; I could not hope to acknowledge every individual who contributed to making this thesis a reality, in ways big and small, directly and indirectly. With that said, I hope to convey the immense appreciation I have for at least a few of these people.

First and foremost, I would like to thank my advisor, Sangbae Kim, for welcoming me into the Biomimetics Lab and giving me the opportunity to develop on the robotic cheetah platform. His guidance along the way proved invaluable, teaching me to balance the ups and downs of research life while introducing me to many fascinating aspects of robotics, optimization, planning, and machine learning.

Thank you to Donghyun Kim for mentoring me and spending endless hours brainstorming ideas and debugging code to help make the project a success. Without his contributions the robot codebase would still be in its infancy.

I would also like to thank the various other members of the Biomimetics Lab with whom I've had the pleasure of collaborating over the past year: Matt Chignoli, Gerardo Bledt, AJ Miller, Bryan Lim, and Albert Wang.
Through long discussions and endless nights in the lab, I came to understand robotics at a deeper level from both a software and a hardware perspective. It was a huge honor to be a part of this lab, and I appreciate each and every one of you for making it such an amazing place to learn and work.

Last but certainly not least, I'd like to thank my family for their endless love and support throughout my academic journey and for instilling in me the drive to tackle tough challenges, as well as my close friends, who've made the past five years at MIT full of great memories and ultimately all worth it.

Contents

1 Introduction
  1.1 Motivation
  1.2 Recent Advancements
    1.2.1 Quadruped Robotic Platforms
    1.2.2 Machine Learning
    1.2.3 Current Challenges
  1.3 Contributions
  1.4 Thesis Outline
2 System Overview
  2.1 MIT Mini-Cheetah
    2.1.1 Specifications
    2.1.2 Hardware
  2.2 MIT Mini-Cheetah Vision
    2.2.1 Specifications
    2.2.2 Hardware
  2.3 Robot System Model
  2.4 Software System Architecture
  2.5 Simulation Environment
3 Dynamic Terrain Mapping
  3.1 Terrain Modeling
  3.2 Elevation Map Construction
  3.3 Traversability Estimation
  3.4 Implementation Details
4 Path Planning
  4.1 Problem Formulation
  4.2 Algorithm Overview
    4.2.1 Implementation Details
  4.3 Path Tracking
  4.4 Continuous Replanning
  4.5 Limitations
5 Practical Tests and Results
  5.1 Evaluation in Simulation
    5.1.1 Software Demo 1
    5.1.2 Software Demo 2
    5.1.3 Software Demo 3
  5.2 Evaluation on Hardware
    5.2.1 Hardware Demo 1
    5.2.2 Hardware Demo 2
    5.2.3 Hardware Demo 3
    5.2.4 Hardware Demo 4
6 Conclusion
  6.1 Future Work
    6.1.1 Perception Improvements
    6.1.2 Motion Controller and Gait Selection Improvements
A Hardware Specifications
  A.1 UP board
  A.2 NVIDIA Jetson TX2
B Algorithm Pseudocode
  B.1 A*

List of Figures

1-1 Popular Quadruped Robotic Platforms. From left to right: Boston Dynamics' Spot Mini, ETH Zurich's ANYmal, and IIT's HyQ robot.
2-1 MIT Mini-Cheetah Robot. A small-scale, electrically-actuated, high-performance quadruped robot.
2-2 MIT Mini-Cheetah Vision Robot. A variant of the original Mini-Cheetah upgraded with additional compute power and three Intel RealSense perception sensors for exteroceptive sensing.
2-3 Intel RealSense D435 Depth Camera. A depth camera that provides the robot with vision data used for constructing a terrain map of the surrounding area.
2-4 Intel RealSense T265 Tracking Sensor. A low-power tracking camera used for global localization of the robot.
2-5 Mini-Cheetah Vision Coordinate Systems. A body-fixed coordinate frame, B, originates at the robot's CoM p. A second coordinate frame originates at the center of the depth camera mounted at the front, denoted as C. The robot itself moves around in the world inertial frame, I.
2-6 High-Level System Architecture. Block diagram visualizing the software architecture and the interaction between modules. Green represents the perception and planning module, blue represents the state estimation module, and red represents the locomotion control module. The motion library (not shown) is a separate component.
2-7 The MIT Biomimetic Robotics Lab's Open-Source Simulation Environment.
A custom, open-source simulation software environment was designed to allow for fast, risk-free experimentation with realistic dynamics. It includes a control panel GUI that allows an operator to change robot and simulation parameters on the fly.
3-1 Example Pointcloud and Heightmap. A visualization of the fine-resolution pointcloud output by the front-mounted RealSense D435 camera. The corresponding local heightmap is overlaid in simulation.
3-2 Overview of the Traversability Map Generation Process. An example of how a traversability map is derived from pointcloud data. The first figure contains a pointcloud representation of a stairset with a single noisy return. The sensor data is then binned into discrete cells to form a 2.5D heightmap. The heightmap is then filtered to deal with noise and sparsity. Finally, the gradients of the filtered heightmap are computed in order to segment the terrain based on traversability. In the right-most figure, blue represents traversable regions while yellow represents non-traversable regions. (A minimal code sketch of this pipeline follows the list of figures.)
4-1 Workspace. An illustration of an example workspace. The shaded regions represent obstacles in the environment.
4-2 Configuration Space. An illustration of the configuration space derived from the example workspace in Figure 4-1. The white space is C_free, the inner shaded regions make up C_obs, and the shaded region between the solid lines and the dotted lines makes up C_buf.
5-1 Software Demo 1: Hallway. (a) The robot autonomously navigates through a simulated hallway to the desired waypoint (red sphere). The planned path is visualized as a blue line. (b) The local traversability map is overlaid in simulation. Blue patches represent locations that are non-traversable by the CoM, while green patches are traversable.
5-2 Software Demo 2: Maze. The robot continuously replans for the optimal, shortest path as the user moves the waypoint (red sphere) to different locations.
5-3 Software Demo 3: Stairs. The robot successfully recognizes the stairs as a traversable obstacle and autonomously climbs to the top without directly stepping on any of the vertical sections. The overlaid local traversability map can be seen, with blue signifying invalid foothold locations.
5-4 Hardware Demo 1: Treadmill Platform. The robot successfully recognizes and avoids two different-sized obstacles in its path as it walks from one side of the platform to the other.
5-5 Hardware Demo 2: MIT Hallway. A timelapse of the Mini-Cheetah Vision autonomously navigating a cluttered hallway.
5-6 Hardware Demo 3: Outdoor Environments. The Mini-Cheetah Vision during a handful of experiments in various real-world outdoor environments.
5-7 Hardware Demo 4: Perturbation-Robustness Testing.
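
The traversability-map pipeline summarized in Figure 3-2 (bin pointcloud returns into a 2.5D heightmap, filter it, then threshold the height gradients) can be sketched in a few lines of code. The sketch below is a minimal illustration under stated assumptions, not the thesis implementation: the function name, the max-height binning rule, the global-median hole filling, and the cell_size and grad_limit values are all hypothetical choices made for brevity.

    import numpy as np

    def traversability_from_pointcloud(points, cell_size=0.02, grad_limit=0.5):
        """Label terrain cells as traversable from an (N, 3) pointcloud.

        Illustrative sketch only: parameter values and the filtering
        scheme are assumptions, not the thesis' actual configuration.
        """
        # 1. Bin (x, y) coordinates into discrete grid cells, keeping the
        #    maximum z per cell to form a 2.5D heightmap.
        ij = np.floor(points[:, :2] / cell_size).astype(int)
        ij -= ij.min(axis=0)                      # shift indices to start at 0
        heightmap = np.full(ij.max(axis=0) + 1, np.nan)
        for (i, j), z in zip(ij, points[:, 2]):
            if np.isnan(heightmap[i, j]) or z > heightmap[i, j]:
                heightmap[i, j] = z

        # 2. Fill empty cells to handle sparsity; a real implementation
        #    would use a local median filter to also suppress noisy returns.
        filled = np.where(np.isnan(heightmap), np.nanmedian(heightmap), heightmap)

        # 3. Threshold the magnitude of the height gradient: steep cells
        #    (e.g., the vertical faces of stairs) become non-traversable.
        gx, gy = np.gradient(filled, cell_size)
        slope = np.hypot(gx, gy)
        return slope < grad_limit                 # True marks traversable cells

In the full system, a boolean map of this kind is what the graph-based planner of Chapter 4 (with A* pseudocode in Appendix B) searches over to find feasible body paths.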