A ROS-Based Toy-Car Detect-and-Place Domestic Robot
A ROS-BASED TOY-CAR DETECT-AND-PLACE DOMESTIC ROBOT

A Thesis Presented to the Faculty of California State Polytechnic University, Pomona

In Partial Fulfillment of the Requirements for the Degree Master of Science in Mechanical Engineering

By Yifan Wang, 2021

SIGNATURE PAGE

THESIS: A ROS-BASED TOY-CAR DETECT-AND-PLACE DOMESTIC ROBOT
AUTHOR: Yifan Wang
DATE SUBMITTED: Spring 2021
Department of Mechanical Engineering

Dr. Yizhe Chang, Thesis Committee Chair, Mechanical Engineering
Dr. Campbell A. Dinsmore, Mechanical Engineering
Dr. Nolan E. Tsuchiya, Mechanical Engineering

ABSTRACT

Robot Operating System (ROS) is an open-source framework for robot software with a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms. The study in this thesis encompassed the integration of a LiDAR, cameras, an iRobot Create 2, and ROS to implement simultaneous localization and mapping (SLAM), autonomous navigation, and real-time object detection. A robot was developed to detect a specific type of object, toy cars, and place them at a designated goal. The odometry data from the mobile robot and the LiDAR input were used for SLAM and navigation, while the camera input was used for object detection. The performance of the system was evaluated by operation time, detection accuracy, and operation success rate. The results demonstrated the implementation of the intended robot system in ROS.

TABLE OF CONTENTS

SIGNATURE PAGE ........................................................ ii
ABSTRACT .............................................................. iii
LIST OF TABLES ........................................................ vii
LIST OF FIGURES ....................................................... viii
CHAPTER 1. INTRODUCTION ............................................... 1
  1.1 Background ...................................................... 1
  1.2 Objectives ...................................................... 2
  1.3 Thesis Organization ............................................. 2
CHAPTER 2. THEORY ..................................................... 4
  2.1 ROS ............................................................. 4
  2.2 Simultaneous Localization and Mapping (SLAM) .................... 6
    2.2.1 Gmapping .................................................... 8
  2.3 Autonomous Navigation ........................................... 9
    2.3.1 Move_base Module ............................................ 10
    2.3.2 Global Planner .............................................. 10
    2.3.3 Local Planner ............................................... 10
    2.3.4 Costmap ..................................................... 14
    2.3.5 Augmented Monte Carlo Localization (AMCL) ................... 15
  2.4 Object Detection ................................................ 18
    2.4.1 Deep Learning Using Convolutional Neural Networks (CNN) .... 18
    2.4.2 YOLO ........................................................ 22
CHAPTER 3. METHODOLOGY ................................................ 25
  3.1 Hardware Components ............................................. 25
    3.1.1 Controller .................................................. 25
    3.1.2 Robotic Moving Base ......................................... 25
    3.1.3 3D-Printed Dock ............................................. 26
    3.1.4 Sensors ..................................................... 26
    3.1.5 Assembly .................................................... 28
  3.2 Software Implementation ......................................... 28
    3.2.1 Preparation and Mapping ..................................... 28
    3.2.2 Training .................................................... 29
    3.2.3 Calibration for Camera and LiDAR ............................ 31
    3.2.4 Software Structure .......................................... 36
CHAPTER 4. EXPERIMENT ................................................. 39
  4.1 Setup ........................................................... 39
  4.2 Results ......................................................... 40
    4.2.1 Choice of Local Planners .................................... 40
    4.2.2 Performance at Different Maximum Velocities ................. 40
    4.2.3 General Performance ......................................... 41
    4.2.4 Performance under Different Light Conditions ................ 43
    4.2.5 Performance under Different Ground Conditions ............... 43
    4.2.6 Performance with Confusions ................................. 45
CHAPTER 5. DISCUSSION AND CONCLUSION .................................. 46
REFERENCES ............................................................ 48
APPENDIX A: INVOLVED LIBRARIES AND PACKAGES ........................... 52
APPENDIX B: PROCEDURES OF DRIVING THE ROBOT AND THE LIDAR ............. 53

LIST OF TABLES

Table 1: The Pseudocode for the Basic MCL Algorithm [9] ............... 16
Table 2: The Pseudocode for the AMCL Algorithm [9] .................... 17
Table 3: The Abbreviations in Figures 15, 16, and 17 .................. 24
Table 4: The Performance for Different Maximum Linear Velocities ...... 41
Table 5: The Performance for the Detection and Placing of Different Numbers of Toy Cars ... 42
Table 6: The Performance for Different Light Conditions ............... 43
Table 7: The Repeatability and the Performance for Different Ground Conditions ... 44
Table 8: The Result of Toy Car Detection with Confusions .............. 45

LIST OF FIGURES

Figure 1: The Components of a Typical ROS Package ..................... 4
Figure 2: The Communication Mechanism of ROS .......................... 5
Figure 3: The Computation Graph of Gmapping ........................... 8
Figure 4: The Architecture of the Navigation Stack [44] ............... 9
Figure 5: Collision Check for Each Trajectory ......................... 12
Figure 6: The Illustration of the Second Phase of the "Follow the Carrot" Algorithm ... 13
Figure 7: The Inflation Process of Propagating Cost Values out from Occupied Cells [21] ... 14
Figure 8: The Visualization of Navigation in RViz ..................... 15
Figure 9: The Illustration of How the MCL Algorithm Localizes ......... 16
Figure 10: The Frame Transformations for Fake_localization and AMCL ... 18
Figure 11: Venn Diagram for the Branches of Artificial Intelligence [22] ... 19
Figure 12: The Differences between the Branches of Artificial Intelligence [22] ... 20
Figure 13: The Architecture of a Typical CNN [48] ..................... 20
Figure 14: Ball chart reporting the Top-1 accuracy (using only the center crop) vs. computational complexity (floating-point operations required for a single forward pass) [The radius