IMAGE- AND POINT CLOUD-BASED DETECTION OF DAMAGE IN ROBOTIC AND VIRTUAL ENVIRONMENTS

By CALVIN DOBRIN

A thesis submitted to the School of Graduate Studies
Rutgers, The State University of New Jersey
In partial fulfillment of the requirements
For the degree of Master of Science
Graduate Program in Mechanical and Aerospace Engineering
Written under the direction of Aaron Mazzeo
And approved by

_________________________________
_________________________________
_________________________________
_________________________________

New Brunswick, New Jersey
May, 2021

ABSTRACT OF THE THESIS

IMAGE- AND POINT CLOUD-BASED DETECTION OF DAMAGE IN ROBOTIC AND VIRTUAL ENVIRONMENTS

By CALVIN DOBRIN

Thesis Director: Aaron D. Mazzeo

Repair procedures are vital to maintaining the integrity of long-term structures such as bridges, roads, and space habitats. To reduce the burden of manually inspecting and repairing such structures, the proposed solution is an autonomous repair system that detects and repairs damage with very little human intervention. The primary purpose of this thesis is to lay the groundwork for the introductory steps of this system: detecting damage and creating a virtual map for navigation. It covers the initial detection of damage on a structure, confirmation of damage with detailed red-green-blue-depth (RGB-D) scanning, and development of a virtual map of the structure for navigation and localization of important features. We begin by reviewing numerous damage detection methods and establishing a case for optical 2D stereo imaging and 3D scanning. We then apply image-processing and point-cloud-processing methods to isolate damage in image and point cloud data. The potential for automating operation and data processing without human intervention is also discussed. To lay the groundwork for an autonomous system, a robot was set up to navigate a virtual ROS environment and relay sensory information about its surroundings. This process establishes a framework for navigation and damage detection in such a system.

ACKNOWLEDGEMENTS

I would like to express my gratitude to my advisor, Professor Mazzeo, for his constant feedback, patience, and encouragement over the course of this project. His feedback during our meetings gave me new angles from which to approach the project that I never would have considered.

I would like to thank Dr. Patrick Hull for overseeing my progress and guiding the goals of the project during my internship at NASA’s Marshall Space Flight Center. Whenever I hit a roadblock, he would actively help me brainstorm the best way to overcome the problem and put me in touch with colleagues who could help.

I would also like to acknowledge Noah Harmatz and Declan O’Brien from the Mazzeo Research group for their support on this project. Noah actively worked with me to integrate my research on damage detection with his work on patching and path planning, and Declan graciously shared some of his datasets from working with his test articles. It was a pleasure brainstorming with them both on the larger scope of our project.

Finally, I would like to thank my grandparents for their emotional support and encouragement. There were many occasions when I needed to step away and clear my head, and they were always there for me.

TABLE OF CONTENTS

ABSTRACT OF THE THESIS
ACKNOWLEDGEMENTS
LIST OF FIGURES
Chapter 1. Introduction
1.1 Methods for Imaging, Mapping, and Detecting Damage
1.2 In-Space Manufacturing and Repair
1.3 Potential Obstacles to Characterizing Structural Integrity Autonomously
1.4 Overview of Envisioned System
1.5 Overview of Work Described in Thesis
Chapter 2. Camera- and Laser-Based Acquisition of Real-World Environments
2.1 Cameras with Depth Information Acquisition
2.1.1 Depth Information with ZED
2.1.2 Depth Information with Azure Kinect DK
2.1.3 Depth Information with Intel RealSense L515
2.2 Structured Light Scanning with Einscan Pro 2X Plus
2.3 Localization with ZED Position Tracking
2.4 SLAM-based Mapping: RTAB-Map
2.4.1 Mapping with ZED
2.4.2 Mapping with Azure Kinect
Chapter 3. Characterization of Features for Repairing Damage in Civil Infrastructure and Space-Relevant Habitats
3.1 Image Feature Extraction with RGB Images
3.1.1 Surface Features
3.1.2 Edge Detection
3.2 Point Cloud Analysis
3.2.1 K-Nearest Neighbor Approach
3.2.2 Fitting to 3D Surfaces
Chapter 4. Simulation/Virtual Environment
4.1 Mapping
4.2 Post-Processing of Image and Depth Images
4.3 Localization in Map
Chapter 5. Conclusions
Bibliography

LIST OF FIGURES

Figure 1: The SRMS and OBSS system. Image from [13].
Figure 2: System model of a concept for an autonomous repair system. The system is divided into a scan phase and a patching phase. The IMU, laser/LiDAR, and depth camera move over the surface to be scanned. The scan phase obtains image and point cloud data using an RGB-D camera for damage detection in MATLAB. The mapping software uses the joint IMU data, laser scans, and image data from the IMU, laser/LiDAR, and camera, respectively. If damage is detected, the robot attempts to re-localize itself to where it found the damage and proceeds to the patching stage.
Figure 3: A ZED stereo camera. Image from [19].
Figure 4: A foam board test article used for damage identification. Damage on the surface consists of dents from dropped items and punctured holes. The top right corner of the foam board shows five markers, used for feature detection, around a single crater.
Figure 5: A confidence map of the upright foam board taken with ZED Depth Viewer. The shaded areas in the confidence map correspond to uncertainty in the depth map. The confidence threshold was set to approximately 85%.
Figure 6: A colored point cloud of the foam board surface displayed in MATLAB. The background points were filtered out.
Figure 7: A point cloud of the foam board surface taken with the ZED Depth Viewer. The color coding is based on values along the z-axis, where blue points are the lowest and yellow points are the highest.
Figure 8: An Azure Kinect DK camera. Image from [20].