
Dissertations and Theses

7-2020

An Open Source, Autonomous, Vision-Based Algorithm for Hazard Detection and Avoidance for Celestial Body Landing

Daniel Posada


AN OPEN SOURCE, AUTONOMOUS, VISION-BASED ALGORITHM FOR HAZARD DETECTION AND AVOIDANCE FOR CELESTIAL BODY LANDING

By

Daniel Posada

A Thesis Submitted to the Faculty of Embry-Riddle Aeronautical University In Partial Fulfillment of the Requirements for the Degree of Master of Science in Aerospace Engineering

July 2020
Embry-Riddle Aeronautical University
Daytona Beach, Florida

AN OPEN SOURCE, AUTONOMOUS, VISION-BASED ALGORITHM FOR HAZARD DETECTION AND AVOIDANCE FOR CELESTIAL BODY LANDING

By

Daniel Posada

This Thesis was prepared under the direction of the candidate's Thesis Committee Chair, Dr. Troy Henderson, Department of Aerospace Engineering, and has been approved by the members of the Thesis Committee. It was submitted to the Office of the Senior Vice President for Academic Affairs and Provost, and was accepted in partial fulfillment of the requirements for the Degree of Master of Science in Aerospace Engineering.

THESIS COMMITTEE


Chairman, Dr. Troy Henderson


Member, Dr. Richard Prazenica
Member, Dr. Claudia Moreno


Graduate Program Coordinator, Date Dr. Magdy Attia

Dean of the College of Engineering, Date Dr. Maj Mirmirani

Associate Provost of Academic Support, Date Dr. Christopher Grant

ACKNOWLEDGEMENTS

First of all, I want to thank God for all the given opportunities and all the great people I have come to meet along the way. I want to thank Dr. Claudia Moreno for giving me the chance to be here, and Dr. Troy Henderson for providing me the opportunity to work on a project that, although intimidating, has been a journey of learning and encountering new passions. Dr. Troy Henderson, Dr. Richard Prazenica, and Dr. Claudia Moreno, thank you for all the time you have spent on my development not only as an engineer but as a colleague and friend, and for guiding me through this beautiful world of Dynamics and Control. Thank you, Jacob Korczyk, Dalton Korczyk, Dr. Francisco Franquiz, Chris Hays, my faraway friends, and everyone at the Space Technologies Laboratory for your unconditional support throughout the development of this project; in the most adverse times you were all there to support me. Dr. Nicanor Quijano, I remember our first course in control; thank you for helping me pursue my passion and be here today. Furthermore, I want to dedicate this work to my family, especially to my Abuelita, as her dream is to go to the Moon; now, I am getting her a little bit closer. To my Mom and Dad, I cannot thank you enough for all the sacrifices and support that you have given me during my life, and now more than ever during these uncertain and pandemic times. To my brother, thank you for the example you have set for me; the lessons I have learned from you have been invaluable. Last but not least, I am grateful for the opportunity to work around such a marvelous team, and for the possibility of engineering such a hazardous and mesmerizing task.

ABSTRACT

Planetary exploration is one of the main goals that humankind has established in order to be prepared to colonize new places and to provide scientific data for a better understanding of the formation of our solar system. To provide a safe approach, several safety measures must be undertaken to guarantee not only the success of the mission but also the safety of the crew. One of these safety measures is the Autonomous Hazard Detection and Avoidance (HDA) sub-system for celestial body landers that will enable different spacecraft to complete solar system exploration. The main objective of the HDA sub-system is to assemble a map of the local terrain during the descent of the spacecraft so that a safe landing site can be selected. This thesis focuses on a passive method using a monocular camera as the primary detection sensor due to its form factor and weight, which enables its implementation alongside the proposed HDA algorithm in the Intuitive Machines NOVA-C lander as part of the Commercial Lunar Payload Services technological demonstration in 2021, supporting NASA's effort to take humans back to the Moon. The algorithm makes decisions using two different sources: a two-dimensional (2D) vision-based HDA map and a three-dimensional (3D) HDA map obtained through a Structure from Motion process in combination with a plane-fitting sequence. These two maps provide different metrics in order to give the lander a better probability of performing a safe touchdown. These metrics are processed to optimize a cost function.

TABLE OF CONTENTS

ACKNOWLEDGEMENTS ...... iii

ABSTRACT ...... iv

LIST OF FIGURES ...... vii

LIST OF TABLES ...... xi

ABBREVIATIONS ...... xii

NOMENCLATURE ...... xiv

1. Introduction ...... 1
1.1. HDA Development Objectives ...... 4
1.2. Complexities of Vision-Based HDA ...... 5
1.3. Proposed Solution ...... 6
1.4. Test Cases ...... 7
1.5. Contributions ...... 7

2. Literature Review ...... 9
2.1. Hazard Detection and Avoidance - State of the Art ...... 9
2.2. Computer Vision and Image Processing Review ...... 13
2.2.1. Definitions ...... 13
2.2.1.1. 2D Points - Pixels ...... 13
2.2.1.2. 3D Points - Point Cloud ...... 13
2.2.1.3. Camera Intrinsics ...... 14
2.2.2. Contrast Enhancement ...... 14
2.2.3. Superpixel Segmentation ...... 16
2.2.4. Binary Segmentation and Conditionals ...... 18
2.2.5. Oriented FAST and Rotated BRIEF ...... 18
2.2.5.1. Features from Accelerated and Segments Test (FAST) ...... 19
2.2.5.2. Binary Robust Independent Elementary Feature (BRIEF) ...... 19
2.2.6. Occupancy Grid Mapping ...... 20
2.2.7. Structure from Motion ...... 21
2.2.8. Surface Fitting ...... 24
2.3. Test Cases ...... 25

3. Methodology ...... 29
3.1. Hazard Detection and Avoidance Algorithm Flow Chart ...... 29
3.2. Hazard Detection and Avoidance ...... 30
3.2.1. Image Enhancement ...... 30
3.2.2. Image Processing ...... 31
3.2.3. Landing Site Scoring ...... 34
3.2.4. Crater Detection ...... 35

3.3. Datasets ...... 37
3.3.1. Mars Reconnaissance Orbiter Imagery ...... 38
3.3.2. Lunar Reconnaissance Orbiter Imagery ...... 39
3.3.3. Intuitive Machines and RPI Imagery ...... 40
3.3.3.1. Hardware Implementation ...... 41

4. Results ...... 43
4.1. LROC Imagery Results ...... 43
4.1.1. Image Enhancement ...... 43
4.1.2. Image Processing and Computer Vision ...... 43
4.2. MRO HiRISE Imagery Results ...... 46
4.2.1. Image Enhancement ...... 47
4.2.2. Image Processing and Computer Vision ...... 48
4.3. RPI Set 1 Imagery Results ...... 50
4.3.1. Image Enhancement ...... 51
4.3.2. Image Processing and Computer Vision ...... 52
4.4. RPI Set 2 Imagery Results ...... 55
4.4.1. Image Enhancement ...... 56
4.4.2. Image Processing and Computer Vision ...... 56

5. Conclusions and Future Work ...... 62
5.1. Hazard Detection and Avoidance ...... 62
5.2. Landing Site Scoring ...... 63
5.3. Open-Source Repository ...... 63
5.4. Future Work ...... 64

REFERENCES ...... 65

LIST OF FIGURES

Figure Page
1.1 Vikram lunar lander artist's concept (ISRO, 2017) ...... 1
1.2 Beresheet lunar lander artist's concept (Grush, 2019a) ...... 1
1.3 HLS artist's concept (Dynetics, 2020) ...... 2
1.4 Starship moon adaptation artist's concept (Grush, 2019b) ...... 2
1.5 NOVA-C lunar lander artist's concept (SpaceNews, 2020) ...... 2
1.6 XL-1 lunar lander artist's concept (TechCrunch, 2020) ...... 2
1.7 OSIRIS-REx artist's concept (Garner, 2015) ...... 3
1.8 Hayabusa 2 artist's concept (News, 2018) ...... 3
1.9 Possible hazards when landing on other celestial bodies ...... 4
2.1 CLPS 2019 selected contractors props (Daines, 2019) ...... 10
2.2 ALHAT HDA system tested in a helicopter (Epp et al., 2013) ...... 11
2.3 ALHAT HDA system attached to Morpheus lander (Carson et al., 2014) ...... 11
2.4 Example of stereo-vision hazard detection (Matthies et al., 2008) ...... 12
2.5 Camera Pinhole model ...... 14
2.6 Example of histogram before equalization (Bradski, 2000) ...... 15
2.7 Example of histogram after equalization (Bradski, 2000) ...... 15
2.8 Comparison of image after equalization (Bradski, 2000) ...... 16
2.9 Comparison of SLIC implementations including SLICO (Bradski, 2000) ...... 17
2.10 ORB key points marked (Bradski, 2000) ...... 19
2.11 Example of quadtree distribution ...... 21
2.12 Synthetic image for testing quadtree. Different intensities to test decomposition ...... 21
2.13 Quadtree output detecting the changes in the intensity ...... 21
2.14 Example of general SfM pipeline ...... 22
2.15 Example of SfM input (Bradski, 2000) ...... 23
2.16 SfM point cloud output (Bradski, 2000) ...... 23

2.17 Plane fitting in a surface made from a point cloud ...... 24

2.18 Point cloud with plane and vectors ...... 25
2.19 Hazard field at NASA's KSC. Inset is a digital elevation map of the design JPL provided KSC to build the field (Rutishauser et al., 2012) ...... 26
2.20 LROC M1195740691R (NASA/GSFC/Arizona State University, 2009) ...... 27
2.21 MRO image of the Opportunity rover ILS (NASA/GSFC/Arizona State University, 2005) ...... 27
2.22 LRO LROC close-up on Lunar hazards (NASA/GSFC/Arizona State University, 2009) ...... 28
2.23 Synthetic Lunar surface (Hong & Christian, 2020) ...... 28
3.1 Flow Chart for passive HDA ...... 29
3.2 Original histogram of image from RPI ...... 30
3.3 CLAHE histogram equalization to redistribute pixel intensity ...... 30
3.4 Lunar surface before histogram equalization ...... 31
3.5 Lunar surface after histogram equalization ...... 31
3.6 Lunar surface divided in superpixels ...... 32
3.7 Lunar surface with pixels labeled ...... 32
3.8 Labels after binary conditionals applied to filter shadows ...... 33
3.9 Binary occupancy grid used for region of interest analysis ...... 33
3.10 Lunar surface occupancy grid made with quadtree ...... 33
3.11 Point cloud reconstruction using 50 images of the approach at 10 Hz. Camera positions in red ...... 34
3.12 Crater cut out from Figure 3.16 ...... 36
3.13 Crater region of interest (ROI) ...... 36
3.14 Common segment detection (Bright and shadow) ...... 36
3.15 Detection of bright and shadow zone of the crater, and ellipse fitting over the shadow zone ...... 36
3.16 LROC Dante Crater M121044107R (NASA/GSFC/Arizona State University, 2009) ...... 37

3.17 Detection of craters and rocks over Occupancy Grid ...... 37

3.18 MRO Close up of ESP0540351915 RED (NASA/GSFC/Arizona State University, 2005) ...... 38
3.19 LROC imagery (NASA/GSFC/Arizona State University, 2009) ...... 39
3.20 LROC Interface to search the Planetary Data System files ...... 40
3.21 Lunar surface generated in Unreal Engine (Hong & Christian, 2020) ...... 41
4.1 JMARS LROC Plato (NASA/GSFC/Arizona State University, 2009) ...... 44
4.2 Superpixels segmentation process ...... 44
4.3 LROC Labels of possible landing areas ...... 45
4.4 LROC Quadtree decomposition ...... 46
4.5 Occupancy grid mapping and feature detection ...... 46
4.6 JMARS MRO ESP 054035 1915 (NASA/GSFC/Arizona State University, 2005) ...... 47
4.7 Superpixels segmentation process ...... 48
4.8 MRO Labels of possible landing areas ...... 48
4.9 MRO Quadtree decomposition ...... 49
4.10 Occupancy grid mapping and feature detection ...... 50
4.11 Synthetic Image (Hong & Christian, 2020) ...... 51
4.12 RPI Squared Image ...... 52
4.13 Superpixels segmentation process ...... 52
4.14 Labels of possible landing areas ...... 53
4.15 Quadtree decomposition ...... 54
4.16 Occupancy grid mapping and feature detection ...... 54
4.17 HDA Lunar approach trajectory (Hong & Christian, 2020) ...... 55
4.18 Synthetic Image (Hong & Christian, 2020) ...... 56
4.19 RPI Set 2 Squared Image ...... 57
4.20 Superpixels segmentation process ...... 57

4.21 RPI Set 2 Labels of possible landing areas ...... 58

4.22 Occupancy grid mapping and feature detection ...... 59
4.23 Areas that fit the VFDE ...... 59
4.24 Red boxes represent the camera pose and the number corresponds to the image. Note: Axes are a reference, not to scale ...... 60
4.25 Red boxes represent the camera pose and the number corresponds to the image. Note: Axes are a reference, not to scale ...... 60

LIST OF TABLES

Table Page
3.1 5 MP Camera Parameters. Camera Specifications for Synthetic Imagery ...... 41
4.1 Example of Landing Site Scoring ...... 61

ABBREVIATIONS

HDA Hazard Detection and Avoidance

EDL Entry, Descent, and Landing

NASA National Aeronautics and Space Administration

ESA European Space Agency

ISRO Indian Space Research Organisation

IAI Israel Aerospace Industries

JAXA Japan Aerospace Exploration Agency

LPI Lunar Planetary Institute

SLS Space Launch System

CLPS Commercial Lunar Payload Services

TAG Touch-and-Go

JPL Jet Propulsion Laboratory

LiDAR Light Detection and Ranging

PANGU Planet and Asteroid Natural Scene Generation Utility

TRL Technology Readiness Level

VFDE Vehicle Footprint Dispersion Ellipse

PDS Planetary Data System

LRO Lunar Reconnaissance Orbiter

LROC Lunar Reconnaissance Orbiter Camera

LOLA Lunar Orbiter Laser Altimeter

MRO Mars Reconnaissance Orbiter

HiRISE High-Resolution Imaging Science Experiment

JMARS Java Mission-Planning and Analysis for Remote Sensing

PL&HA Planetary Landing and Hazard Avoidance

DEM Digital Elevation Map

JSC Johnson Space Center

LaRC Langley Research Center

CSDL Charles Stark Draper Labs

APL Applied Physics Laboratory

CLAHE Contrast Limited Adaptive Histogram Equalization

ARPOD Automated Rendezvous, Proximity Operations, and Docking

SLIC Simple Linear Iterative Clustering

SIFT Scale-Invariant Feature Transform

SURF Speeded Up Robust Feature

ORB Oriented FAST and rotated BRIEF

FAST Features from Accelerated and Segments Test

BRIEF Binary Robust Independent Elementary Feature

SfM Structure from Motion

RANSAC Random Sample Consensus

MLESAC Maximum Likelihood Estimation SAmple Consensus

MAPSAC Maximum A Posteriori Sample Consensus

KALMANSAC Kalman Estimated Sample Consensus

ANN Artificial Neural Network

CNN Convolutional Neural Network

RPI Rensselaer Polytechnic Institute

NOMENCLATURE

α Camera field of view

β Least-Squares fitted plane angle w.r.t local gravity vector

µx Camera horizontal resolution

µy Camera vertical resolution

a Fitted plane x-axis coefficient

b Fitted plane y-axis coefficient

c Fitted plane z-axis coefficient

Cx Image Principal Point in the horizontal direction

Cy Image Principal Point in the vertical direction

de Least-Squares error - Roughness

Fx Focal length in the horizontal direction

Fy Focal length in the vertical direction

J Site scoring cost function

K Camera Intrinsic matrix

p Fitted plane normal vector

S Shear or axis-skew distortion coefficient

1. Introduction

The terminal phase of any space exploration mission is one of the most critical moments, especially when the final safety checks are performed to proceed to land. Hence, some people in the space industry have nicknamed the terminal phase "the last minutes of terror". For this reason, the Entry, Descent, and Landing (EDL) maneuvers are often acknowledged as a mission bottleneck; any failure encountered during this phase could lead to a high probability of the complete loss of the lander (Braun & Manning, 2006). Despite multiple landing successes, the task remains difficult and complex. To illustrate the hardship, some recent attempts and failures are the Indian Space Research Organisation (ISRO) Vikram lander in Figure (1.1) (Gurgaon, 2019), the Israel Aerospace Industries (IAI) Beresheet lunar lander in Figure (1.2) (Gibney, 2019; Shyldkrot et al., 2019), and the European Space Agency (ESA) Martian Schiaparelli EDL Demo Lander (Ferri et al., 2017).

Figure 1.1 Vikram lunar lander artist's concept (ISRO, 2017).
Figure 1.2 Beresheet lunar lander artist's concept (Grush, 2019a).

The next generation of space exploration is being developed right now as a collaboration led by the National Aeronautics and Space Administration (NASA) in partnership with ESA, the Japan Aerospace Exploration Agency (JAXA), and other agencies around the world to bring humans back to the Moon in 2024. This will be followed by Mars under the Artemis program (NASA, 2019). This program has sprouted different possible solutions for these challenges, including the SLS launch vehicle and the Orion module as a government-supported approach. Unlike the Apollo

program, they have encouraged development by private contractors such as Dynetics in Figure (1.3), OrbitBeyond, SpaceX in Figure (1.4), Intuitive Machines (IM) in Figure (1.5), Masten Space Systems in Figure (1.6), and more (Daines, 2019) to provide their solutions and perform technological demonstrations that involve landing maneuvers.

Figure 1.3 HLS artist's concept (Dynetics, 2020).
Figure 1.4 Starship moon adaptation artist's concept (Grush, 2019b).

Figure 1.5 NOVA-C lunar lander artist's concept (SpaceNews, 2020).
Figure 1.6 XL-1 lunar lander artist's concept (TechCrunch, 2020).

Furthermore, other missions also require safe landing maneuvers, such as the new Perseverance rover that is scheduled to be launched in late July of 2020 (NASA Astrobiology Blog, 2020), or the IM NOVA-C lander in Figure (1.5) scheduled for late October 2021 as part of the Commercial Lunar Payload Services (CLPS) within the Artemis program. In addition to the missions toward planets and their moons, there has been a vibrant interest in visiting small bodies in the solar system, such as asteroids and comets, as they can provide valuable scientific data. A few missions of

this autonomous-type scenario have performed maneuvers such as Touch-and-Go (TAG) to collect, analyze, and return samples to Earth.

Figure 1.7 OSIRIS-REx artist's concept (Garner, 2015).
Figure 1.8 Hayabusa 2 artist's concept (News, 2018).

Just to mention a few: the OSIRIS-REx spacecraft in Figure (1.7), launched by NASA in 2016, is currently orbiting asteroid Bennu and recently performed the TAG rehearsal to verify that all systems are working (Berry et al., 2013). Hayabusa 2 in Figure (1.8), by JAXA, is currently returning to Earth after collecting samples from asteroid Ryugu (Kawaguchi et al., 2008). The Rosetta orbiter by ESA deployed a lander on comet 67P/Churyumov-Gerasimenko in 2014 and then performed a hard landing on the comet in 2016 (Bibring et al., 2007). Moreover, upcoming missions such as Psyche, developed by NASA's Jet Propulsion Laboratory (JPL), are scheduled for launch in July 2022 (Hart et al., 2018). It can be concluded that almost all the examples mentioned above share the common challenge of designing a strategy for landing on a celestial body. Nevertheless, none of these missions is completely autonomous; they send data back to Earth to be analyzed to avoid rocks, craters, or slopes that could harm the mission. Therefore, safety is the main driver in the mission analysis and design process. This is the reason why providing a safe and precise autonomous landing capability is a key element in the next space exploration systems. This renders the opportunity to adjust

Figure 1.9 Possible hazards when landing on other celestial bodies.

the trajectory during the descent while classifying hazardous from safe landing areas relying on the HDA suite.

1.1. HDA Development Objectives

The main purpose of this research is to present a passive autonomous Hazard Detection and Avoidance algorithm to aid in terrain relative navigation (TRN). This will improve the probability of mission success of an autonomous landing spacecraft by providing hazard relative navigation (HRN) to the guidance, navigation, and control (GNC) system once the intended landing site is in scope. As a remark, it is important to clarify that, currently, no complete HDA suite has ever flown on a space mission. Therefore, in order to develop this algorithm, the following milestones are to be accomplished:

1. Perform a literature review on the state of the art in order to have a baseline comparison of different approaches to Hazard Detection and Avoidance.

2. Define metrics to evaluate the performance and the output of the algorithm.

3. Develop the Hazard Detection and Avoidance algorithm.

4. Code the algorithm in MATLAB®.

5. Provide an analysis of possible features that could be included in the future development of this algorithm.

This thesis also has some secondary objectives:

• Establish a repository to provide the results from the algorithms in order for others to test and provide implementation feedback.

• Explore an approach using Open Source software such as OpenCV (Bradski, 2000) and Scikit-Image (van der Walt et al., 2014) using Python.

• Use part of these algorithms in other applications such as: Automated Rendezvous, Proximity Operations, and Docking (ARPOD), Orbital Navigation and Orbit Determination (OD), and Surface Mobility.

1.2. Complexities of Vision-Based HDA

Performing an autonomous landing with a vision-based method using a passive sensor such as a camera poses multiple challenges and complexities. Initially, the first task is to acquire the required imagery. Once the images are obtained, the next challenge is to process them in order to extract the most information out of each frame. The subsequent task is the toughest one, as it requires using the data from the images to define safe landing spots. Finally, all of this must happen in a short amount of time to guarantee that the lander's fuel will suffice to reach the designated zone.

Quality images are required to guarantee a highly detailed hazard map, and this can be affected by different camera parameters such as the field of view, the shutter speed, the streaming rate, the resolution, and the location on the spacecraft. All these camera constraints can be simplified into a geometry problem. Therefore, they can be taken into account in the trajectory of the lander to provide the best possible scenario to generate the best hazard map; for example, adjusting the angles of maneuvers such as the pitch-over to give the full system the best possible view of the landing site. Additionally, the landing environment poses other parameters that will affect the mapping depending on the type of camera, such as the amount of light being reflected back from the surface of the celestial body (sunlight incidence angle). For this mission, the Sun incidence angle over the lunar surface will be between 10° and 20°.

Finally, the most common limiting factor for a complete HDA suite is the computational capacity dedicated to this task. Radiation-hardened hardware complying with NASA's Technology Readiness Level (TRL) (Hirshorn & Jefferies, 2016) is usually not very powerful and tends to be a couple of generations behind current processor technology (Keys et al., 2007). During the terminal phase, the software must be robust enough to give the lander the opportunity to control its trajectory and perform any last-second attitude correction; this implies the processor will be busy with this task and will not have enough computational power to spare for HDA, which presents a challenge for the development of these algorithms.

1.3. Proposed Solution

The proposed HDA algorithm makes decisions using two main sources: the first is a two-dimensional (2D) vision-based HDA grid map, and the second is a three-dimensional (3D) depth HDA map obtained through a Structure from Motion process in combination with an adaptive occupancy grid. These two maps supply different metrics in order to give the lander a better probability of performing a safe touchdown. The merged hazard map produced using this method demonstrates a good approach for real hardware applications, not only for spacecraft but extended even to UAVs or autonomous vehicles such as rovers or cars. The following requirements are established for the algorithm based on the requirements from Intuitive Machines:

Terrain Shadows: Dark areas that cannot be processed by the sensor system should be classified a priori as unsafe; they could represent a deep crater, or the shadow generated by a boulder hiding more hazards.

Terrain Roughness: The physical and geometrical characteristics of the lander's touchdown system determine the maximum hazard size the lander can tolerate without damaging any part of the system (e.g., the NOVA-C lander can withstand rocks up to 30 cm).

Terrain Slopes: As mentioned above, the lander's physical characteristics limit the slopes where it can land to avoid keeling over (e.g., the NOVA-C lander can handle slopes up to 10°).

Size of Safe Area: Once everything mentioned above is assessed, the area of the safe zone is verified to guarantee that the Vehicle Footprint Dispersion Ellipse (VFDE) can be fitted inside.

Landing Site Scoring: The top 5 candidates for landing within the intended landing site will be provided to the GNC system alongside their metrics. This ranking is based on the minimization of a cost function with metrics where 10 is the best and 1 the worst for each of the conditions mentioned above. In the event that only one candidate is provided, it will still be considered a success, as the number of candidates varies with the cases.

1.4. Test Cases

To evaluate the algorithm, different sources of imagery will be used. Synthetic images will be recreated in order to provide a ground-truth estimate. Real pictures acquired by different planetary missions will also be used to test the avoidance of real hazards encountered on different celestial bodies.

1.5. Contributions

This work provides a different approach using multiple computer vision techniques (contrast enhancement, binary segmentation, and superpixel segmentation) in

combination with some state-of-the-art methods (occupancy grid, ORB feature tracking, and structure from motion) to create a computationally efficient algorithm. This code will be implemented as the Hazard Detection and Avoidance method for the Intuitive Machines NOVA-C mission, proving its efficiency when deployed on actual space-qualified hardware. This document is organized as follows: Chapter 1 gives an insight into the problem statement, the objectives, the challenges, the proposed solution, and the test cases. Chapter 2 provides a literature review and a summary of the state of the art. Chapter 3 provides an overview of the algorithm architecture, the hardware used, and some examples of the datasets used. Chapter 4 presents the results for the different steps of the algorithm, compared against the defined performance metrics. Finally, Chapter 5 provides conclusions, recommendations, and future work.

2. Literature Review

This chapter presents an overview of work related to the task areas of Planetary Landing and Hazard Avoidance (PL&HA). The objective of this review is to identify methods (Adams et al., 2008; Brady et al., 2009; Cheng et al., 2001; Crane, 2014; Huertas et al., 2006; Lunghi, 2017; Matthies et al., 2008; Sinclair & Fitz-Coy, 2003) that can be extended and adapted to the constraints specified in the field of HDA for a celestial body landing mission.

2.1. Hazard Detection and Avoidance - State of the Art

As stated in Chapter 1, no complete HDA system has ever flown in a NASA space mission. Some techniques and methods have been proposed (Crane, 2014; Lunghi, 2017) and developed, but most of these systems are still at the lower end of the TRL scale developed by NASA (Villalpando et al., 2010). As these technologies cannot yet be trusted entirely under the TRL standards for spaceflight, NASA will not use any complete HDA system yet (A. E. Johnson et al., 2010). Thus, this testing is one of the challenges of the CLPS contract awarded by NASA in 2019 (Daines, 2019); the contractors in Figure (2.1), such as Intuitive Machines, will test a complete online HDA system to prove that these systems can comply with all the requirements to be cataloged as safe for exploitation in the coming years. The testing will add flight heritage to these technologies and will increase their TRL so they are suited for future space exploration. In the last decade, diverse research initiatives such as NASA's program focused on Autonomous Landing Hazard Avoidance Technology (ALHAT) have encountered the same difficulties mentioned above (Epp & Smith, 2007). This program was led by NASA's Johnson Space Center (JSC) with support from JPL, Langley Research Center (LaRC), Charles Stark Draper Labs (CSDL) (Brady et al., 2009), and the Johns Hopkins Applied Physics Laboratory (JHAPL) (Adams et al., 2008). ALHAT was based mainly on a combination of sensors: a laser altimeter, a flash LiDAR, and a Doppler LiDAR (Brady & Schwartz, 2007; A. Johnson & Ivanov, 2011). These sensors

Figure 2.1 CLPS 2019 selected contractors props (Daines, 2019).

performed multiple tasks such as terrain relative navigation and hazard relative navigation. The system was tested on different test-beds, such as a helicopter flight as in Figure (2.2) (Epp et al., 2013) with the whole GNC system, and on the Morpheus lander as showcased in Figure (2.3) (Carson et al., 2014). The Morpheus lander is a vertical takeoff and vertical landing test vehicle, designed to demonstrate a new nontoxic spacecraft propellant system based on methane and oxygen, and an autonomous landing and hazard detection technology. Attached on top of Morpheus is a complete navigation system with all the essential electronics and the arrangement of sensors. The gimbaled flash-LiDAR sensor provides a precise point cloud, a direct 3D measurement of the topography. The challenge of this approach is the TRL, size, and weight: this technology needs more experimentation in space, and there are physical limitations on sensor fabrication.

Figure 2.2 ALHAT HDA system tested in a helicopter (Epp et al., 2013).

Figure 2.3 ALHAT HDA system attached to Morpheus lander (Carson et al., 2014).

By way of contrast, the image processing approach requires that these algorithms be tested continuously to tune and improve the quality when measuring the hazard field (Cheng et al., 2001). It is imperative to guarantee that these estimations are not driven off-nominal by visual effects. On the plus side, these sensors have been tested

continuously in space and comply with TRL, size, and weight requirements (Neveu et al., 2015). Due to all the benefits of the camera approach, more work has been focused on monocular vision, as it is more suitable for small spacecraft (Brady et al., 2009). Following the objectives defined in Chapter 1, multiple vision-based hazard detection methods have been explored to employ monocular hazard data collection (Huertas et al., 2006). Most of the work in this area has focused on extracting hazard data from the imagery collected during the terminal phase (Simões et al., 2009; Sinclair & Fitz-Coy, 2003). An example of some of the most advanced research is JPL's shadow-based vision techniques (Cheng et al., 2001).

Figure 2.4 Example of stereo-vision hazard detection (Matthies et al., 2008).

This work has been developed using multiple sources of imagery such as simulated, terrestrial, and from the Mars Orbiter (A. E. Johnson et al., 2002). JPL has also tested these algorithms to measure the performance and effectiveness of an HDA implementation on an actual space-qualified embedded processor (A. E. Johnson et al., 2008). Some stereo vision techniques have also been proposed for landing hazard detection, as in Figure (2.4).

Compared to monocular vision, stereo provides the benefit of better determining and reconstructing the rock hazard size using well-established processing techniques (Matthies et al., 2008). The main disadvantage of a stereo system is that it depends on the size and geometry of the lander. In other words, the range discrimination capability is restricted by the camera baseline available on the proposed lander.

2.2. Computer Vision and Image Processing Review

This section provides the reader with some of the many computer vision techniques that could be used and that will be used in this work.

2.2.1. Definitions

The following subsections briefly explain some basic mathematical definitions needed to build the different concepts.

2.2.1.1. 2D Points - Pixels

2D points are geometric primitives that help to define a basis to describe an image (Szeliski, 2010). Every image is a matrix container where the intensity values correspond to a specific coordinate. In the case of monochromatic images, only one channel is mapped, imparting the appearance of blacks and whites. Color images, on the other hand, can be analyzed in different spectra, providing different mappings for each color analyzed. These values are ordered in the matrix given by a set of coordinates $\vec{x}$:

$$\vec{x} = (x, y) \in \mathbb{R}^2 \tag{2.1}$$

2.2.1.2. 3D Points - Point Cloud

Just like 2D points, 3D points are also a primitive that defines a basis for three-dimensional shapes. This approach takes the 2D points and augments the vector with a new dimension.

$$\vec{G} = (x, y, z) \in \mathbb{R}^3 \tag{2.2}$$

This approach will be used to define all of the points in the point cloud.

2.2.1.3. Camera Intrinsics

This matrix is equivalent to the projection of a 3D point through a small hole in a plane, commonly known as the pinhole camera model, as in Figures (2.5a, 2.5b). This ideal projection maps the intensity values in a plane by transforming 3D

camera-centered points to indexed integer pixel coordinates $(x_s, y_s)$ (Szeliski, 2010). This matrix is a composition of the translation, rotation, and distortion of the sensor due to the lens.

(a) Horizontal direction. (b) Vertical direction.

Figure 2.5 Camera Pinhole model.
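To preview how this matrix is used, the following minimal sketch projects a camera-centered 3D point to pixel coordinates with the intrinsic matrix $K$ defined in Equation (2.3) below; all numerical values here are illustrative assumptions, not parameters from this work.

```python
import numpy as np

# Assumed intrinsics (pixels): focal lengths, principal point, zero skew.
Fx, Fy = 2500.0, 2500.0
Cx, Cy = 1296.0, 972.0
S = 0.0

K = np.array([[Fx, S,  Cx],
              [0., Fy, Cy],
              [0., 0., 1.]])

# A 3D camera-centered point (meters), with Z along the boresight.
X = np.array([12.0, -4.0, 500.0])

# Ideal pinhole projection: normalize by depth, then apply K.
u, v, _ = K @ (X / X[2])
print(f"pixel coordinates: ({u:.1f}, {v:.1f})")
```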

$$K = \begin{bmatrix} F_x & S & C_x \\ 0 & F_y & C_y \\ 0 & 0 & 1 \end{bmatrix} \tag{2.3}$$

In some literature, $K$ might appear as the transpose. $F_x$ is the focal length in the horizontal direction, $F_y$ is the focal length in the vertical direction, $C_x$ is the image principal point in the horizontal direction, $C_y$ is the image principal point in the vertical direction, and $S$ is the shear or axis-skew coefficient, which accounts for lens distortion in the image.

2.2.2. Contrast Enhancement

Images acquired by the camera will often have defects caused by the instability of the sensors, the variation of the lighting conditions, noise from radiation, and the lack of contrast. These defects can be mitigated with point operators that suitably modify the gray levels (pixel intensity) of the image to enhance its visual qualities and boost its contrast. Amidst the techniques of image improvement, there are two major approaches used: the statistical approach based on the values of the gray levels of the image, and the spectral approach based on the spatial frequencies present in the image (Szeliski, 2010).

Figure 2.6 Example of histogram before equalization (Bradski, 2000).

Figure 2.7 Example of histogram after equalization (Bradski, 2000).

All the required transformations of the gray levels can be executed with operators that depend only on the actual gray level value of the input image, or by operators

that additionally take into consideration the position (i, j) of the pixel. A couple of examples of the latter kind of operator are the algorithms for contrast and histogram manipulation, as in Figures (2.6, 2.7).

Figure 2.8 Comparison of image after equalization (Bradski, 2000).

The common approach to redistribute the original measurement is to perform the following mapping:

$$C(I) = \frac{1}{N}\sum_{i=1}^{I} H(i) = C(I-1) + \frac{1}{N}H(I) \tag{2.4}$$

where $I$ is the original image (Figure 2.6), and $N$ is the number of pixels in $I$. This mapping integrates the distribution of the original histogram of the image $H(I)$ to find the cumulative distribution $C(I)$ and assigns the new value based on the corresponding percentile, as in Figure (2.7). Finally, this new pixel intensity assignment highlights more details in the image, improving the contrast as in Figure (2.8).
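As a minimal open-source sketch of this step (using OpenCV's CLAHE implementation; the file path, clip limit, and tile grid size are illustrative assumptions, not the flight values):

```python
import cv2

# Load a grayscale terrain image (path is a placeholder).
img = cv2.imread("lunar_surface.png", cv2.IMREAD_GRAYSCALE)

# Global histogram equalization: one mapping for the whole frame.
global_eq = cv2.equalizeHist(img)

# CLAHE: local equalization with limited contrast amplification,
# which avoids over-amplifying noise in nearly uniform regions.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
local_eq = clahe.apply(img)

cv2.imwrite("lunar_surface_clahe.png", local_eq)
```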

2.2.3. Superpixel Segmentation

Image segmentation is the process of partitioning an image into multiple segments. The intent is to render the entire image as something more noteworthy and more manageable to analyze. To put it differently, image segmentation is the method of allocating a label to each pixel in an image, such that pixels with the same label share certain characteristics. Multiple existing algorithms in computer vision use the pixel-grid segmentation approach as an underlying representation, for example, stochastic models of images or Markov random fields (Mester et al., 2011). Superpixel segmentation can be defined as a compact pixel-grid where the pixels share common characteristics, for instance, pixel intensity as in Figure (3.6). This approach allows the categorization of the images into a defined number of segments. This pixel-grid, however, is not a natural representation of the actual image (Mori et al., 2004). Superpixels are very useful in image processing and vision-based algorithms such as image segmentation, object detection, tracking, and many more. The main reason for them to be an adequate approach is that they are computationally efficient, as superpixels reduce the complexity of the images from many thousands of pixels to just a desired few. This segmentation can be used to locate objects and boundaries such as lines, curves, and more. Nowadays, there are multiple superpixel algorithms to choose from (Stutz et al., 2018); for this application, the method implemented is a derivation named Simple Linear Iterative Clustering (SLIC) (Achanta et al., 2012).

Figure 2.9 Comparison of SLIC implementations including SLICO (Bradski, 2000).

Within this algorithm, an option named SLICO, an adaptive Simple Linear Iterative Clustering, is selected, as in Figure (2.9); SLICO is characterized by autonomously adapting the size and compactness of the pixels in an optimal way, which can be very useful for computationally demanding problems. A way of generating

approximately equal superpixels is by defining a grid interval to apply to the image. This approach is described in (Stutz et al., 2018) and uses the number of pixels N and the number of desired superpixels k:

$$S = \sqrt{\frac{N}{k}} \tag{2.5}$$

This squared ratio reduces the chance of combining a good superpixel with noisy pixels, making them uniform.

2.2.4. Binary Segmentation and Conditionals

Continuing the line of image segmentation mentioned above, binary segmentation also allows labeling certain portions of the image based on a threshold. This will separate the image into a mask depending on the conditional used (<, ≤, =, ≥, and >). The advantage of this particular method is its simplicity and low computational cost:

binary mask = image > threshold (2.6)
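A brief open-source sketch combining superpixel segmentation and a binary conditional (using scikit-image's SLIC, where the slic_zero flag selects the SLICO variant; the file path, segment count, and shadow threshold are illustrative assumptions):

```python
import numpy as np
from skimage import io
from skimage.segmentation import slic

# Grayscale terrain image scaled to [0, 1] (path is a placeholder).
img = io.imread("lunar_surface.png", as_gray=True)

# SLICO superpixels: compactness is adapted per segment automatically.
labels = slic(img, n_segments=500, slic_zero=True, channel_axis=None)

# Mean intensity per superpixel, used to flag dark (shadowed) regions.
ids = np.unique(labels)
means = np.array([img[labels == i].mean() for i in ids])

# Binary conditional: mark superpixels darker than a chosen threshold.
shadow_mask = np.isin(labels, ids[means < 0.2])
```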

2.2.5. Oriented FAST and Rotated BRIEF

One of the crucial tools in a computer vision-based approach is feature detection. This tool allows identifying unique features in the images, as in Figure (2.10), that provide significant information such as motion (optical flow) or the relation between image sequences to perform photogrammetry (Structure from Motion). Some of the most recognized feature detectors are the Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF). The first is computationally efficient, but it was patented, restraining its use due to the costs; the patent for SIFT expired in March 2020, granting free use of the algorithm. Oriented FAST and Rotated BRIEF (ORB) is a fast, robust local feature detector that combines the Features from Accelerated and Segments Test (FAST) and the Binary Robust Independent Elementary Feature (BRIEF) descriptor (Rublee et al., 2011). ORB was a solution by the OpenCV foundation as a free, fast, and efficient algorithm (Karami et al., 2017).

Figure 2.10 ORB key points marked (Bradski, 2000).
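Before detailing the two components, a minimal detection sketch using OpenCV's ORB implementation (the feature count and file path are illustrative assumptions):

```python
import cv2

img = cv2.imread("lunar_surface.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# ORB detector: FAST corners scored across pyramid levels,
# each described by a rotated BRIEF binary descriptor.
orb = cv2.ORB_create(nfeatures=1000)
keypoints, descriptors = orb.detectAndCompute(img, None)

# Draw the detected key points, similar to Figure (2.10).
out = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imwrite("orb_keypoints.png", out)
```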

2.2.5.1. Features from Accelerated and Segments Test (FAST)

FAST is a corner detection technique used to extract feature points. This classification was derived from a machine learning approach in order to provide real-time feature detection while avoiding being computationally intensive (Rosten & Drummond, 2006). Due to this performance, FAST is one of the most used methods for real-time video processing. To detect corners, FAST chooses a pixel in an array, then compares the brightness of that pixel to the surrounding 16 pixels that lie in a small circle around it. The pixels in this circle are then classified into three classes: brighter, darker, or similar (Rosten & Drummond, 2006; Rublee et al., 2011). If at least 8 pixels are brighter or darker than the selected pixel, this pixel is classified as a key point. However, the FAST method is susceptible to scale: if the image is downsampled, some pixels will combine and key points may not be detected. Another possibility is that FAST may assign the same corner to multiple pixels. ORB, on the contrary, uses pyramid levels (Rublee et al., 2011) to discern and compare the key points.

2.2.5.2. Binary Robust Independent Elementary Feature (BRIEF)

ORB uses BRIEF to solve the corner coupling from FAST. The BRIEF algorithm helps to filter all the key points detected by FAST and converts them into a binary

feature vector (Calonder et al., 2010). This vector is also known as a descriptor; the descriptor becomes a unique identifier of the feature, solving the issue of having multiple pixels with the same corners. The combination of the key point and descriptor is known as an object, which improves the probability of matching the detected features with their corresponding pairs. BRIEF smooths the image using a Gaussian kernel to avoid creating sensitivity to high-frequency noise. Similar to FAST, BRIEF picks a patch of pixels around the selected pixel. Next, it selects a random pair of pixels from this patch and compares them using the standard deviation of a Gaussian distribution from the key point and the first pixel. If the first pixel is brighter than the second, the flag is set to 1, otherwise to 0 (ergo, the binary name).

2.2.6. Occupancy Grid Mapping

Occupancy grid maps are significant tools for computer vision-based algorithms to address the issue of mapping the environment to avoid obstacles or hazards (Collins & Collins, 2007; Elfes, 1989). To create this grid, a fractal approach based on the quadtree algorithm was used, specifically the region quadtree (Kraetzschmar et al., 2004). This algorithm outlines a distribution of the image space in 2D by decomposing the region into four equal sections, sub-sections, and so forth, as in Figure (2.11). Each leaf node contains data corresponding to a specific subregion. Once the initial space is divided, each node in the tree either has exactly four subregions or none, as in Figure (2.13). Quadtrees that follow this decomposition strategy (subdividing into subregions as long as there is interesting data in a subregion for which more refinement is desired) are sensitive and dependent on the spatial distribution of compelling areas and the details in the space being decomposed, as in Figure (3.9). To process a defined region of an image using the quadtree, the image should be square with side lengths that are a power of two (2^n), to guarantee that it can always be subdivided into four subregions, where each pixel value is 0 or 1.

Figure 2.11 Example of quadtree distribution.

Figure 2.12 Synthetic image for testing quadtree. Different intensities to test decomposition.
Figure 2.13 Quadtree output detecting the changes in the intensity.

Finally, to make it an occupancy grid, the value of the intensity and its standard deviation provide the information. For example, a black value (0 in uint8) can be represented as a hazard.
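To make the decomposition concrete, the following is a minimal recursive sketch of a region quadtree over a binary hazard mask (the split criterion, random mask, and minimum cell size are illustrative assumptions, not the flight implementation):

```python
import numpy as np

def quadtree(mask, x, y, size, min_size, cells):
    """Recursively split a square region of a binary hazard mask.

    A region becomes a leaf when it is uniform (all safe or all
    hazard) or when the minimum cell size is reached.
    """
    block = mask[y:y + size, x:x + size]
    if size <= min_size or block.min() == block.max():
        # Leaf: store location, size, and occupancy (1 = hazard present).
        cells.append((x, y, size, int(block.max())))
        return
    half = size // 2
    for dx in (0, half):
        for dy in (0, half):
            quadtree(mask, x + dx, y + dy, half, min_size, cells)

# Example: 512x512 mask (power of two side), random hazards for illustration.
rng = np.random.default_rng(0)
mask = (rng.random((512, 512)) < 0.02).astype(np.uint8)
cells = []
quadtree(mask, 0, 0, 512, 8, cells)
print(len(cells), "leaf cells")
```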

2.2.7. Structure from Motion

The structure from motion (SfM), or stereophotogrammetry, technique is a photogrammetric range imaging method for estimating the 3D coordinates of points and different point structures (point cloud-based 3D models) from two-dimensional image sequences that can be linked with body motion sensor measurements (Häming & Peters, 2010; Koenderink & Van Doorn, 1991; Schonberger & Frahm, 2016; Westoby et al., 2012). This method follows a common approach, as depicted in Figure (2.14). This technique is studied in many fields of computer vision and visual-based perception, as it produces models similar to LiDAR approaches without having to rely on an expensive device. In a planetary landing with a small spacecraft, a LiDAR might take up a lot of functional payload space, making a camera a more viable method to generate this point cloud.

Figure 2.14 Example of general SfM pipeline.

The main advantage of this technique is that it can be used to create high-resolution surface models with simple passive cameras. However, the main disadvantage is that it is highly sensitive to the imaging conditions, such as the attitude of the camera and the light source. Although this might represent a great disadvantage, the user can still plan a trajectory that allows the most control of the environment to create the DEM.

To solve this problem, triangulation is used to calculate and build the relative positions of objects in a virtual space (x, y, z). The benefit of this application is that a DEM can be built using as few as two images, as long as they have a high degree of overlap and are taken from different angles, as in Figures (2.15, 2.16).

Figure 2.15 Example of SfM input (Bradski, 2000).

Figure 2.16 SfM point cloud output (Bradski, 2000).

Finally, to guarantee the quality of the points when performing different regressions, some statistical methods that can be used are RANSAC (Nistér, 2005), MLESAC (P. H. Torr & Zisserman, 2000), MAPSAC (P. H. S. Torr & Davidson, 2003), or KALMANSAC (Vedaldi et al., 2005). The chosen method is applied to help filter and remove outliers and improve the quality of the dataset.
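As a two-view sketch of the core SfM steps with OpenCV (ORB matching, RANSAC-based essential-matrix estimation, and triangulation; the intrinsic matrix values and image paths are illustrative assumptions):

```python
import cv2
import numpy as np

K = np.array([[2500., 0., 1296.],   # assumed intrinsics (Eq. 2.3)
              [0., 2500., 972.],
              [0., 0., 1.]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)  # placeholder

# Detect and match ORB features between two overlapping frames.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC to reject outlier correspondences.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# Triangulate the inlier matches into a 3D point cloud (up to scale).
good = inliers.ravel().astype(bool)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
cloud = (pts4d[:3] / pts4d[3]).T   # N x 3 point cloud
```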

2.2.8. Surface Fitting

To provide the last layer of information, the points obtained from the Structure from Motion photogrammetric technique are fitted to a plane. A generic plane passing through a surface, as in Figure 2.17, is given by:

$$a x + b y + c z = 1 \tag{2.7}$$

where $a$, $b$, and $c$ are coefficients that are obtained from the fitting process, and $d$ can be set by the user to match a defined regression. This plane can be described by the normal vector $\vec{p}$, perpendicular to that particular plane, composed of those parameters:

$$\vec{p} = [a\ b\ c]^T \tag{2.8}$$

Figure 2.17 Plane fitting in a surface made from a point cloud (the diagram shows the fitted-plane normal $\vec{p}$, the angle β with respect to the local gravity vector $\vec{g}$, the local horizon, and the roughness distances $d_e$).

There are different methods to fit the points, such as Linear Least-Squares (LLS), which solves the classic equation $A\vec{x} = \vec{b}$, and the other statistical methods mentioned in the SfM section. This plane provides information such as the angle of the plane and the fitting error, as described in (Cui et al., 2017).

$$G\vec{p} = \vec{d}$$

$$\vec{p} = (G^T G)^{-1} G^T \vec{d} \tag{2.9}$$

$$[a\ b\ c]^T = (G^T G)^{-1} G^T \vec{d} \tag{2.10}$$

where $G$ is the point cloud matrix, and $\vec{d}$ is a column vector of ones. Once the coefficients are obtained, the angle β between the local gravity vector

($\vec{g} = [0\ 0\ 1]^T$) and the normal vector $\vec{p}$ is obtained using the dot product rule:

$$\beta = \cos^{-1}\left(\frac{|\vec{p} \cdot \vec{g}|}{\|\vec{p}\| \, \|\vec{g}\|}\right) \tag{2.11}$$

$$\beta = \cos^{-1}\left(\frac{|c|}{\sqrt{a^2 + b^2 + c^2}}\right) \tag{2.12}$$

If the angle is larger than 90°, as in Figures (2.18a, 2.18b), due to the direction of the normal vector, the angle can be corrected as:

$$\beta = 180^{\circ} - \beta \tag{2.13}$$

Finally, the fitting error that represents the roughness of the surface, $d_e$, can be calculated as:

$$d_e = \frac{|a x_k + b y_k + c z_k - 1|}{\sqrt{a^2 + b^2 + c^2}}, \quad k = 1, 2, \ldots, n \tag{2.14}$$
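A compact numerical sketch of Equations (2.9)-(2.14), fitting a plane to an assumed N x 3 point cloud and reporting the slope angle and roughness (the synthetic test patch is purely illustrative):

```python
import numpy as np

def fit_plane(cloud):
    """Least-squares fit of ax + by + cz = 1 to an N x 3 cloud (Eqs. 2.9-2.10)."""
    G = cloud                                  # point cloud matrix
    d = np.ones(len(cloud))                    # column vector of ones
    p, *_ = np.linalg.lstsq(G, d, rcond=None)  # normal vector [a, b, c]

    # Slope angle vs. local gravity vector g = [0, 0, 1]^T (Eq. 2.11).
    beta = np.degrees(np.arccos(p[2] / np.linalg.norm(p)))
    if beta > 90.0:                            # normal points downward (Eq. 2.13)
        beta = 180.0 - beta

    # Roughness: per-point fitting error (Eq. 2.14).
    d_e = np.abs(G @ p - 1.0) / np.linalg.norm(p)
    return p, beta, d_e

# Illustrative use with a synthetic, gently sloped, noisy patch.
rng = np.random.default_rng(1)
xy = rng.uniform(-5.0, 5.0, size=(200, 2))
z = 0.05 * xy[:, 0] + 2.0 + 0.01 * rng.standard_normal(200)
p, beta, d_e = fit_plane(np.column_stack([xy, z]))
print(f"slope = {beta:.2f} deg, max roughness = {d_e.max():.4f}")
```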

(a) Plane fitted in blue, point cloud in red. (b) Side view of point cloud with plane and vectors.

Figure 2.18 Point cloud with plane and vectors.

2.3. Test Cases

Part of the challenge of developing HDA algorithms is how to evaluate them on real hardware before sending them on a mission. Different approaches have been made

to mimic on Earth the terrain of the Moon and Mars in Figure (2.19) (Rutishauser et al., 2012). Although they provide an accurate test-bed, having some imagery of the

Figure 2.19 Hazard field constructed at NASA KSC’s SLF. Inset is a digital elevation map of the design JPL provided KSC to build the field (Rutishauser et al., 2012).

different celestial bodies can be more helpful in order to create the logic that will be able to deal with the different environments at the moment of the deployment of the HDA suite. Some of these images will be pulled from datasets obtained by previous spacecraft missions with science experiments sent to different celestial bodies, such as LRO orbiting the Moon and MRO orbiting Mars, in Figures (2.20, 2.21). This imagery is important as it showcases the actual hazards that the lander might encounter, as in Figure (2.22). Additionally, these images, along with observations since the Apollo missions, have provided valuable data to create math models and analyses of the distribution of these hazards, such as rocks (Lunar Planetary Institute, 2007) or craters (Neukum et al., 1975). Using this imagery presents another difficult challenge because some data have to be simulated accurately, as there are no real data to use, and this is no easy task. Some solutions are being developed, such as PANGU

Figure 2.20 LROC M1195740691R (NASA/GSFC/Arizona State University, 2009).

Figure 2.21 MRO image of the Opportunity rover ILS (NASA/GSFC/Arizona State University, 2005).

by the University of Dundee (Parkes et al., 2004) or the DSENDS software by JPL (Balaram et al., 2002). In this thesis, some of the imagery tested in Figures (2.23a, 2.23b) is courtesy of Dr. John Christian and his team at the Sensing, Estimation, and Automation Laboratory at the Rensselaer Polytechnic Institute (Christian & Lightsey, 2012; Rhodes et al., 2017).

Figure 2.22 LRO LROC close-up on Lunar hazards (NASA/GSFC/Arizona State University, 2009).

(a) Lunar surface. (b) Lunar surface.

Figure 2.23 Synthetic Lunar surface (Hong & Christian, 2020).

3. Methodology

This chapter presents the application of the different methods and techniques reviewed in Chapter 2 to create the Hazard Detection and Avoidance algorithm.

3.1. Hazard Detection and Avoidance Algorithm Flow Chart

Figure 3.1 Flow Chart for passive HDA.

3.2. Hazard Detection and Avoidance

This section will expand on the HDA algorithm. As an initial approach, these algorithms are developed based on passive sensors (Adams et al., 2008; Cheng et al., 2001; Huertas et al., 2006; Neveu et al., 2015) and could be extended to include active sensors in the future work of Chapter 5 (A. E. Johnson et al., 2002). To perform this hazard detection, the process is simplified into three main tasks: image enhancement, image processing, and landing site scoring. These three tasks are described below, and a high-level interaction between the methods and the tasks is laid out in the HDA Flow Chart in Figure (3.1).

3.2.1. Image Enhancement

During this task, the images are enhanced by modifying the contrast and bringing out details that might be missed at first sight. One pixel in the snapshot might represent part of a rock that could damage the lander. MATLAB®

provides multiple functions within the Image Processing Toolbox™ (Mathworks, 2020) to increase the signal-to-noise ratio and accentuate image features by modifying the colors or intensities of an image. First, to remap the dynamic range, the applied method is the MATLAB® implementation of Contrast Limited Adaptive Histogram Equalization (CLAHE) (Szeliski, 2010).

Figure 3.2 Original histogram of image from RPI.
Figure 3.3 CLAHE histogram equalization to redistribute pixel intensity.

This algorithm computes multiple histograms to perform a local histogram equalization instead of applying the same distribution across the whole image. This approach also limits the amplification of the contrast; such amplification could insert noise into the image, creating undesired effects. In the case of PL&HA, ARPOD, and other space applications, contrast enhancement can be used to bring out details that might otherwise be omitted.

Figure 3.4 Lunar surface before histogram equalization.
Figure 3.5 Lunar surface after histogram equalization.

3.2.2. Image Processing

Once the image enhancement is complete, the images are ready to be processed. This task is crucial, as the output is entirely dependent on the correct image processing tools. To extract a large amount of data, an approach from big to small is implemented. The first layer of analysis is the superpixel approach. The MATLAB® implementation uses the SLIC algorithm. The superpixels cluster areas of similar intensity, as in Figure (3.6). For the PL&HA case, the dark areas are the fundamental hazard to eliminate.

Figure 3.6 Lunar surface divided in superpixels.
Figure 3.7 Lunar surface with pixels labeled.

These obscure zones serve only one purpose when using a passive sensor such as a camera: to not operate or direct the spacecraft for landing in those zones. The output of the superpixels is a label that contains the information about the first possible areas to land, as in Figure (3.7). This approach refines the data processing in the next step. That label is converted to a binary mask, as in Figure (3.8), to execute a logic approach that filters the superpixels based on size and intensity and stores the location of each centroid in the same structure. This particular segmentation is ideal for this application to detect shadows or ultra-bright spots that could represent steep slopes or craters. This process narrows the data to be analyzed. Next, the occupancy grid is created from a tree data structure in which each internal node has exactly four children (quadtree). The division of the grid is driven by the difference in dynamic range in each superpixel, as in Figures (3.9, 3.10). This grid is refined by labeling only the children where the VFDE can fit. At the same time, these images are passed down to the structure from motion module to reconstruct a point cloud, as in Figures (3.11a, 3.11b).

Figure 3.8 Labels after binary conditionals applied to filter shadows.

Figure 3.9 Binary occupancy grid used for region of interest analysis.
Figure 3.10 Lunar surface occupancy grid made with quadtree.

MATLAB® provides multiple functions within the Computer Vision Toolbox™ (Mathworks, 2020) for designing and testing computer vision, 3D vision, and video processing systems. This point cloud is used to check the next hazards: slopes and rocks. From the previous grid, the locations where the VFDE fits are verified for slope and rocks.

(a) Front view of point cloud. (b) Side view of point cloud.

Figure 3.11 Point cloud reconstruction using 50 images of the approach at 10 Hz. Camera positions in red.

3.2.3. Landing Site Scoring

The last task to perform is scoring. All the information obtained by the image processing is used to make a final decision. A value is assigned to each hazard individually in order to rank the top five landing sites. The metric range for each hazard goes from 1 to 10, with 10 being the best and 1 the worst. Therefore, the algorithm minimizes a cost function of the sum of the inverses of each hazard metric:

$$J = k_1 \frac{1}{\text{slope}} + k_2 \frac{1}{\text{roughness}} + k_3 \frac{1}{\text{area}} \tag{3.1}$$

where $k_1$, $k_2$, and $k_3$ are weights that can be adjusted depending on the knowledge of the terrain and their significance to the lander's landing system. Because this algorithm only focuses on HDA, fuel constraints are not considered at this level of the architecture. Instead, the result from the cost function is passed to the navigation system, which takes care of fuel consumption within the designated landing sites. Other approaches, such as the safety index, have been developed (Cui et al., 2017), but they require solving a complex optimization problem that demands a large amount of computational resources. Therefore, this cost function has been developed to minimize computational requirements.
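A small sketch of this ranking step (the weights and candidate metrics are illustrative assumptions; each metric is a 1-10 score where 10 is best, so lower cost J is better):

```python
def score(site, k1=1.0, k2=1.0, k3=1.0):
    """Cost J from Eq. (3.1); lower is better."""
    return k1 / site["slope"] + k2 / site["roughness"] + k3 / site["area"]

# Hypothetical candidate sites with 1-10 metrics (10 = best).
candidates = [
    {"id": 1, "slope": 9, "roughness": 7, "area": 8},
    {"id": 2, "slope": 4, "roughness": 9, "area": 6},
    {"id": 3, "slope": 8, "roughness": 8, "area": 9},
]

# Rank by ascending cost and keep up to the top five for the GNC system.
top5 = sorted(candidates, key=score)[:5]
print([c["id"] for c in top5])
```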

3.2.4. Crater Detection

The objective of this task is to detect, during the terminal descent in the Hazard Detection and Avoidance maneuver, any crater or rock that could jeopardize the mission. Some approaches to this problem using passive methods can be found in these works (Cohen et al., 2016; Emami et al., 2015; Jahn, 1994; Leroy et al., 2001; Salamuniccar & Loncaric, 2010; Smirnov, 2002; Woicke et al., 2018; Yu et al., 2014). These included gradient detection, Hough circles, shadow detection, ellipse fitting, and convolutional neural networks (CNN). First, an initial passive shadow-detection approach is used to detect the dark side of the craters and rocks. Then, a secondary layer tries to detect the bright side. Third, both masks, including bright and shadow zones, are combined to get an average centroid measurement of the feature. Fourth, a circle is fitted to cover the area to reduce the possibility of hitting that mark. Finally, this new mask is combined with the occupancy grid. This helps to eliminate landing areas that might contain a rock that was not accounted for previously. Due to the number of features that could show up in the image, this task is computationally expensive; therefore, it has been removed from this implementation. This method detects multiple craters, although there might be better approaches that can be taken into account for future iterations of the algorithm, such as a Convolutional Neural Network that can be trained to detect craters under more variable conditions (Vinogradova et al., 2002). These networks require less computational burden as they are pretrained on Earth. This task will be added as part of the future work in Chapter 5.
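A minimal sketch of the shadow and bright-side detection with ellipse fitting described above, using OpenCV (the file path, thresholds, and minimum contour area are illustrative assumptions):

```python
import cv2

img = cv2.imread("crater_roi.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Steps 1-2: segment the shadowed and bright sides of the crater.
_, shadow = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY_INV)
_, bright = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)

# Step 3: combine both masks to localize the full feature.
feature = cv2.bitwise_or(shadow, bright)

# Step 4: fit an ellipse over the shadow zone to cover the hazard.
contours, _ = cv2.findContours(shadow, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if len(cnt) >= 5 and cv2.contourArea(cnt) > 50:
        ellipse = cv2.fitEllipse(cnt)        # ((cx, cy), (w, h), angle)
        cv2.ellipse(img, ellipse, 255, 2)

# Step 5: the combined mask would then be merged with the occupancy grid.
```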

Figure 3.12 Crater cut out from Figure 3.16. Figure 3.13 Crater region of interest (ROI).

Figure 3.14 Common segment detection (bright and shadow). Figure 3.15 Detection of bright and shadow zones of the crater, and ellipse fitting over the shadow zone.

Figure 3.16 LROC Dante Crater M121044107R (NASA/GSFC/Arizona State University, 2009). Figure 3.17 Detection of craters and rocks over Occupancy Grid.

3.3. Datasets
Testing an HDA algorithm requires high-fidelity images that resemble the actual situation, where a spacecraft might be under high failure probability due to any risk or hazard in its surroundings. Obtaining real data is laborious, as it requires risking a multi-million-dollar mission. For this reason, tools like PANGU (Parkes et al., 2004), DSENDS (Balaram et al., 2002), or different physics engines developed for video games and content creation have been very useful to create and simulate accurate synthetic data. NASA, the Air Force Research Laboratory (AFRL), and the Department of Defense (DoD) have used tools such as Gazebo (Open Source Robotics Foundation, 2014), a robotic simulation environment that provides realistic scenarios (Allan et al., 2019), in combination with the Robot Operating System (ROS) and the Dynamic Animation and Robotics Toolkit (DART) (Wlodarczyk, 2014), which provides a complete physics engine (Siciliano & Khatib, 2016).

3.3.1. Mars Reconnaissance Orbiter Imagery
The Mars Reconnaissance Orbiter (MRO) is a spacecraft designed by NASA. Launched in 2005, it reached Mars orbit in 2006 (Zurek & Smrekar, 2007). The spacecraft currently operates in a near-polar orbit chosen to make the most of its different scientific experiments. Its data have been used to help define safe landing sites for the Phoenix lander (2007), Mars Science Laboratory (2012), InSight lander (2018), and the Perseverance rover (2020).

(a) West of image. (b) East of image.

Figure 3.18 MRO close-up of ESP_054035_1915 RED (NASA/GSFC/Arizona State University, 2005).

Built by Lockheed Martin, the spacecraft incorporates several scientific payloads. The main goals were to provide data on the climate, search for water, and map and characterize the different geological features. The secondary objectives were to act as a communications relay for surface missions and to assess the safety of potential future landing sites and Mars rover traverses. For the purpose of Hazard Detection and Avoidance, the experiment with the most valuable data is the High-Resolution Imaging Science Experiment (HiRISE) (McEwen et al., 2007).

MRO has exceeded its operational lifetime and is currently still actively orbiting Mars (≈ 14 years). HiRISE provides images with a resolution of up to 30 centimeters/pixel.

3.3.2. Lunar Reconnaissance Orbiter Imagery
The Lunar Reconnaissance Orbiter (LRO) is a mission designed by NASA and launched in 2009 (Chin et al., 2007). This spacecraft is located in an eccentric polar orbit in order to make the most of its different scientific experiments. These data are considered highly valuable, as they have been used for different missions and are being used for planning future human Moon exploration.

(a) Plato Mare. (b) Vikram intended landing site close up.

Figure 3.19 LROC imagery (NASA/GSFC/Arizona State University, 2009).

The majority of the scientific experiments were designed for mapping different features on the Moon. For Hazard Detection and Avoidance, the two experiments with the most valuable scientific data are the Lunar Orbiter Laser Altimeter (LOLA) and the Lunar Reconnaissance Orbiter Camera (LROC). LOLA was designed to provide the most precise global lunar topographic model. LROC, on the other hand, was meant to provide verification of the Apollo landings and to find potential future landing sites through measurement requirements of landing site certification and polar illumination (NASA/GSFC/Arizona State University, 2009; Robinson et al., 2010). LROC was designed based on the success of the MRO mission. The mission lifetime was planned for one year, but due to the quality of the scientific data, the mission has been renewed and extended to the current day (≈ 11 years). LOLA provides DEMs with a resolution of up to 100 meters/pixel, and LROC up to 50 centimeters/pixel (Riris et al., 2008).

Figure 3.20 LROC Interface to search the Planetary Data System files.

3.3.3. Intuitive Machines and RPI Imagery
The Sensing, Estimation, and Automation Laboratory (SEAL) at RPI has provided images to test the algorithm under parameters very close to the actual expected conditions for the NOVA-C mission. These synthetic images are produced with Unreal Engine (Epic Games, 2019), a physics engine developed by Epic Games for video game development. It makes it possible to simulate the trajectory of the camera as if it were mounted on the actual lander during the descent, providing a “real” image set of what the camera sensor would capture. The extra benefit of this approach is that different rock model distributions can be tested (M. Golombek & Rapp, 1997; M. P. Golombek et al., 2003; Li et al., 2017; Lunar Planetary Institute, 2007).

Figure 3.21 Lunar surface generated in Unreal Engine (Hong & Christian, 2020).

3.3.3.1. Hardware Implementation
For the purpose of this thesis and the set of images acquired from RPI, the camera model described in Table 3.1 is used.

Table 3.1

5MP Camera Parameters: Camera Specifications for Synthetic Imagery

Parameter            Value
Active Array Size    2160 (H) × 2560 (V)
ADC Resolution       22 bits (2 × 11-bit)
Field of View        33.3°
Chroma               RGB or Monochrome
Dynamic Range        > 83.5 dB

Using the ideal pinhole camera model, the camera intrinsic parameters can be obtained for calibration purposes (Szeliski, 2010). This model is a good approximation, as the lens distortion is not as high as it would be in a fish-eye lens. The camera intrinsic matrix $K$ maps points into the 2D pixel-grid coordinate system. Using Equation 2.3 and the values in Table 3.1,

$$K = \begin{bmatrix} 3611.27 & 0 & 1080 \\ 0 & 4280.03 & 1280 \\ 0 & 0 & 1 \end{bmatrix} \qquad (3.2)$$
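The entries of Equation 3.2 follow directly from the pinhole model. The short sketch below reproduces them from Table 3.1, under the assumption that the 33.3° field of view applies about each image axis and that the principal point sits at the image center.

    # Sketch: intrinsic matrix of Equation 3.2 from the pinhole model and
    # Table 3.1 (assumes the FOV applies per axis, principal point centered).
    import numpy as np

    w, h = 2160, 2560                  # active array size, (H) x (V) pixels
    fov = np.deg2rad(33.3)             # field of view from Table 3.1

    fx = (w / 2) / np.tan(fov / 2)     # ~3611 pixels
    fy = (h / 2) / np.tan(fov / 2)     # ~4280 pixels
    K = np.array([[fx, 0.0, w / 2],
                  [0.0, fy, h / 2],
                  [0.0, 0.0, 1.0]])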

4. Results
This chapter presents the results of the algorithm on the different data sets. The sets from LROC and MRO HiRISE provide imagery only; therefore, they are tested only for the 2D mapping approach. In contrast, the RPI sets include all the required camera dynamics (including the location of the camera on the spacecraft) to test the complete algorithm, including the SfM stage that creates the point cloud of the corresponding landing sites.

4.1. LROC Imagery Results
The image in Figure (4.1a) is a reconstruction of the Plato Mare on the surface of the Moon. It was created using the Java Mission-Planning and Analysis for Remote Sensing (JMARS) tool (Christensen et al., 2009), using as a direct source the products obtained from LROC and stored in the Planetary Data System (PDS). As the image is already squared, there is no need to graphically show that step of the algorithm. This image represents a situation where the lander could perform an initial assessment at an altitude of more than 100 km. Once the lander gets close to the intended landing site, it can run the complete algorithm to build the surface map with the actual roughness measurement.

4.1.1. Image Enhancement
Following the process described in Chapter 3, the original image portrayed in Figure (4.1a) is loaded into the system and enhanced using the adaptive histogram equalization method. The outcome of this process, displayed in Figure (4.1b), is an image with increased contrast, particularly on details such as hazard features; e.g., inside Plato more craters are identified. This also helps to identify changes in the surface material by displaying a more complete dynamic range.

4.1.2. Image Processing and Computer Vision
Continuing with the algorithm, the image is now ready to be analyzed. The first segmentation is the superpixels process. An initial layer is obtained as in Figure (4.2a). This layer is then refined in Figure (4.2b) to improve the quality of the superpixels.

(a) LROC Original image. (b) LROC Enhanced image.

Figure 4.1 JMARS LROC Plato (NASA/GSFC/Arizona State University, 2009).
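For reference, the enhancement step shown in Figure (4.1b) can be reproduced with the contrast-limited adaptive histogram equalization (CLAHE) available in OpenCV (cf. Reza, 2004); the file name, clip limit, and tile size below are illustrative assumptions, not the tuned values behind these results.

    # Sketch of the adaptive histogram equalization step using CLAHE;
    # the file name and parameters are illustrative placeholders.
    import cv2

    img = cv2.imread("lroc_plato.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img)  # higher local contrast, wider dynamic range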

(a) LROC Superpixels detection. (b) LROC Refined superpixels.

Figure 4.2 Superpixels segmentation process.
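The segmentation just shown, along with the per-superpixel statistics used in the next step, can be sketched with the SLIC implementation in scikit-image (Achanta et al., 2012; van der Walt et al., 2014); the segment count and compactness below are assumptions, not the thesis settings.

    # Sketch of SLIC superpixel segmentation plus per-segment statistics;
    # n_segments and compactness are illustrative assumptions.
    from skimage.measure import regionprops
    from skimage.segmentation import slic

    def superpixel_stats(gray, n_segments=150):
        labels = slic(gray, n_segments=n_segments, compactness=0.1,
                      channel_axis=None, start_label=1)
        stats = [{"label": r.label,
                  "mean_intensity": r.mean_intensity,  # bright/dark filtering
                  "area": r.area,
                  "centroid": r.centroid}
                 for r in regionprops(labels, intensity_image=gray)]
        return labels, stats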

From each superpixel, information such as the mean intensity, size, and centroid is calculated. Then, this information is used to create a mask label as in Figure (4.3) with the initially selected superpixels. Ultra-bright areas are omitted, as they could indicate a slope reflecting more sunlight. Dark areas are avoided, as they could represent a deep crater or hide other unobservable features in the shadow.

Figure 4.3 LROC Labels of possible landing areas.

The identified superpixels that meet the criteria are individually analyzed for features using the quadtree occupancy grid map. This quadtree subdivision clearly tracks features such as rocks, craters, and changes in surfaces in Figure (4.4). Finally, a complete map as in Figure (4.5a) is made of the multiple quadtree zones, providing ideal areas for an initial landing site designation. These can be graphically identified for the viewer in Figure (4.5b), as the computer uses the binary occupancy grid map for the calculations. Because the altitude of the spacecraft at the time of the image is greater than 100 km, the outcome should not be used to designate a final landing site. Instead, it can be used to designate an initial landing site and navigate towards it. Once the spacecraft is closer to the surface and the initial landing site, the code can be executed again to perform the complete HDA maneuver and designate the final landing site.
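A minimal recursive sketch of the quadtree decomposition is given below: a block is split while its intensity range exceeds a threshold, so small leaves accumulate over rocks, craters, and surface changes. The threshold and minimum block size are assumptions, not the tuned values used for the figures.

    # Minimal quadtree decomposition sketch; threshold and minimum block
    # size are illustrative assumptions.
    def quadtree(img, x, y, size, thresh=20, min_size=8, leaves=None):
        if leaves is None:
            leaves = []
        block = img[y:y + size, x:x + size]
        if size <= min_size or int(block.max()) - int(block.min()) <= thresh:
            leaves.append((x, y, size))   # homogeneous (or smallest) block
            return leaves
        half = size // 2                  # split into four quadrants
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            quadtree(img, x + dx, y + dy, half, thresh, min_size, leaves)
        return leaves

    # Small leaves mark high-variation areas and are flagged as occupied in
    # the binary occupancy grid; large leaves indicate candidate safe areas.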

Figure 4.4 LROC Quadtree decomposition.

(a) LROC Occupancy grid map. (b) LROC Map overlay.

Figure 4.5 Occupancy grid mapping and feature detection.

4.2. MRO HiRISE Imagery Results
The following image is a close-up from a larger snapshot (ESP_054035_1915, 5 km × 5 km) of an analysis being done by NASA/JPL/ASU on recent crater impact zones on Mars. The image was acquired by HiRISE in February 2018, at approximately 278 km altitude (NASA/JPL/ASU, 2018). Additional information that can be extracted from this image for future work includes different color channels, a complete DEM with 1 m/pix resolution, and the image scale. Because the image is already squared, there is no need to graphically show that step of the algorithm.

4.2.1. Image Enhancement
Following the process described in Chapter 3, the original image portrayed in Figure (4.6a) is loaded into the system and enhanced using the adaptive histogram equalization method.

(a) Original Image. (b) Enhanced Image.

Figure 4.6 JMARS MRO ESP_054035_1915 (NASA/GSFC/Arizona State University, 2005).

The outcome of this process, shown in Figure (4.6b), is an image with increased contrast, particularly on details such as hazard features; e.g., in the plains more craters are identified. This also helps to identify changes in the surface material by displaying a more complete dynamic range.

4.2.2. Image Processing and Computer Vision
Continuing with the algorithm, the image is now ready to be analyzed. The first segmentation is the superpixels process. An initial layer is obtained as in Figure (4.7a). This layer is then refined as in Figure (4.7b) to improve the quality of the superpixels.

(a) MRO Superpixels detection. (b) MRO Refined Superpixels.

Figure 4.7 Superpixels segmentation process.

Figure 4.8 MRO Labels of possible landing areas.

From each superpixel, information such as the mean intensity, size, and centroid is calculated. Then, this information is used to create a mask label as in Figure (4.8) with the initially selected superpixels. Ultra-bright areas are omitted, as they could indicate a slope reflecting more sunlight. Dark areas are avoided, as they could represent a deep crater or hide other unobservable features in the shadow. In this particular case, the southwest area is darker; therefore, more superpixels are omitted from that area. The thresholds can be adjusted for each particular case to account for multiple illumination conditions.

Figure 4.9 MRO Quadtree decomposition.

The identified superpixels that meet the criteria are individually analyzed for features using the quadtree occupancy grid map. This quadtree subdivision clearly tracks features such as rocks, craters, and changes in surfaces in Figure (4.9). Finally, a complete map as in Figure (4.10a) is made of the multiple quadtree zones, providing ideal areas for an initial landing site designation. These can be graphically identified for the viewer in Figure (4.10b), as the computer uses the binary occupancy grid map for the calculations.

(a) MRO Occupancy Grid map. (b) MRO Overlay of Occupancy Grid.

Figure 4.10 Occupancy grid mapping and feature detection.

Just like the LROC case, because the altitude of the spacecraft at the time of the image is ≈ 278 km, the outcome should not be used to designate a final landing site. Instead, it can be used to designate an initial landing site and navigate towards it. Once the spacecraft is closer to the surface and the initial landing site, the code can be executed again to designate the final landing site.

4.3. RPI Set 1 Imagery Results
This image was synthetically generated by RPI and represents the intended landing site for the NOVA-C mission by Intuitive Machines. Located at Lat. 24, Long. -50, this landing spot was chosen because it has a low rock distribution and the LOLA topography displays level terrain. This snapshot is a reconstruction of the LOLA and LROC stereo images, providing a high-quality DEM. In order to test HDA against hazards as small as 30 cm/pix, the DEM is upscaled and rocks are added. This image is the same as Figure (3.21) with the addition of rocks using the rock distribution values from the Lunar Planetary Institute (Lunar Planetary Institute, 2007).

4.3.1. Image Enhancement
Following the process described in Chapter 3, the original image portrayed in Figure (4.11a) is loaded into the system and enhanced using the adaptive histogram equalization method.

(a) RPI Original image. (b) RPI Enhanced image.

Figure 4.11 Synthetic Image (Hong & Christian, 2020).

The outcome of this process, as in Figure (4.11b), is an image with increased contrast, particularly on details such as hazard features; e.g., the shadows from the rocks and the roughness are highlighted, providing more information for the feature detection section with an increased dynamic range. The final step of the enhancement process is to square the image as in Figure (4.12). The original image is not square and reflects the real resolution of the actual camera.

Figure 4.12 RPI Squared Image.

4.3.2. Image Processing and Computer Vision
Continuing with the algorithm, the image is now ready to be analyzed. The first segmentation is the superpixels process. An initial layer is obtained as in Figure (4.13a). This layer is then refined as in Figure (4.13b) to improve the quality of the superpixels.

(a) RPI Superpixels detection. (b) RPI Refined Superpixels.

Figure 4.13 Superpixels segmentation process.

Figure 4.14 Labels of possible landing areas.

From each superpixel, information such as the mean intensity, size, and centroid is calculated. Then, this information is used to create a mask label as in Figure (4.14) with the initially selected superpixels. Ultra-bright areas are omitted, as they could indicate a slope reflecting more sunlight. Dark areas are avoided, as they could represent a deep crater or hide other unobservable features in the shadow. In this particular case, the shadows and craters are omitted the most. The thresholds can be adjusted for each particular case to account for multiple illumination conditions. The identified superpixels that meet the criteria are individually analyzed for features using the quadtree occupancy grid map. This quadtree subdivision clearly tracks features such as rocks, craters, and changes in surfaces in Figure (4.15). Finally, a complete map as in Figure (4.16a) is made of the multiple quadtree zones, providing ideal areas for an initial landing site designation. These can be graphically identified for the viewer in Figure (4.16b), as the computer uses the binary occupancy grid map for the calculations. For the RPI case, the altitude of the spacecraft at the time of the image is ≈ 400 m, so the outcome can be used to designate a final landing site. Because this data set does not include attitude information, only half of the algorithm is tested to demonstrate the area selection; the slope and roughness measurements are therefore missing and will be tested on the next set.

Figure 4.15 Quadtree decomposition.

(a) RPI Occupancy Grid map. (b) RPI Overlay of Occupancy Grid.

Figure 4.16 Occupancy grid mapping and feature detection.

4.4. RPI Set 2 Imagery Results
Just like the previous set, this image was synthetically generated by RPI and represents the intended landing site for the NOVA-C mission by Intuitive Machines. This set of images, recorded at 10 Hz, is accompanied by the trajectory in Figure (4.17). The trajectory encompasses only the Hazard Detection and Avoidance section; it starts near the 400 m mark and descends to 150 m, where the HDA module provides the navigation system with the metrics to make the final decision on where to land. This maneuver spans around 10 seconds.

Figure 4.17 HDA Lunar approach trajectory. The color scale indicates the correspondence of the image file to the actual trajectory data (Hong & Christian, 2020).

4.4.1. Image Enhancement Following the process mentioned in Chapter 3, the original image displayed in Figure (4.18a) is loaded in the system, and then it is enhanced using the adaptive histogram equalization method.

(a) RPI Set 2 Original image. (b) RPI Set 2 Enhanced image.

Figure 4.18 Synthetic Image (Hong & Christian, 2020).

The outcome of this process, as in Figure (4.18b), is an image with increased contrast, particularly on details such as hazard features; e.g., the roughness and surface texture are highlighted, providing more information for the feature detection section. The final step of the enhancement process is to square the image as in Figure (4.19). The original image is not square and reflects the real resolution of the actual camera.

Figure 4.19 RPI Set 2 Squared Image.

4.4.2. Image Processing and Computer Vision
Continuing with the algorithm, the image is now ready to be analyzed. The first segmentation is the superpixels process. An initial layer is obtained as in Figure (4.20a). This layer is then refined as in Figure (4.20b) to improve the quality of the superpixels.

(a) RPI Set 2 Superpixels detection. (b) RPI Set 2 Refined Superpixels.

Figure 4.20 Superpixels segmentation process.

From each superpixel, information such as the mean intensity, size, and centroid is calculated. Then, this information is used to create a mask label as in Figure (4.21) with the initially selected superpixels.

Figure 4.21 RPI Set 2 Labels of possible landing areas.

Ultra-bright areas are omitted, as they could indicate a slope reflecting more sunlight. Dark areas are avoided, as they could represent a deep crater or hide other unobservable features in the shadow. In this particular case, the shadows and craters are omitted the most. The thresholds can be adjusted for each particular case to account for multiple illumination conditions. The identified superpixels that meet the criteria are individually analyzed for features using the quadtree occupancy grid map. This quadtree subdivision clearly tracks features such as rocks, craters, and changes in surfaces. Finally, a complete map displayed in Figure (4.22a) is made of the multiple quadtree zones, providing ideal areas for an initial landing site designation. These can be graphically identified for the viewer in Figure (4.22b), as the computer uses the binary occupancy grid map for the calculations. For the RPI case, the altitude of the spacecraft at the time of the image is ≈ 400 m, so the outcome can be used to designate an ideal final landing site. These superpixels comply with avoiding dark spots and include at least an area where the VFDE fits, as seen in Figures (4.23a, 4.23b). The next step is to perform the SfM analysis to build the point cloud portrayed in Figures (4.24a, 4.24b, 4.25a) and calculate the slope and roughness on the superpixels that meet all the criteria, as in Figure (4.25b).
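The slope and roughness of each candidate site can be obtained from the local point-cloud ROI with a least-squares plane fit; the sketch below uses an SVD and assumes the points form an N × 3 array expressed in a frame whose z-axis is the local vertical. It is an illustration of the technique, not the thesis implementation.

    # Sketch: slope and roughness of a local point-cloud ROI from a
    # least-squares (SVD) plane fit; assumes pts is N x 3 with z vertical.
    import numpy as np

    def slope_and_roughness(pts):
        centroid = pts.mean(axis=0)
        # The singular vector of the smallest singular value is the normal.
        _, _, vt = np.linalg.svd(pts - centroid)
        normal = vt[-1]
        if normal[2] < 0:
            normal = -normal                          # point the normal up
        slope_deg = np.degrees(np.arccos(normal[2]))  # tilt from vertical
        residuals = (pts - centroid) @ normal         # distances to plane
        roughness = np.abs(residuals).max()           # worst deviation [m]
        return slope_deg, roughness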

(a) RPI Set 2 Occupancy Grid map. (b) RPI Set 2 Map Overlay.

Figure 4.22 Occupancy grid mapping and feature detection.

(a) RPI Set 2 Refined grid with only areas that fit the VFDE. (b) RPI Set 2 Overlay of refined Occupancy Grid.

Figure 4.23 Areas that fit the VFDE.
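Checking where the VFDE fits can be cast as a morphological erosion of the binary safe-area mask by a disk the size of the lander footprint: a pixel survives only if the entire footprint around it is safe. The sketch below is one possible realization under that assumption, not necessarily how the thesis code performs this check; the mask name and radius are placeholders.

    # Sketch: areas where the VFDE fits, via erosion of the safe-area mask
    # with a disk-shaped kernel; mask name and radius are assumptions.
    import cv2
    import numpy as np

    def vfde_fit(safe_mask, radius_px):
        d = 2 * radius_px + 1
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (d, d))
        # Erosion keeps a pixel only if the whole footprint around it is safe.
        return cv2.erode(safe_mask.astype(np.uint8), kernel)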

(a) Lunar surface point cloud, near-top view. (b) Lunar surface point cloud, side view.

Figure 4.24 Red boxes represent the camera pose and the number corresponds to the image. Note: Axes are for reference, not to scale.

(a) Lunar surface point cloud, angled bottom view. (b) Example of local point cloud ROI selection.

Figure 4.25 Red boxes represent the camera pose and the number corresponds to the image. Note: Axes are for reference, not to scale.

Finally, the metrics are calculated using a linear interpolation between the allowable ranges and the corresponding maximum criteria limits. From this table, the metrics of the best five sites are passed down to the navigation sub-system of the NOVA-C lander into its own cost function. Table 4.1 below shows an example of ten different sites from Figure (4.23b); five are selected to be sent to the navigation module (for this case, a total of 49 sites were identified).

Table 4.1

Example of Landing Site Scoring Metrics. Areas have been filtered to include only those larger than the VFDE area; therefore, the area metric is 10 for all of them.

Site   Area [m²]   Metric   Slope [deg]   Metric   Roughness [m]   Metric
1      563.50      10       2.53          8        0.135           6
2      495.84      10       3.23          7        0.149           6
3      418.88      10       3.48          7        0.175           5
4      413.76      10       3.74          6        0.235           3
5      445.57      10       4.13          6        0.135           6
6      563.50      10       4.28          6        0.256           2
7      563.50      10       4.56          5        0.345           1
8      482.06      10       5.39          4        0.453           1
9      515.72      10       7.12          3        0.128           6
10     360.21      10       8.12          3        0.456           1
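For reference, the linear-interpolation scoring described above can be sketched as follows; the allowable-range limits are assumptions chosen for illustration and do not necessarily reproduce the rounded metrics in Table 4.1.

    # Sketch of the linear-interpolation metric: a hazard value is mapped
    # from its allowable range to a 10 (best) - 1 (worst) score; the range
    # limits below are illustrative assumptions.
    import numpy as np

    def metric(value, worst_limit, best_limit=0.0):
        # np.interp clips outside [best_limit, worst_limit], so values past
        # the worst limit score 1.
        return np.interp(value, [best_limit, worst_limit], [10.0, 1.0])

    slope_metric = metric(2.53, worst_limit=10.0)   # slope in degrees
    rough_metric = metric(0.135, worst_limit=0.30)  # roughness in meters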

5. Conclusions and Future Work
A simple but effective open-source algorithm towards the implementation of a fully autonomous Hazard Detection and Avoidance system for space applications is presented in this work. This work will be configured to satisfy the requirements of the Intuitive Machines NOVA-C Lunar lander. Attention has been focused on segmentation methods for feature identification that have not previously been used in similar space applications.

5.1. Hazard Detection and Avoidance
This method has proven to yield reliable results once it is tailored and tuned for the specific application. In every test, the algorithm provided at least one safe landing site under the requirements of the NOVA-C Lunar lander (Table 4.1). The segmentation performed by superpixels improved the speed of the analysis by reducing the number of pixels to be examined, proving their usefulness in space applications with low computing capabilities. Instead of analyzing 6,553,600 pixels (2560 × 2560) each time, the algorithm analyzed between 75 and 150 superpixels sharing similar characteristics. The quadtree approach for feature identification proved a reliable method to create a map of where not to go during the terminal descent. The occupancy map based on the fractal strategy is a reliable technique to define safe areas. Compared to other methods that require identifying features using different masks, some for rocks and others for craters, the occupancy grid does not require any previous knowledge of the appearance of a crater or a rock. This mapping can autonomously avoid them by marking them on the map when variations in the dynamic range are detected. The combination of the superpixels with the quadtree proved that they are computationally effective methods (MATLAB execution time of 2.61 s on a 3.1 GHz i7 with 16 GB of RAM; this code is being optimized and refined for implementation on the actual space-qualified OBC for the NOVA-C lander). They run only on the desired regions of interest based on the safe landing criteria. This approach reduced the amount of data that needs to be processed, which is one of the problems that the implementation must overcome. It is important to emphasize that, due to the amount of data to process, a separate crater detection method had to be removed from the main pipeline, as it required a large amount of computing power. Part of the future work will be to investigate other novel methods (ANNs, CNNs, AI, etc.) that could improve this detection. This implementation also demonstrated the effectiveness of using low-cost space-qualified sensors. The application remains sensor-agnostic, and active sensors such as LiDAR can be introduced as well; the only change in the system would be the source of the point cloud. Finally, open-source tools such as OpenCV with Python enabled efficient and robust code development, allowing the code to be used directly without extensive tailoring.

5.2. Landing Site Scoring
For this implementation, a simple metric system based on goal achievement was implemented. Although optimal and better solutions exist, they are also computationally expensive, introducing a challenge in the implementation of a complete HDA system on a spacecraft. This simple cost function demonstrated its practicality in providing the navigation system with enough information to guarantee the safety of the lander when performing Hazard Detection and Avoidance maneuvers in the terminal descent phase. In the second set of RPI images, multiple landing sites were detected and classified with the metric, providing multiple safe landing spots. The optimal solution from these metrics is yet to be determined by the automated flight navigation system, which takes into account the fuel consumption as well.

5.3. Open-Source Repository
This algorithm will be open-source and available for anyone to use. The repository will be initially set up by Intuitive Machines, and the license is currently being defined. It will also include other uses of parts of the algorithm for different space-related tasks such as ARPOD, OD, and Surface Mobility.

5.4. Future Work
To conclude this work, several improvements can be made to this algorithm to make it cutting-edge technology:

• The fractal approach using the quadtree decomposition to identify features might be replaceable with a k-tree implementation making the occupancy grid more efficient.

• Replace the plane fitting with a recursive adaptive octree mapping to create a more accurate height map and obtain a better approximation of the terrain than estimating a plane on a surface. An example of this application being tested on UAVs can be reviewed in (Vergara, 2019). This could also be used as the occupancy grid, reducing the size of the implementation.

• Testing this algorithm on UAVs in places humans cannot reach due to hazards, for example a collapsed building.

• Development of a forked library from OpenCV adapted for space applications.

• Testing of newer segmentation methods such as Convolutional Neural Networks (CNN), Artificial Neural Networks (ANN), and techniques of deep learning, machine learning, and artificial intelligence.

• Development of synthetic imagery (digital or recreated in a lab) to increase the variety of test cases, such as the lunar south pole.

• Include dynamics of spacecraft to perform simulations that incorporate other factors such as estimation, noise rejection, fuel consumption, and trajectory planning.

• Testing of different cost functions. Additionally, consider other variables such as the fuel and speed of the lander.

• Set up a secondary repository as a backup for the original repository created by Intuitive Machines.

REFERENCES

Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., & Süsstrunk, S. (2012). SLIC superpixels compared to state-of-the-art superpixel methods. IEEE transactions on pattern analysis and machine intelligence, 34 (11), 2274-2282.

Adams, D., Criss, T. B., & Shankar, U. J. (2008). Passive optical terrain relative navigation using APLNav. In 2008 ieee aerospace conference (p. 1-9).

Allan, M., Wong, U., Furlong, P. M., Rogg, A., McMichael, S., Welsh, T., . . . others (2019). Planetary rover simulation for lunar exploration missions. In 2019 ieee aerospace conference (p. 1-19).

Balaram, J., Austin, R., Banerjee, P., Bentley, T., Henriquez, D., Martin, B., . . . Sohl, G. (2002). DSENDS-a high-fidelity dynamics and spacecraft simulator for entry, descent and surface landing. In Proceedings, ieee aerospace conference (Vol. 7, p. 7-7).

Berry, K., Sutter, B., May, A., Williams, K., Barbee, B. W., Beckman, M., & Williams, B. (2013). OSIRIS-REx touch-and-go (TAG) mission design and analysis.

Bibring, J.-P., Rosenbauer, H., Boehnhardt, H., Ulamec, S., Biele, J., Espinasse, S., . . . others (2007). The ROSETTA lander (“PHILAE”) investigations. Space science reviews, 128 (1-4), 205-220.

Bradski, G. (2000). The OpenCV Library. Dr. Dobb’s Journal of Software Tools.

Brady, T., Robertson, E., Epp, C., Paschall, S., & Zimpfer, D. (2009). Hazard detection methods for lunar landing. In 2009 ieee aerospace conference (p. 1-8).

Brady, T., & Schwartz, J. (2007). ALHAT system architecture and operational concept. In 2007 ieee aerospace conference (p. 1-13).

Braun, R. D., & Manning, R. M. (2006). Mars exploration entry, descent and landing challenges. In 2006 ieee aerospace conference (p. 18-pp).

Calonder, M., Lepetit, V., Strecha, C., & Fua, P. (2010). Brief: Binary robust independent elementary features. In European conference on computer vision (p. 778-792).

Carson, J. M., Trawny, N., Robertson, E., Roback, V. E., Pierrottet, D., Devolites, J., . . . Estes, J. N. (2014). Preparation and integration of ALHAT precision landing technology for Morpheus flight testing. In Aiaa space 2014 conference and exposition (p. 4313).

Cheng, Y., Johnson, A. E., Matthies, L. H., & Wolf, A. A. (2001). Passive imaging based hazard avoidance for spacecraft safe landing.

Chin, G., Brylow, S., Foote, M., Garvin, J., Kasper, J., Keller, J., . . . others (2007). Lunar reconnaissance orbiter overview: The instrument suite and mission. Space Science Reviews, 129 (4), 391-419.

Christensen, P., Engle, E., Anwar, S., Dickenshied, S., Noss, D., Gorelick, N., & Weiss-Malik, M. (2009). JMARS-a planetary GIS. AGUFM , 2009 , IN22A-06.

Christian, J., & Lightsey, E. G. (2012). An on-board image processing algorithm for a spacecraft optical navigation sensor system. In Aiaa space 2010 conference & exposition (p. 8920).

Cohen, J. P., Lo, H. Z., Lu, T., & Ding, W. (2016). Crater detection via convolutional neural networks. arXiv preprint arXiv:1601.00978 .

Collins, T., & Collins, J. (2007). Occupancy grid mapping: An empirical evaluation. In 2007 mediterranean conference on control & automation (p. 1-6).

Crane, E. S. (2014). Vision-Based Hazard Estimation during Autonomous Lunar Landing (Unpublished doctoral dissertation). Stanford University.

Cui, P., Ge, D., & Gao, A. (2017). Optimal landing site selection based on safety index during planetary descent. Acta Astronautica, 132 , 326-336.

Daines, G. (2019, Mar). Commercial Lunar Payload Services. Retrieved from https://www.nasa.gov/content/commercial-lunar-payload-services

Dynetics. (2020). Dynetics To Develop Artemis Human Lunar Landing System. Retrieved from https://www.dynetics.com/newsroom/news/2020/ dynetics-to-develop-nasas-artemis-human-lunar-landing-system

Elfes, A. (1989). Using occupancy grids for mobile robot perception and navigation. Computer, 22 (6), 46-57.

Emami, E., Bebis, G., Nefian, A., & Fong, T. (2015). Automatic crater detection using convex grouping and convolutional neural networks. In International symposium on visual computing (p. 213-224).

Epic Games. (2019). Unreal Engine — What is Unreal Engine 4. Unreal Engine. Retrieved from https://www.unrealengine.com/en-US/

Epp, C. D., Robertson, E. A., & Ruthishauser, D. K. (2013). Helicopter Field Testing of NASA’s Autonomous Landing and Hazard Avoidance Technology (ALHAT) System fully Integrated with the Morpheus Vertical Test Bed Avionics.

Epp, C. D., & Smith, T. B. (2007). Autonomous precision landing and hazard detection and avoidance technology (ALHAT). In 2007 ieee aerospace conference (p. 1-7).

Ferri, F., Ahondan, A., Colombatti, G., Bettanini, C., Bebei, S., Karatekin, O., . . . others (2017). Atmospheric mars entry and landing investigations & analysis (AMELIA) by ExoMars 2016 Schiaparelli Entry Descent module: The ExoMars entry, descent and landing science. In 2017 ieee international workshop on metrology for aerospace (metroaerospace) (p. 262-265).

Garner, R. (2015, February). OSIRIS-REx. Retrieved from http://www.nasa.gov/osiris-rex

Gibney, E. (2019). Israeli spacecraft Beresheet crashes into the Moon. Nature, 568 (7752), 286-287.

Golombek, M., & Rapp, D. (1997). Size-frequency distributions of rocks on Mars and Earth analog sites: Implications for future landed missions. Journal of Geophysical Research: Planets, 102 (E2), 4117-4129.

Golombek, M. P., Haldemann, A., Forsberg-Taylor, N., Dimaggio, E. N., Schroeder, R., Jakosky, B., . . . Matijevic, J. (2003). Rock size-frequency distributions on mars and implications for mars exploration rover landing safety and operations. Journal of Geophysical Research: Planets, 108 (E12).

Grush, L. (2019a, April). Israeli team says it will build a second lunar lander after its first one crashed into the Moon. Retrieved from https://www.theverge.com/2019/ 4/15/18311238/israel-spaceil-beresheet-lunar-lander-second-try

Grush, L. (2019b, November). NASA partners with SpaceX, Blue Origin, and more to send large payloads to the Moon. Retrieved from https://www.theverge.com/2019/11/18/20971307/ nasa-clps-program--blue-origin-sierra-nevada-ceres-tyvak--rover

Gurgaon. (2019). ISRO Chandrayaan 2 Mission: Vikram Lander Location Found, Hope of Mission Being Successful Lives On. Dataquest. Retrieved from http://ezproxy.libproxy.db.erau.edu/login?url=https://search-proquest-com .ezproxy.libproxy.db.erau.edu/docview/2286836444?accountid=27203

Häming, K., & Peters, G. (2010). The structure-from-motion reconstruction pipeline - a survey with focus on short image sequences. Kybernetika, 46 (5), 926-937.

Hart, W., Brown, G. M., Collins, S. M., Pich, M. D. S.-S., Fieseler, P., Goebel, D., . . . others (2018). Overview of the spacecraft design for the Psyche mission concept. In 2018 ieee aerospace conference (p. 1-20).

Hirshorn, S., & Jefferies, S. (2016). Final Report of the NASA Technology Readiness Assessment (TRA) Study Team.

Hong, L., & Christian, J. (2020). NOVA-C imagery dataset.

Huertas, A., Cheng, Y., & Madison, R. (2006). Passive imaging based multi-cue hazard detection for spacecraft safe landing. In 2006 ieee aerospace conference (p. 14-pp).

ISRO. (2017). Chandrayaan2 spacecraft. Retrieved from https://www.isro.gov.in/chandrayaan2-spacecraft

Jahn, H. (1994). Crater detection by linear filters representing the Hough Transform. In Isprs commission iii symposium: spatial information from digital photogrammetry and computer vision (Vol. 2357, p. 427-431).

Johnson, A., & Ivanov, T. (2011). Analysis and testing of a lidar-based approach to terrain relative navigation for precise lunar landing. In Aiaa guidance, navigation, and control conference (p. 6578).

Johnson, A. E., Huertas, A., Werner, R. A., & Montgomery, J. F. (2008). Analysis of on-board hazard detection and avoidance for safe lunar landing. In 2008 ieee aerospace conference (p. 1-9).

Johnson, A. E., Keim, J. A., & Ivanov, T. (2010). Analysis of flash lidar field test data for safe lunar landing. In 2010 ieee aerospace conference (p. 1-11).

Johnson, A. E., Klumpp, A. R., Collier, J. B., & Wolf, A. A. (2002). Lidar-based hazard avoidance for safe landing on Mars. Journal of guidance, control, and dynamics, 25 (6), 1091-1099.

Karami, E., Prasad, S., & Shehata, M. (2017). Image matching using SIFT, SURF, BRIEF and ORB: performance comparison for distorted images. arXiv preprint arXiv:1710.02726 .

Kawaguchi, J., Fujiwara, A., & Uesugi, T. (2008). Hayabusa—Its technology and science accomplishment summary and Hayabusa-2. Acta Astronautica, 62 (10-11), 639-647.

Keys, A., Adams, J., Frazier, D., Patrick, M., Watson, M., Johnson, M., . . . Kolawa, E. (2007). Developments in radiation-hardened electronics applicable to the vision for space exploration. In Aiaa space 2007 conference & exposition (p. 6269).

Koenderink, J. J., & Van Doorn, A. J. (1991). Affine structure from motion. JOSA A, 8 (2), 377-385.

Kraetzschmar, G. K., Gassull, G. P., Uhl, K., Pags, G., & Uhl, G. K. (2004). Probabilistic quadtrees for variable-resolution mapping of large environments. In Proceedings of the 5th ifac/euron symposium on intelligent autonomous vehicles (p. 1-6).

Leroy, B., Medioni, G., Johnson, E., & Matthies, L. (2001). Crater detection for autonomous landing on asteroids. Image and Vision Computing, 19 (11), 787-792.

Li, B., Ling, Z., Zhang, J., & Chen, J. (2017). Rock size-frequency distributions analysis at lunar landing sites based on remote sensing and in-situ imagery. Planetary and Space Science, 146 , 30-39.

Lunar Planetary Institute. (2007). Rock Distributions on the Lunar Surface - LunaRef. Retrieved from https://www.lpi.usra.edu/wiki/lunaref/index.php/Rock_Distributions_on_the_Lunar_Surface

Lunghi, P. (2017). Hazard detection and avoidance systems for autonomous planetary landing.

Mathworks. (2020). Image Processing Toolbox™ User's Guide R2020a. Retrieved from https://www.mathworks.com/help/pdf_doc/images/images_ug.pdf

Matthies, L., Huertas, A., Cheng, Y., & Johnson, A. (2008). Stereo vision and shadow analysis for landing hazard detection. In 2008 ieee international conference on robotics and automation (p. 2735-2742).

McEwen, A. S., Eliason, E. M., Bergstrom, J. W., Bridges, N. T., Hansen, C. J., Delamere, W. A., . . . others (2007). Mars reconnaissance orbiter’s high resolution imaging science experiment (HiRISE). Journal of Geophysical Research: Planets, 112 (E5).

Mester, R., Conrad, C., & Guevara, A. (2011). Multichannel segmentation using contour relaxation: fast super-pixels and temporal propagation. In Scandinavian conference on image analysis (p. 250-261).

Mori, G., Ren, X., Efros, A. A., & Malik, J. (2004). Recovering human body configurations: Combining segmentation and recognition. In Proceedings of the 2004 ieee computer society conference on computer vision and pattern recognition, 2004. cvpr 2004. (Vol. 2, p. II-II).

NASA. (2019). What is Artemis? Retrieved from https://www.nasa.gov/what-is-artemis

NASA Astrobiology Blog. (2020). NASA Perseverance Mars Rover Scientists Train in the Nevada Desert May 8, 2020. Retrieved from http://ezproxy.libproxy.db.erau.edu/login?url=https://search-proquest-com .ezproxy.libproxy.db.erau.edu/docview/2399695339?accountid=27203

NASA/GSFC/Arizona State University. (2005). Mars Reconnaissance Orbiter. Retrieved from https://mars.nasa.gov/mro/

NASA/GSFC/Arizona State University. (2009). Lunar Reconnaissance Orbiter Camera. Retrieved from http://lroc.sese.asu.edu/images/

NASA/JPL/ASU. (2018). HiRISE — Candidate Recent Impact Site (ESP_054035_1915). Retrieved from https://www.uahirise.org/ESP_054035_1915

Neukum, G., König, B., & Arkani-Hamed, J. (1975). A study of lunar impact crater size-distributions. The Moon, 12 (2), 201-229.

Neveu, D., Mercier, G., Hamel, J.-F., Bilodeau, V. S., Woicke, S., Alger, M., & Beaudette, D. (2015). Passive versus active hazard detection and avoidance systems. CEAS Space Journal, 7 (2), 159-185.

NBC News. (2018). Japan's Hayabusa 2 space probe snuggles up to odd-looking asteroid. Retrieved from https://www.nbcnews.com/mach/science/japan-s-hayabusa-2-space-probe-snuggles-odd-looking-asteroid-ncna886446

Nistér, D. (2005). Preemptive RANSAC for live structure and motion estimation. Machine Vision and Applications, 16 (5), 321-329.

Open Source Robotics Foundation. (2014). Gazebo. Retrieved from http://gazebosim.org

Parkes, S., Martin, I., Dunstan, M., & Matthews, D. (2004). Planet surface simulation with PANGU. In Space ops 2004 conference (p. 389).

Reza, A. M. (2004). Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. Journal of VLSI signal processing systems for signal, image and video technology, 38 (1), 35-44.

Rhodes, A. P., Christian, J. A., & Evans, T. (2017). A concise guide to feature histograms with applications to LIDAR-based spacecraft relative navigation. The Journal of the Astronautical Sciences, 64 (4), 414-445.

Riris, H., Sun, X., Cavanaugh, J. F., Ramos-Izquierdo, L., Liiva, P., Jackson, G. B., . . . Smith, D. E. (2008). The lunar orbiter laser altimeter (LOLA) on NASA’s lunar reconnaissance orbiter (LRO) mission. In Conference on lasers and electro-optics (p. CMQ1).

Robinson, M., Brylow, S., Tschimmel, M., Humm, D., Lawrence, S., Thomas, P., . . . others (2010). Lunar reconnaissance orbiter camera (LROC) instrument overview. Space science reviews, 150 (1-4), 81-124.

Rosten, E., & Drummond, T. (2006). Machine learning for high-speed corner detection. In European conference on computer vision (p. 430-443).

Rublee, E., Rabaud, V., Konolige, K., & Bradski, G. (2011). ORB: An efficient alternative to SIFT or SURF. In 2011 international conference on computer vision (p. 2564-2571).

Rutishauser, D., Epp, C., & Robertson, E. (2012). Free-flight terrestrial rocket lander demonstration for NASA's autonomous landing and hazard avoidance technology (ALHAT) system. In Aiaa space 2012 conference & exposition (p. 5239).

Salamuniccar, G., & Loncaric, S. (2010). Method for crater detection from martian digital topography data using gradient value/orientation, morphometry, vote analysis, slip tuning, and calibration. IEEE transactions on Geoscience and Remote Sensing, 48 (5), 2317-2329.

Schonberger, J. L., & Frahm, J.-M. (2016). Structure-from-motion revisited. In Proceedings of the ieee conference on computer vision and pattern recognition (p. 4104-4113).

Shyldkrot, H., Shmidt, E., Geron, D., Kronenfeld, J., Loucks, M., Carrico, J., . . . Taylor, J. (2019, August). The first commercial lunar lander mission: Beresheet. In 2019 AAS/AIAA Astrodynamics Specialist Conference. Portland, ME.

Siciliano, B., & Khatib, O. (2016). Springer Handbook of Robotics. Springer.

Simões, L. F., Pais, T. C., Ribeiro, R. A., Jonniaux, G., & Reynaud, S. (2009). Search methodologies for efficient planetary site selection. In 2009 ieee congress on evolutionary computation (p. 1981-1987).

Sinclair, A. J., & Fitz-Coy, N. G. (2003). Comparison of obstacle avoidance strategies for Mars landers. Journal of spacecraft and rockets, 40 (3), 388-395.

Smirnov, A. A. (2002). Exploratory study of automated crater detection algorithm. Boulder, CO.

SpaceNews. (2020). Intuitive Machines selects landing site for CLPS mission. Retrieved from https://spacenews.com/intuitive-machines-selects-landing-site-for-clps-mission/

Stutz, D., Hermans, A., & Leibe, B. (2018). Superpixels: An evaluation of the state-of-the-art. Computer Vision and Image Understanding, 166 , 1-27.

Szeliski, R. (2010). Computer Vision: Algorithms and Applications. Springer Science & Business Media.

TechCrunch. (2020). NASA selects Masten Space Systems to deliver cargo to the Moon in 2022. Retrieved from https://social.techcrunch.com/2020/04/08/ nasa-selects-masten-space-systems-to-deliver-cargo-to-the-moon-in-2022/

Torr, P. H., & Zisserman, A. (2000). MLESAC: A new robust estimator with application to estimating image geometry. Computer vision and image understanding, 78 (1), 138-156.

Torr, P. H. S., & Davidson, C. (2003). IMPSAC: Synthesis of importance sampling and random sample consensus. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25 (3), 354-364.

van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., Yager, N., . . . the scikit-image contributors (2014, June). scikit-image: image processing in Python. PeerJ, 2, e453. Retrieved from https://doi.org/10.7717/peerj.453 doi: 10.7717/peerj.453

Vedaldi, A., Jin, H., Favaro, P., & Soatto, S. (2005). KALMANSAC: Robust filtering by consensus. In Tenth ieee international conference on computer vision (iccv’05) volume 1 (Vol. 1, p. 633-640).

Vergara, P. L. (2019). Adaptive learning terrain estimation for unmanned aerial vehicle applications.

Villalpando, C. Y., Johnson, A. E., Some, R., Oberlin, J., & Goldberg, S. (2010). Investigation of the tilera processor for real time hazard detection and avoidance on the lunar lander. In 2010 ieee aerospace conference (p. 1-9).

Vinogradova, T., Burl, M., & Mjolsness, E. (2002). Training of a crater detection algorithm for Mars crater imagery. In Proceedings, ieee aerospace conference (Vol. 7, p. 7-7).

Westoby, M. J., Brasington, J., Glasser, N. F., Hambrey, M. J., & Reynolds, J. M. (2012). 'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology, 179, 300-314.

Wlodarczyk, S. J. (2014). Advanced Inspection System: Integrating Robotic Configurations & Controls.

Woicke, S., Moreno Gonzalez, A., El-Hajj, I., Mes, J., Henkel, M., & Klavers, R. (2018). Comparison of crater-detection algorithms for terrain-relative navigation. In 2018 aiaa guidance, navigation, and control conference (p. 1601).

Yu, M., Cui, H., & Tian, Y. (2014). A new approach based on crater detection and matching for visual navigation in planetary landing. Advances in Space Research, 53 (12), 1810-1821.

Zurek, R. W., & Smrekar, S. E. (2007). An overview of the Mars Reconnaissance Orbiter (MRO) science mission. Journal of Geophysical Research: Planets, 112 (E5).