Lunar Terrain Relative Navigation Using a Convolutional Neural Network for Visual Crater Detection
Lena M. Downes1, Ted J. Steiner2 and Jonathan P. How3

arXiv:2007.07702v1 [cs.CV] 15 Jul 2020

Abstract— Terrain relative navigation can improve the precision of a spacecraft's position estimate by detecting global features that act as supplementary measurements to correct for drift in the inertial navigation system. This paper presents a system that uses a convolutional neural network (CNN) and image processing methods to track the location of a simulated spacecraft with an extended Kalman filter (EKF). The CNN, called LunaNet, visually detects craters in the simulated camera frame, and those detections are matched to known lunar craters in the region of the current estimated spacecraft position. These matched craters are treated as features that are tracked using the EKF. LunaNet enables more reliable position tracking over a simulated trajectory due to its greater robustness to changes in image brightness and its more repeatable crater detections from frame to frame throughout a trajectory. LunaNet combined with an EKF produces a 60% decrease in average final position estimation error and a 25% decrease in average final velocity estimation error compared to an EKF using an image-processing-based crater detection method when tested on trajectories using images of standard brightness.

I. INTRODUCTION

Previous lunar missions have relied mostly on inertial navigation methods that time-integrate measurements from an inertial measurement unit (IMU) to estimate spacecraft position and velocity. As such, past lunar landing errors have been as high as several kilometers [1], because inertial navigation systems drift and accumulate error over time. However, many of the locations of interest for future lunar landing missions are surrounded by or are close to hazardous terrain, motivating the requirement for increased estimation precision throughout the mission. One method to reduce estimation error is terrain relative navigation (TRN) [2]. TRN obtains measurements of the terrain around a vehicle and uses those measurements to estimate the vehicle's position. A camera-based TRN system is an especially appealing option for space applications due to the availability and low cost of space-rated cameras, as well as the rich data they provide.

Previous works on lunar TRN systems have developed crater detectors based upon traditional image processing methodologies such as thresholding, edge detection, and filtering [3]. These crater detectors are highly sensitive to lighting conditions, noise, camera parameters, and viewing angles, and require precise modeling and specific tuning for different sets of images. Neural networks have been successfully applied as a robust solution to many computer vision problems due to their strength in generalizing from one dataset to another, but have rarely been used in space applications due to a perceived lack of confidence in their measurements [4]. We have previously developed a system that visually detects craters in a camera image using a neural network and matches those detected craters to a database of known lunar craters with absolute latitudes and longitudes [5]. This crater detector is called LunaNet. This work explores the inclusion of LunaNet's measurements in a navigation system. LunaNet's repeatable detections from frame to frame across a trajectory and its increased robustness to changes in camera lighting conditions enable an EKF with more precise position and velocity estimation and more reliable performance in variable lighting conditions.

II. RELATED WORK

TRN has been studied for years as a method to improve the landing accuracy of lunar landers [2], [6]. A common focus in the TRN literature is to detect craters and use them as landmarks for localization. Many crater locations have already been catalogued, since craters are present on nearly the entire lunar surface [7]. In addition, a detected crater's location can be uniquely identified based on the local constellation of nearby detected craters.

Previous crater-based TRN systems utilized traditional image processing techniques to detect craters from visual imagery. However, these approaches are typically not robust to changes in lighting conditions, viewing angles, and camera parameters. Recent work has demonstrated that automated crater detection can achieve significant improvements in robustness by applying advances from the fields of computer vision and deep learning. CraterIDNet [8] applied a fully convolutional network (FCN) [9] to both visually detect craters in images and match those detected craters to known craters. An FCN is a type of convolutional neural network (CNN) that filters an image input using convolutional layers. Unlike our work, CraterIDNet focused on crater detection for the purpose of cataloguing craters; therefore it had low requirements for the precision of its crater detections. Our research focuses on repeatable detections over a trajectory for navigation, and for that reason we maintain strict requirements for the precision of crater detection.

1Lena M. Downes is with the Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA, and is a Draper Fellow with the Perception and Autonomy Group, Draper, Cambridge, MA 02139, USA [email protected]
2Ted J. Steiner is with the Perception and Autonomy Group, Draper, Cambridge, MA 02139, USA [email protected]
3Jonathan P. How is with the Faculty of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
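The role that matched craters play in the filter, position-like measurements that correct accumulated inertial drift, can be sketched with a minimal planar Kalman update. The state layout, the noise values, and the assumption that a matched crater yields a direct position fix are illustrative simplifications for this sketch, not the paper's actual filter design.

```python
import numpy as np

# Minimal planar filter: state = [x, y, vx, vy].
# Inertial-only propagation accumulates drift; a matched catalog crater
# supplies a position-like measurement z that corrects the estimate.
# All noise values below are assumed for illustration.

dt = 1.0
F = np.array([[1., 0., dt, 0.],
              [0., 1., 0., dt],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
H = np.array([[1., 0., 0., 0.],        # measure position only
              [0., 1., 0., 0.]])
Q = 1e-3 * np.eye(4)                   # process noise (assumed)
R = 1e-2 * np.eye(2)                   # crater-fix noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)            # correct with crater measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Drifted estimate, then one crater-derived position fix near (2, 1):
x = np.array([50., -30., 0., 0.])
P = np.diag([100., 100., 1., 1.])      # large position uncertainty
x, P = predict(x, P)
x, P = update(x, P, np.array([2., 1.]))
```

With a large prior position uncertainty and a tight measurement noise, the gain drives the position estimate almost entirely toward the crater-derived fix, which is the drift-correction effect described above.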
[email protected] Research funded by Draper and computation support through Amazon Web Services.

Another recent system, DeepMoon [10], applied a CNN to detect craters from a digital elevation map (DEM) represented as overhead imagery. DeepMoon used a U-net [11] style architecture to perform pixel-wise classification of craters. DEM imagery has significantly different micro-scale variation than camera imagery because it is not affected by lighting effects such as glare and shadowing. In contrast, LunaNet uses camera images and thus must accommodate shadows and other forms of visual noise, which introduces additional detection complexities. Moreover, DeepMoon did not focus on identifying or matching the craters it detected to known craters.

Crater identification, or the matching of detected craters to known craters, has been explored in terms of two main types of problems. One problem is crater identification in a "lost-in-space" scenario, where no knowledge of spacecraft position is used to match detected craters to known craters. The other problem is crater identification with some information available about the spacecraft position, but without full confidence in that position information. This paper addresses the latter problem, which [3] attempted to solve with a simple method: determining which craters would be in the camera field of view if the position information were correct, projecting these craters into image coordinates to obtain expected craters, and matching detected craters to the closest expected crater using least mean squares. This approach was successful if the position information provided was very close to the true position, but resulted in a large number of mismatches otherwise. An expansion upon the crater identification method of [3] was explored by [12]. This approach incorporated random sample consensus (RANSAC) [13] to check whether the translation vectors between detected craters and known craters were consistent in images where multiple craters were detected. If a crater match produced a translation vector that was an outlier from the consensus of the other matches, it was rejected as a mismatch.

LunaNet uses a CNN and image processing methods to analyze camera imagery of the lunar surface simulated over orbital trajectories around the Moon. Its architecture was based on that of DeepMoon [10], which detected lunar craters from elevation imagery. A U-net [11] architecture was appealing because it is used for semantic segmentation, i.e., the detection and localization of distinct objects in an image with pixel-level resolution. The crater detection problem is a semantic segmentation problem, where the objects that need to be detected and localized are craters. Low-level predictions of where the crater rims lie are needed in order to obtain precise crater detections. Semantic segmentation enables the labeling of each pixel in an image as crater or not crater, and therefore provides high-precision labels of crater locations.

LunaNet's CNN was initialized with the weights from DeepMoon and was additionally trained on 800 Lunar Reconnaissance Orbiter Camera (LROC) [14]–[17] intensity images for ten epochs. Warm starting from the neural network weights of [10] was appealing because their network was trained on many lunar elevation images. Although intensity imagery has a slightly different appearance than elevation imagery, additional training on intensity imagery could account for these differences. The training images were obtained from the LROC Wide Angle Camera (WAC) Global Morphology Mosaic with a resolution of 100 meters per pixel. This mosaic is a simple cylindrical map projection comprising over 15,000 images from -90° to 90° latitude and all longitudes. The input training images are cropped from the mosaic using methods similar to those of DeepMoon: randomly cropping a square area of the mosaic, downsampling the image to 256×256 pixels, and transforming the image to orthographic projection. Each cropped mosaic training image
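The simple identification scheme of [3] described above, projecting catalog craters into the image at the estimated pose to obtain expected craters and pairing each detection with the nearest one, can be sketched as follows. The projection step is abstracted away (the expected image coordinates are taken as given), and the function name and gating threshold are illustrative assumptions.

```python
import math

def match_craters(detected, expected, gate=10.0):
    """Pair each detected crater with the nearest expected (projected
    catalog) crater, keeping only pairs closer than `gate` pixels.

    detected: list of (u, v) detections in image coordinates.
    expected: dict crater_id -> (u, v) expected image coordinates,
              obtained by projecting catalog craters at the estimated pose.
    """
    matches = {}
    for det in detected:
        best_id, best_d = None, gate
        for cid, exp in expected.items():
            d = math.dist(det, exp)          # Euclidean pixel distance
            if d < best_d:
                best_id, best_d = cid, d
        if best_id is not None:
            matches[best_id] = det
    return matches

# Two detections; only one lies near an expected crater.
expected = {"A": (10.0, 10.0), "B": (100.0, 100.0)}
detected = [(12.0, 11.0), (300.0, 300.0)]
matches = match_craters(detected, expected)
# matches == {"A": (12.0, 11.0)}
```

As the text observes, when the pose estimate is far from the truth every expected crater shifts in the image, and this nearest-neighbour pairing begins to produce mismatches, which is the failure mode that motivated the consensus check of [12].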
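The translation-vector consistency test attributed to [12] can be sketched as below. For brevity this deterministic version tries every observed translation as a hypothesis and keeps the largest consensus set, rather than sampling randomly as full RANSAC [13] does; the tolerance value is an assumption.

```python
def filter_matches(pairs, tol=5.0):
    """Keep crater matches whose detected-minus-expected translation
    agrees with the largest consensus set of translations.

    pairs: list of (detected_uv, expected_uv) tuples in pixels.
    """
    vectors = [(d[0] - e[0], d[1] - e[1]) for d, e in pairs]
    best = []
    for vx, vy in vectors:                      # each vector as hypothesis
        inliers = [p for p, (wx, wy) in zip(pairs, vectors)
                   if abs(wx - vx) <= tol and abs(wy - vy) <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best

# Three matches share a ~(5, 5) px translation; the fourth is a mismatch
# with translation (50, -40) and falls outside the consensus.
pairs = [((15., 15.), (10., 10.)),
         ((25., 24.), (20., 20.)),
         ((6., 5.), (1., 0.)),
         ((90., 0.), (40., 40.))]
good = filter_matches(pairs)
```

Because only a handful of craters are matched per frame, exhaustively testing each translation hypothesis is cheap and avoids the nondeterminism of random sampling; the accept/reject logic is the same consistency idea described in the text.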
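The DeepMoon-style training-image preparation described above (random square crop of the mosaic, downsample to 256×256) can be sketched as follows. The orthographic-reprojection step is omitted, and the nearest-neighbour downsampling stands in for whatever resampling the original pipeline actually used; the function name is illustrative.

```python
import numpy as np

def make_training_image(mosaic, rng, out_size=256):
    """Crop a random square from a mosaic array and downsample it to
    out_size x out_size (nearest-neighbour, for illustration only).
    The orthographic-projection step described in the text is omitted.
    """
    h, w = mosaic.shape
    side = int(rng.integers(out_size, min(h, w) + 1))  # crop >= out_size
    top = int(rng.integers(0, h - side + 1))
    left = int(rng.integers(0, w - side + 1))
    crop = mosaic[top:top + side, left:left + side]
    idx = (np.arange(out_size) * side) // out_size     # index map
    return crop[np.ix_(idx, idx)]                      # square resample

# Stand-in for the 100 m/px WAC mosaic (real mosaic is far larger).
rng = np.random.default_rng(0)
mosaic = np.arange(1000 * 2000, dtype=np.float32).reshape(1000, 2000)
img = make_training_image(mosaic, rng)
```

Repeating this crop over random locations and scales yields training images at varied ground resolutions, consistent with the multi-scale crater appearance the detector must learn.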