Vision-Based Shipwreck Mapping: on Evaluating Features Quality and Open Source State Estimation Packages

A. Quattrini Li, A. Coskun, S. M. Doherty, S. Ghasemlou, A. S. Jagtap, M. Modasshir, S. Rahman, A. Singh, M. Xanthidis, J. M. O'Kane and I. Rekleitis
Computer Science and Engineering Department, University of South Carolina
Email: [albertoq,yiannisr,jokane]@cse.sc.edu, [acoskun,dohertsm,sherving,ajagtap,modasshm,srahman,akanksha,mariosx]@email.sc.edu

Abstract—Historical shipwrecks are important for many reasons, including historical, touristic, and environmental ones. Currently, limited efforts for constructing accurate models are performed by divers that need to take measurements manually using a grid and measuring tape, or using handheld sensors. A commercial product, Google Street View¹, contains underwater panoramas from select locations around the planet, including a few shipwrecks, such as the SS Antilla in Aruba and the Yongala at the Great Barrier Reef. However, these panoramas contain no geometric information, and thus no 3D representations of these wrecks are available. This paper provides, first, an evaluation of visual feature quality on datasets that span from indoor to underwater environments. Second, by testing some open-source vision-based state estimation packages on different shipwreck datasets, insights on open challenges for shipwreck mapping are presented. Some good practices for replicable results are also discussed.

I. INTRODUCTION

Historical shipwrecks tell an important part of history and at the same time have a special allure for most humans, as exemplified by the plethora of movies and artworks about the Titanic. Shipwrecks are also among the top diving attractions all over the world; see Fig. 1. Many historical shipwrecks are deteriorating due to warm salt water, human interference, and extreme weather (frequent tropical storms). Constructing accurate models of these sites will be extremely valuable not only for the historical study of the shipwrecks, but also for monitoring subsequent deterioration. Currently, limited mapping efforts are performed by divers that need to take measurements manually using a grid and measuring tape, or using handheld sensors [1]—a tedious, slow, and sometimes dangerous task. Automating such a task with underwater robots equipped with cameras—e.g., Aqua [2]—would be extremely valuable. Some attempts have been performed by using underwater vehicles with expensive setups—e.g., Remotely Operated Vehicles (ROVs) [3], [4].

Autonomous mapping using visual data has received a lot of attention in the last decade, resulting in many published research papers and open source packages, supported by impressive demonstrations. However, applying any of these packages to a new dataset has proven extremely challenging because of two main factors: software engineering challenges, such as lack of documentation, compilation issues, and dependencies; and algorithmic limitations—e.g., special initialization motions for monocular cameras, and the number of and sensitivity to parameters [5]. Also, most of them are usually developed and tested with urban settings in mind.

This paper first analyzes different feature detectors and descriptors on several datasets taken from indoor, urban, and underwater domains. Second, some open source packages for visual SLAM are evaluated. The main contribution of this paper is to provide, based on this evaluation, insights on the open challenges in shipwreck mapping, so that they can be taken into consideration when designing a new mapping algorithm.

The next section discusses research on shipwreck mapping. Section III presents an analysis of visual feature quality. Section IV shows qualitative results of some visual SLAM algorithms. Finally, Section V concludes the paper by discussing some insights gained from this evaluation.

Fig. 1. Aqua robot at the Pamir shipwreck, Barbados.

¹https://www.google.com/maps/streetview/#oceans
II. RELATED WORK

Different technologies have been used to survey shipwreck areas, including ROVs, AUVs, and diver-held sensors. Nornes et al. [6] acquired stereo images from an ROV off the coast of Trondheim Harbour, Norway, where the M/S Herkules shipwreck sank; commercially available software was then used to process the images and reconstruct a model of the shipwreck. In [3] a deepwater ROV is adopted to map, survey, sample, and excavate a shipwreck area. Sedlazeck et al. [4] reconstruct a shipwreck in 3D by preprocessing images collected by an ROV and applying a Structure from Motion based algorithm. The images used for testing that algorithm contained some structure and a lot of background where only water was visible.

Other works use AUVs to collect datasets and build shipwreck models. Bingham et al. [7] used the SeaBED AUV to build a texture-mapped bathymetry of the Chios shipwreck site in Greece. Gracias et al. [8] deployed the Girona500 AUV to survey the ship 'La Lune' off the coast of Toulon, France. Bosch et al. [9] integrated an omnidirectional camera into an AUV to create underwater virtual tours.

Integrating inertial and visual data helps to better estimate the pose of the camera, especially in the underwater domain, where images are not as reliable as in ground environments. Hogue et al. [10] demonstrated shipwreck reconstruction with a stereo vision-inertial sensing device. Moreover, structured light can provide extra information to recover the structure of the scene; in [11], structured light was used to aid the reconstruction of high resolution bathymetric maps of underwater shipwreck sites.

Methods to process such datasets are becoming more and more reliable. Campos et al. [12] proposed a method to reconstruct underwater surfaces using range scanning technology: given raw point sets, smooth approximations of the surfaces by triangle meshes are computed. The method was tested on several datasets, including the ship La Lune. Williams et al. [13] described techniques to merge multiple datasets, which include stereo images of a shipwreck off the coast of the Greek island of Antikythera, collected during different expeditions.

However, these works usually collect data by teleoperating the robot and process the data offline. To automate the exploration task, real-time methods for localizing the robot and mapping the environment at the same time are necessary.

III. VISUAL FEATURE QUALITY

There are two main classes of visual SLAM techniques: sparse and dense. Sparse visual SLAM utilizes selected features to track the motion of the camera and reconstruct the scene. Dense visual SLAM uses segments of the image and attempts to reconstruct as much of the scene as possible. In [14] SURF features are used for localizing and mapping scenes in the underwater domain. It is thus important to identify stable features to use. The quality of some feature detectors, descriptors, and matchers in the underwater domain is assessed by Shkurti et al. [15]. The influence of underwater conditions such as blurring and illumination changes has been studied by Oliver et al. [16].

In the following, feature detectors and descriptors that are available as open source implementations in OpenCV² are tested using their default parameters. The tests are run on a subset of images from several datasets from different domains (see Fig. 2), including:
• an indoor environment, collected with a Clearpath Husky equipped with a camera;
• outdoor urban environments, specifically the KITTI dataset [17];
• outdoor natural environments, in particular the Devon Island rover navigation dataset [18];
• coral reefs, collected by a drifter equipped with a monocular camera [19];
• and shipwrecks, collected with the Aqua2 underwater robot equipped with front and back cameras.

Figs. 3-7 show: (a) the average number of detected features, (b) the number of inliers used for the estimated homographies, and (c) the number of images matched together. Each figure presents these measures for one of the datasets, with the different combinations of feature detectors and descriptors available in OpenCV. Note that the number of features detected in a single frame is not necessarily a measure of how good a feature is. Many features cannot be matched with features in subsequent frames, and thus do not contribute to the robot localization and the environment reconstruction. Other features are not stable, changing location over time. Indeed, even when some methods are able to find many features, the number of inliers is generally relatively low. Some combinations of feature detector and descriptor extractor are not present in the figures, because no feature could be found.

²http://opencv.org
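To make the evaluation procedure concrete, the following minimal sketch (not the paper's exact test harness) computes the three reported measures for one detector/descriptor/matcher combination: keypoints detected with FAST, described with DAISY, and matched across consecutive frames with a brute-force matcher, counting the inliers of a RANSAC-estimated homography. The frame file names are placeholders, a pair of images is counted as matched whenever the homography is supported by inliers (a simplifying assumption), and DAISY assumes an OpenCV build with the contrib (xfeatures2d) modules.

```python
import cv2
import numpy as np

# One of the many detector/descriptor/matcher combinations evaluated;
# DAISY (like SIFT and SURF) lives in the opencv-contrib xfeatures2d module.
detector = cv2.FastFeatureDetector_create()
extractor = cv2.xfeatures2d.DAISY_create()
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

def detect_and_describe(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    keypoints = detector.detect(gray)
    return extractor.compute(gray, keypoints)  # -> (keypoints, descriptors)

def homography_inliers(kp1, des1, kp2, des2, min_matches=10):
    # Match descriptors between two frames and count the RANSAC inliers
    # of the homography estimated from the matched keypoint locations.
    matches = matcher.match(des1, des2)
    if len(matches) < min_matches:
        return 0
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return int(mask.sum()) if mask is not None else 0

frames = ["frame%04d.png" % i for i in range(100)]  # placeholder file names
detected, inliers, images_matched = [], [], 0
prev_kp, prev_des = None, None
for path in frames:
    kp, des = detect_and_describe(path)
    detected.append(len(kp))
    if prev_des is not None and des is not None:
        n = homography_inliers(prev_kp, prev_des, kp, des)
        inliers.append(n)
        images_matched += int(n > 0)
    prev_kp, prev_des = kp, des

print(np.mean(detected), np.mean(inliers), images_matched)
```

Sweeping the detector and extractor objects over the available OpenCV implementations yields combinations such as those listed in the legends of Figs. 3-7.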
In the indoor environment, several combinations of feature detector/descriptor extractor work well, and the features are quite stable, as can be observed from the low standard deviation in the number of images matched; see Fig. 3.

In outdoor urban environments, the distributions of the number of detected features and of the number of inliers are similar to those in the indoor environment; see Fig. 3 and Fig. 4. This similarity can be explained by the fact that both classes of environments are quite rich in features and are similarly structured. However, the number of matched images decreases in the urban environment compared to the results from the indoor one. One of the reasons is that in urban environments dynamic obstacles, such as cars and bikers, are often present in the scene, violating the common assumption of static scenes.

In the outdoor natural environment datasets, while the number of detected features is high, as there are many rocks in the scene, the number of inliers drops; see Fig. 5. The probability of mismatches is higher, given that the rocky terrain does not have any distinctive features. This happens also in the coral reef dataset. The combinations of feature detector and descriptor extractor have different distributions for the coral reef dataset compared to the results for the above-water ones.

In the shipwreck datasets, the number of features varies a lot over images—i.e., there is high variance in the number of detected features—compared to the other datasets; see Fig. 7. One of the reasons is that in shipwrecks the illumination changes, leading to situations in which no features can be detected.

Fig. 2. Characteristic images from the evaluated datasets, namely indoor, outdoor urban, outdoor natural, coral reefs, shipwreck.
In terms of feature detector and descriptor, in this evaluation different combinations provide good results in the indoor environment, including combinations of ORB, SIFT, SURF, and DAISY, as most of the images are matched. In the outdoor urban environment, the combination SURF/SIFT provides the best results. In the outdoor natural environment, BRISK/SIFT, FAST/SIFT, FAST/SURF, GFTT/SIFT, and Agast/SIFT show the highest number of images matched. In coral reefs, only SURF/SIFT and ORB/SURF display many images matched. In shipwrecks, GFTT/DAISY, ORB/SURF, Agast/DAISY, SURF/DAISY, FAST/SIFT, FAST/DAISY, ORB/DAISY, GFTT/SIFT, and BRISK/DAISY all present good results. These results suggest that some feature detectors work better on specific datasets than others.

IV. VISUAL SLAM QUALITATIVE EVALUATION

Six of the most promising open source packages are evaluated on four different datasets. The packages have been selected to cover different types of state-of-the-art techniques—i.e., feature-based: PTAM [20], ORB-SLAM [21]; semi-direct: SVO [22]; direct: LSD-SLAM [23]; neural-based: RatSLAM [24]; global optimization: g2o [25].

The datasets used have been collected over the years inside and outside shipwrecks off the coast of Barbados, by employing an Aqua2 underwater robot, an AUV equipped with an IMU and front/back cameras, and also by employing a GoPro 3D Dual Hero System with two GoPro Hero3+ cameras operated by a scuba diver; see Fig. 8. As the data were collected at different times, the datasets contain images taken under different conditions, including lighting variations, variable visibility, different levels of turbidity, loss of contrast, and different motion types, thus covering a broad spectrum of situations in which the robot might find itself. These datasets are in the "rosbag" format³ defined for ROS, so that they can easily be exported and processed; a minimal extraction sketch is shown at the end of this section.

Several tests were performed in order to reasonably tune each package's parameters, following all available suggestions from the packages' authors.

ORB-SLAM has shown the most promising results, being able to track features over time, while LSD-SLAM, being a method based on optical flow, is more affected by illumination changes. RatSLAM utilizes a learning process for adjusting how neurons are triggered; thus it needs the robot to visit the same place multiple times to improve the trajectory. SVO and PTAM work reasonably well in a small area, resulting in a correct partial trajectory in the tested datasets. g2o is employed on a small number of keyframes in some of the methods, including ORB-SLAM and LSD-SLAM, and reduces the error in the trajectory and the map.

Shipwrecks are quite rich in texture because they are biodiversity hotspots. This allows the methods to track the camera reasonably well. However, mapping them presents a different set of challenges compared to other scenarios—e.g., coral reef monitoring. While the robot is moving, illumination can change greatly in overlapping scenes. This leads in most cases to a loss of localization; see Fig. 9. Applying image restoration techniques, such as the ones proposed in [26], [27], that can be used in real time will be part of our future work.

In Fig. 10 a sample run of ORB-SLAM and LSD-SLAM is shown on a GoPro dataset in which a diver held the camera facing downwards and hovered over a shipwreck. As LSD-SLAM is a direct method that relies on pixel intensities, the presence of moving objects such as fish affects its performance. Instead, ORB-SLAM, a feature-based method, is more robust to the presence of dynamic obstacles. Indeed, ORB-SLAM shows a better trajectory than LSD-SLAM, although, being feature-based, it provides only a sparse reconstruction of the scene. Nevertheless, note that some errors are still present, because features could be detected on fish and tracked for some frames. For a test of a larger set of open source packages, please refer to [5].

³http://www.ros.org/wiki/rosbag
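Since the datasets are distributed as ROS bags, individual camera frames can be pulled out with a few lines of Python before being fed to an offline pipeline. The sketch below assumes a ROS1 environment with the rosbag and cv_bridge packages; the bag file name and image topic are placeholders that depend on the specific recording.

```python
import cv2
import rosbag
from cv_bridge import CvBridge

bridge = CvBridge()
bag = rosbag.Bag("shipwreck_run.bag")  # placeholder bag file name
for topic, msg, stamp in bag.read_messages(topics=["/camera/image_raw"]):
    # Convert each sensor_msgs/Image message to an OpenCV BGR array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    cv2.imwrite("frames/%d.png" % stamp.to_nsec(), frame)
bag.close()
```

Alternatively, the bags can be replayed with "rosbag play" so that a package under test subscribes to the same topics it would see on the live robot.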
V. DISCUSSION AND CONCLUSIONS

Shipwrecks provide an interesting testbed for mapping algorithms, which need to be robust to illumination changes, lack of contrast, and the presence of dynamic obstacles. Providing a method that is robust in such a domain will improve the state of the art in state estimation in other domains as well. This preliminary test also highlights some good practices for replicable and measurable research in the field. Releasing the code as open source allows other researchers to test and possibly adapt the methods. Indeed, there are many methods from other research groups that showed great performance without releasing the code—e.g., [28]—thus making them hard to evaluate and use. Another important aspect is the plethora of parameters that need to be tuned for the methods, such as the number of tracked features and the number of RANSAC iterations. As finding the optimal set of parameters is not easy, especially if the experimenter is not the developer, the recommended values for these parameters and the effects induced by their variation should be well documented. Among the special requirements of certain packages are special motions that need to be performed to initialize the algorithm; for example, PTAM requires a lateral translation, a motion difficult to achieve with most vehicles. When collecting datasets, it is important to consider such special motions. Moreover, the availability of public datasets together with ground truth would boost the proper evaluation and benchmarking of the packages.

Ongoing work includes a more in-depth analysis of the feature quality, a quantitative evaluation and comparison of the packages, the study of the effects of the parameters, and the investigation of more open-source packages on the same datasets. In addition, more data will be collected off the coast of South Carolina, and the datasets will be made public to foster benchmarking. The results of this evaluation will serve as input for improving the state of the art in vision-based state estimation methods for shipwreck mapping, and possibly for the underwater domain in general.

ACKNOWLEDGMENT

The authors would like to thank the generous support of a Google Faculty Research Award and of the National Science Foundation grant (NSF 1513203).

REFERENCES

[1] J. Henderson, O. Pizarro, M. Johnson-Roberson, and I. Mahon, "Mapping submerged archaeological sites using stereo-vision photogrammetry," Int. Journal of Nautical Archaeology, vol. 42, pp. 243-256, 2013.
[2] J. Sattar, G. Dudek, O. Chiu, I. Rekleitis, P. Giguere, A. Mills, N. Plamondon, C. Prahacs, Y. Girdhar, M. Nahon, and J.-P. Lobos, "Enabling autonomous capabilities in underwater robotics," in Proc. of IEEE/RSJ Int. Conference on Intelligent Robots and Systems, 2008, pp. 3628-3634.
[3] F. Søreide and M. E. Jasinski, "Ormen Lange: Investigation and excavation of a shipwreck in 170m depth," in Proc. of MTS/IEEE OCEANS, 2005, pp. 2334-2338.
[4] A. Sedlazeck, K. Köser, and R. Koch, "3D reconstruction based on underwater video from ROV Kiel 6000 considering underwater imaging conditions," in Proc. of MTS/IEEE OCEANS, 2009, pp. 1-10.
[5] A. Quattrini Li, A. Coskun, S. M. Doherty, S. Ghasemlou, A. S. Jagtap, M. Modasshir, S. Rahman, A. Singh, M. Xanthidis, J. M. O'Kane, and I. Rekleitis, "Experimental comparison of open source vision based state estimation algorithms," in Int. Symp. on Experimental Robotics, 2016.
[6] S. M. Nornes, M. Ludvigsen, Ø. Ødegard, and A. J. Sørensen, "Underwater photogrammetric mapping of an intact standing steel wreck with ROV," in Proc. of the Int. Federation of Automatic Control (IFAC), 2015, pp. 206-211.
[7] B. Bingham, B. Foley, H. Singh, R. Camilli, K. Delaporta, R. Eustice, A. Mallios, D. Mindell, C. Roman, and D. Sakellariou, "Robotic tools for deep water archaeology: Surveying an ancient shipwreck with an autonomous underwater vehicle," Journal of Field Robotics, vol. 27, no. 6, pp. 702-717, 2010.
[8] N. Gracias, P. Ridao, R. Garcia, J. Escartin, M. L'Hour, F. Cibecchini, R. Campos, M. Carreras, D. Ribas, N. Palomeras, L. Magi, A. Palomer, T. Nicosevici, R. Prados, R. Hegedus, L. Neumann, F. de Filippo, and A. Mallios, "Mapping the Moon: Using a lightweight AUV to survey the site of the 17th century ship 'La Lune'," in Proc. of MTS/IEEE OCEANS, 2013, pp. 1-8.
[9] J. Bosch, P. Ridao, D. Ribas, and N. Gracias, "Creating 360° underwater virtual tours using an omnidirectional camera integrated in an AUV," in Proc. of MTS/IEEE OCEANS – Genova, 2015, pp. 1-7.
[10] A. Hogue, A. German, and M. Jenkin, "Underwater environment reconstruction using stereo and inertial data," in Proc. of IEEE Int. Conference on Systems, Man and Cybernetics, 2007, pp. 2372-2377.
[11] C. Roman, G. Inglis, and J. Rutter, "Application of structured light imaging for high resolution mapping of underwater archaeological sites," in Proc. of MTS/IEEE OCEANS – Sydney, 2010, pp. 1-9.
[12] R. Campos, R. Garcia, P. Alliez, and M. Yvinec, "A surface reconstruction method for in-detail underwater 3D optical mapping," The Int. Journal of Robotics Research, vol. 34, no. 1, pp. 64-89, 2015.
[13] S. B. Williams, O. Pizarro, and B. Foley, Return to Antikythera: Multi-session SLAM Based AUV Mapping of a First Century B.C. Wreck Site. Springer Int. Publishing, 2016, pp. 45-59.
[14] J. Aulinas, M. Carreras, X. Llado, J. Salvi, R. Garcia, R. Prados, and Y. R. Petillot, "Feature extraction for underwater visual SLAM," in Proc. of MTS/IEEE OCEANS – Spain, 2011, pp. 1-7.
[15] F. Shkurti, I. Rekleitis, and G. Dudek, "Feature tracking evaluation for pose estimation in underwater environments," in Proc. of the Canadian Conference on Computer and Robot Vision, 2011, pp. 160-167.
[16] K. Oliver, W. Hou, and S. Wang, "Image feature detection and matching in underwater conditions," in Proc. SPIE 7678, Ocean Sensing and Monitoring II, 2010.
[17] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets Robotics: The KITTI Dataset," The Int. Journal of Robotics Research, vol. 32, no. 11, pp. 1231-1237, 2013.
[18] P. T. Furgale, P. Carle, J. Enright, and T. D. Barfoot, "The Devon Island Rover Navigation Dataset," Int. Journal of Robotics Research, vol. 31, no. 6, pp. 707-713, 2012.
[19] M. Xanthidis, A. Quattrini Li, and I. Rekleitis, "Shallow coral reef surveying by inexpensive drifters," in Proc. of MTS/IEEE OCEANS – Shanghai, 2016, pp. 1-9.
[20] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in IEEE and ACM Int. Symp. on Mixed and Augmented Reality, 2007, pp. 225-234.
[21] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, "ORB-SLAM: A Versatile and Accurate Monocular SLAM System," IEEE Trans. Robot., vol. 31, no. 5, pp. 1147-1163, 2015.
[22] C. Forster, M. Pizzoli, and D. Scaramuzza, "SVO: Fast semi-direct monocular visual odometry," in Proc. of IEEE Int. Conference on Robotics and Automation, 2014, pp. 15-22.
[23] J. Engel, T. Schöps, and D. Cremers, "LSD-SLAM: Large-Scale Direct Monocular SLAM," in European Conference on Computer Vision (ECCV), ser. Lecture Notes in Computer Science, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds. Springer Int. Publishing, 2014, vol. 8690, pp. 834-849.
[24] D. Ball, S. Heath, J. Wiles, G. Wyeth, P. Corke, and M. Milford, "OpenRatSLAM: an open source brain-based SLAM system," Autonomous Robots, vol. 34, no. 3, pp. 149-176, 2013.
[25] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, "g2o: A general framework for graph optimization," in IEEE Int. Conf. on Robotics and Automation, 2011, pp. 3607-3613.
[26] I. Vasilescu, C. Detweiler, and D. Rus, "Color-accurate underwater imaging using perceptual adaptive illumination," Autonomous Robots, vol. 31, no. 2, pp. 285-296, 2011.
[27] G. Bianco, M. Muzzupappa, F. Bruno, R. Garcia, and L. Neumann, "A new color correction method for underwater imaging," in ISPRS/CIPA Workshop on Underwater 3D Recording and Modeling, 2015.
[28] J. Hesch, D. Kottas, S. Bowman, and S. Roumeliotis, "Consistency Analysis and Improvement of Vision-aided Inertial Navigation," IEEE Transactions on Robotics, vol. 30, no. 1, pp. 158-176, 2014.
Fig. 3. Number of features, of inliers, and of images matched in the indoor environment; the legend reports the feature detector, the descriptor, and the matcher used.

Fig. 4. Number of features, of inliers, and of images matched in the outdoor urban environment; the legend reports the feature detector, the descriptor, and the matcher used.

Fig. 5. Number of features, of inliers, and of images matched in the outdoor natural environment; the legend reports the feature detector, the descriptor, and the matcher used.

Fig. 6. Number of features, of inliers, and of images matched in the coral reef dataset; the legend reports the feature detector, the descriptor, and the matcher used.

Fig. 7. Number of features, of inliers, and of images matched in the shipwreck datasets; the legend reports the feature detector, the descriptor, and the matcher used.

Fig. 8. Characteristic images from the evaluated datasets: AUV outside and inside wreck; manual underwater outside and inside wreck.

Fig. 9. Sample of images captured by the Aqua robot inside the Bajan Queen shipwreck within an interval of 30 seconds.

Fig. 10. Sample run of ORB-SLAM (first and second figures) and LSD-SLAM (third and fourth figures) on footage collected with a GoPro.