Characterizing a Debris Field Using Digital Mosaicking and CAD Model Superimposition from Underwater Video

Jay M. Vincelli, Fatih Calakli, Michael A. Stone, Graham E. Forrester, Timothy Mellon, and John D. Jarrell

Abstract
Identifying submerged objects is critical for several disciplines such as marine archaeology and search and rescue. However, identifying objects in underwater searches presents many challenges, particularly if the only data available to analyze is poor-quality video where the videographer did not plan for photogrammetric techniques to be utilized. In this paper, we discuss the use of adaptive sampling of the underwater video to extract sharp still images for stitching and analysis, and the creation of mosaicked images by identifying and matching local scale-invariant feature transform features using computationally efficient algorithms. Computer-aided design models of suspected aircraft components were superimposed, and a feature common to multiple mosaicked images was used to assess the goodness of fit of the purported objects. The superimposition method was replicated using landing gear from a reference aircraft and a rope of known dimensions, and compared favorably against the remotely operated vehicle (ROV) analysis results.

Jay M. Vincelli is with Materials Science Associates, 315 Commerce Park Rd., Unit 1, North Kingstown, RI 02893 ([email protected]).
Fatih Calakli, Michael A. Stone, and John D. Jarrell are with Materials Science Associates, 315 Commerce Park Rd., Unit 1, North Kingstown, RI 02893; and Brown University, 182 Hope St., Providence, RI 02912.
Graham E. Forrester is with the University of Rhode Island, Center for Biotechnology and Life Sciences, 120 Flagg Road, Kingston, RI 02881.
Timothy Mellon is an independent consultant.
Photogrammetric Engineering & Remote Sensing, Vol. 82, No. 3, March 2016, pp. 223–232. 0099-1112/16/223–232. © 2016 American Society for Photogrammetry and Remote Sensing. doi: 10.14358/PERS.82.3.223

Introduction
Documenting seabed environments, mapping the seabed, and identifying submerged objects is critical for several disciplines, including marine archaeology, geology, biology, search and rescue, offshore drilling, and the shipping industry. Establishing positive identification of objects in underwater searches presents many challenges. The costs involved in search and recovery operations make false-positive identifications a pricey and protracted error. Traditionally, a dive team or a remotely operated underwater vehicle (ROV) will search the proposed area for pieces of potential wreckage in a predetermined search pattern (Lirman et al., 2007; Zhukovsky et al., 2013). Underwater mapping is still largely carried out by manual surveys and distance-based measurements (Telem and Filin, 2013; Ruppé and Barstad, 2002). These tasks become much more difficult when physical contact with the studied objects is not possible and when scaling markers are absent. In these circumstances, an HD video recording will often be used while executing search patterns (Negahdaripour and Khamene, 2000). On its own, high-definition video can be a powerful tool for identification but lacks many features useful for characterizing a debris field (Campos et al., 2014).

For instance, the camera on an ROV can only capture a limited view of the seabed. Even with wide-angle lenses, the field of view in focus is only a small portion of the total surroundings. When the seabed contains many objects of interest over a widely-spaced area, this can be problematic. Without an expanded view of the proposed debris field, objects and their ratiometric relationships to each other cannot be analyzed. While it has long been possible to extract still frames, or images, from video, still frames can suffer from the effects of compression due to interlacing and distortion (Negahdaripour and Khamene, 2000). Without intensive processing, a comparison between single images can show qualitative relationships between objects but often lacks meaningful analytical observation. Furthermore, each image is taken from a different point of view, skewing potential analysis.

In some cases, imagery from different perspectives is desired, such as in stereo-photogrammetry, where common points are identified on each image and rays can be constructed to each point to triangulate the position of the points and allow for three-dimensional reconstruction. Using photogrammetry tools such as Agisoft Photoscan (Agisoft LLC, St. Petersburg, Russia) allows for orthographic reconstruction in some situations. Here, we present an alternate approach for instances when the method of video collection and the conditions of the water mean that photogrammetry cannot successfully be employed. We developed a method that allows us to retrospectively identify objects and scale them in situations where no scaling device was used during the video recording and where there is erratic or unplanned tracking of the camera. This is typical when the video recording was not intended to be used for object analysis.

We illustrate this approach using a case study in which underwater ROV footage with an irregular search pattern and no method for scaling images was used (Figure 1). The approach uses image processing techniques to extract frames from HD video, de-interlace them, remove time stamps, and stitch the remastered still frames together to create a mosaicked debris field with a single perspective. We further present a strategy that uses an object of known scale to measure other proposed pieces of wreckage and aid in identification.

Background
In this case study, we describe using the combination of image processing and mosaicking techniques, with underwater video as source data, to assess the geometry of objects purported to be from the 02 July 1937 crash site of Amelia Earhart's lost airplane, a Lockheed Electra Model 10E, construction number 1055, off of the island of Nikumaroro in the western



Figure 1. The tubular structure illustrates the camera trajectory derived from the underwater ROV video; it consists of tightly-spaced, multi-colored rectangles, where each rectangle represents the camera pose in a different frame of the video. The sparse point cloud (granular data) represents the view of the seabed generated from the underwater ROV video.

Figure 2. Amelia Earhart’s Lockheed Electra Model 10E aircraft. Scanned from Lockheed Aircraft Since 1913 by René Francillon (Photo credit: USAF).

Pacific Ocean. This airplane has an overall length of approximately 11.8 m, a wingspan of 16.8 m, and a height of 3.1 m (Figure 2). Two of the objects proposed to be seen in the video are the front landing gear and the rear wheel. Historical documents were reviewed, and the rear tire was identified in a 19 May 1937 aircraft inspection report to be a "Goodyear 16×7" tire. Exemplar front and rear landing gears from an extant Lockheed Electra Model 10E, construction number 1042, were provided for dimensional and visual analysis.

Video was recorded during an ROV search of the suspected crash site by a third party, at a depth of 150 to 300 m. We received the video for analysis retrospectively and were tasked with extracting as much information as possible from the video footage itself. During an internal review of the video, two objects resembling the front and rear landing gear were identified. The high levels of sedimentation and/or calcareous growth covering the purported objects prevented positive identification. However, meaningful analysis could still be performed to identify whether the underwater objects seen in the video are consistent with objects


from the airplane. Due to the remoteness of the crash site and the difficulty involved in safely retrieving the objects, the objective of this study was to assess the geometry of the purported airplane components to determine whether additional investigation of these objects, such as retrieval, is merited.

This case study has the following features, which may be widespread in underwater object analysis. It involved a search for debris over a wide area in water too deep for divers, so an ROV was employed. The ROV made an erratic search pattern as it followed the contours of the steep underwater landscape, the slope of a Pacific atoll. As the ROV pilot examined potential pieces of debris, there was no scale marker in the video, and the debris were far enough apart that they never appeared together in the same frame of the video. Due to the depth, light from the surface was completely attenuated, and a single light source originating from the ROV was used. Bubbles caused by cavitation of the propellers also obscured segments of the video.

Digital image mosaicking is a process for combining several images by detecting overlapping regions. The detected regions are then used to compute two-dimensional perspective image transformations, and each image is transformed and ratioed such that the final mosaic image is from a single perspective. Underwater photogrammetry was first used in the 1960s and 1970s using a pair of stereo cameras and adapted aerial surveying techniques (Hohle, 1971). In the 1980s, image matching to produce a mosaic from a set of local interest points was developed (Moravec, 1977). This method was subsequently improved into efficient motion tracking and structure from motion (Harris, 1992). As underwater vehicles become cheaper and more prevalent, their use has rapidly increased. Underwater photogrammetry is becoming a standard tool in many different disciplines, including maintenance and inspection of drilling platforms and oil pipelines, geological surveys, environmental monitoring, and underwater archaeology, where accurate measurement at short range is necessary (Leatherdale and Turner, 1991; Gracias et al., 2003; Santos-Victor and Sentieiro, 1994; Maas and Hampel, 2006). Many other underwater discovery methods differ in accuracy, cost, and simplicity (Telem and Filin, 2013; Telem and Filin, 2010). Image-based analysis offers great efficiency in handling large volumes of detail and the optional benefit of off-site documentation and analysis (Drap et al., 2007). In addition, underwater photogrammetry can be used for navigation and orientation in the marine environment, where long-range electromagnetic signal communication is absent (Gracias et al., 2003; Gracias and Santos-Victor, 2001).

Image mosaicking technology has many applications. Applications utilizing panoramic techniques are widely available for smartphones, where overlapping photographs are taken from a single position in three-dimensional space. In aerial photogrammetry, overlapping photographs are taken from multiple positions in three-dimensional space, an approach heavily used in scientific and research applications as well. The fieldwork needed to construct a three-dimensional representation of a site can be dramatically decreased by using digital photogrammetry: a single camera can be used to capture photographs to be processed off site. The methodology can be seen in the excavation of the Phanagoria wreck (Zhukovsky et al., 2013). At sites where the use of photogrammetry has been predetermined, a strategy to overlap photographs in regular patterns can be implemented (Drap et al., 2007; Canciani et al., 2003). In instances where photogrammetry was not a priority, additional processing may be needed to make the footage useful for taking measurements.

Methodology
Overview
Two purported objects were identified in the ROV video, the front landing gear and the rear landing gear. A rope was observed adjacent to both objects. If the size and orientation of each landing gear could be used to independently measure the diameter of the rope, then the purported objects are consistent with the airplane components in geometry and relative size. Furthermore, the ROV claw was also observed holding a piece of rope, providing a third independent way of measuring the rope, this time using a known object with known dimensions, the ROV claw. Finally, all three independent measurements of the rope can be compared to identify whether every mode of measurement is consistent with the others.

An orthographic reconstruction was attempted, but due to the quality and path of the video, this was unsuccessful. An alternative means of measurement using a perspective view was performed.

Figure 3 illustrates an overview of the approach used for assessing the geometry of the two suspected objects identified in the video, the front and rear landing gears. A rope, unambiguously man-made, was also identified near each object. However, the rope was never seen in any frame together with the proposed front landing gear object.

Figure 3. The overall process used in this case study.


Image stitching was used to place both the rope and the proposed front landing gear object together in a single image from the same perspective. CAD models of both the front and rear landing gears were created using measurements taken from the front and rear landing gears of an extant Lockheed Electra 10E, construction number 1042, and information from historical documents. These CAD models were subsequently overlaid on historical photographs of the subject Lockheed Electra Model 10E to verify visually that a good fit was obtained. A CAD model overlay was then performed on mosaicked still-frame images taken from the ROV video of each proposed landing gear. Both mosaicked images contained a rope, and using the overlaid CAD models and the top-down view of the flat seabed as references, the diameter of the rope was measured within the CAD software SolidWorks. Separately, a piece of rope was identified in the ROV video when a mechanical claw on the ROV grasped the rope. The dimensions of the mechanical claw were measured and then used to estimate the diameter of the rope. Finally, the diameter of the rope was compared among all three sources of information, the proposed front landing gear, the proposed rear landing gear, and the mechanical claw, to assess whether all three measurements of the diameter of the rope were consistent.

Figure 4 illustrates the overall video imaging and mosaicking techniques utilized to create mosaics containing non-blurry images from the video with the timestamp removed. These steps are described in more detail in the following sections.

Figure 4. Video processing and mosaicking techniques utilized.

Adaptive Sampling of the Video
The video that was the basis of the analysis was recorded using a Sony FCBH11 high-definition color block camera in the MPEG-4 codec using the BT.709 RGB color space, with a resolution of 1,440 × 1,080, 29.97 frames per second, and a video bit rate of 55.7 megabits per second. The camera is specified to have a focal length between 5.1 mm and 51.0 mm. Although the focal length at the time of capture is unknown, it was observed to be constant during the video. The camera path in the video was erratic, which is not uncommon during ROV operation due to the ebb and flow of water moving the ROV relative to a stationary scene. This causes the majority of the video to be blurry or pointed at regions of no interest, and thus unusable for further analysis. On occasion, however, the camera would temporarily be stationary during changes in direction, such as the transition from ebb to flow, or have extended stationary periods while the ROV rested on the seabed.

These conditions, typical for ROV video, necessitated adaptive sampling of the video to extract sharp still images for stitching and analysis. The images were first sampled according to visual sharpness to remove blurry frames. The pixels were converted to grayscale, and the grayscale pixel values were compared with neighboring pixels. For sharp images, changes in contrast are abrupt, which is identified by large grayscale changes between neighboring pixels. Conversely, blurry images have a more gradual change in grayscale values in regions of contrast changes. Thus, a threshold value for the derivative of the grayscale pixel value over pixel distance was used to remove blurry images. The sharp image dataset was then sampled to remove redundant frames. If the camera is stationary, the change in pixel value over time is small. Thus, stationary images can be removed by calculating the magnitude of the partial derivative of the grayscale pixel value over time.

Combining Images (Video Mosaicking)
Using the sampled frames from the video, we began to construct the debris field on the ocean floor. The composite image was acquired by combining several still images using our customized implementation of the auto-stitch algorithm (Brown and Lowe, 2007). For this method to work properly, either the camera has to only rotate or the scene has to be planar. Since the ROV changes position, we rely on the second assumption: that the scene, in this case the seabed, is roughly planar.

Local scale-invariant feature transform (SIFT) features in all images were identified (Lowe, 2004). Timestamps were present in every frame of the video, which created spurious features during a subsequent matching step. Because the time stamp does not occupy a fixed, real-world location, the pixel locations of the time stamp box were identified in every image, and any SIFT features located within this box were removed. The remaining features were saved per image. As adjacent images were paired, corresponding features were loaded and matched. Using the pixel coordinates of the matched features, an eight-degrees-of-freedom (8-DOF) homography matrix was calculated using least-squares error minimization. Outliers were eliminated using the Random Sample Consensus (RANSAC) algorithm (Fischler and Bolles, 1981). Using the transformation matrix, the paired frames of the video were stitched, and corresponding features were united. The stitched image and the united features were saved to be used during the next iteration. The overlapping regions of the stitched images were resolved by selecting the image that was visually sharper. This was achieved by comparing the magnitude of the gradient image and selecting the higher-magnitude image to lie on top of the lower-magnitude image. The advantage of this method of handling the overlapping regions is that it minimizes the error accumulated during each iteration due to resampling. To improve the visual quality of the stitched pairs, the time stamp region was modified to have an alpha transparency of zero. When the images are stitched, this allows non-time-stamped pixels of one image to overlap within the transparent region of the other image, thus filling in regions containing a timestamp with scene data (Figure 5). The pairwise SIFT matching and transformation process is inherently parallel, in which multiple pairs can be stitched concurrently on different processor cores.
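The adaptive sampling criteria described above — a spatial-gradient threshold to drop blurry frames and a temporal-difference threshold to drop redundant ones — can be sketched as follows. This is a minimal illustration on toy grayscale frames (lists of pixel rows); the threshold values are assumptions, since the paper does not report its exact settings.

```python
# Sketch of the adaptive sampling step (illustrative thresholds, not the
# paper's actual values).

def mean_abs_gradient(frame):
    """Mean absolute horizontal difference of grayscale values: large for
    sharp frames (abrupt contrast changes), small for blurry ones."""
    total = count = 0
    for row in frame:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count

def mean_abs_change(prev, curr):
    """Mean absolute per-pixel change between two frames: small when the
    camera is stationary, so the later frame is redundant."""
    total = count = 0
    for row_p, row_c in zip(prev, curr):
        for a, b in zip(row_p, row_c):
            total += abs(a - b)
            count += 1
    return total / count

def sample_frames(frames, sharp_thresh=10.0, motion_thresh=2.0):
    """Keep frames that are sharp and differ enough from the last kept frame."""
    kept = []
    for frame in frames:
        if mean_abs_gradient(frame) < sharp_thresh:
            continue  # blurry: derivative over pixel distance below threshold
        if kept and mean_abs_change(kept[-1], frame) < motion_thresh:
            continue  # stationary camera: temporal derivative below threshold
        kept.append(frame)
    return kept
```

In the real pipeline these statistics would be computed on full grayscale video frames; here they serve only to show the two filters applied in sequence.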


Figure 7. For comparison, sequential matching requires T(n − 1) time. Seven iterations from eight images are shown in this example.

Figure 5. The timestamp was removed and replaced with scene data during the masking and mosaicking process, allowing an unobstructed view of the circular object partially masked by the time stamp in the upper image.
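The masking behind Figure 5 can be sketched in two parts: discarding SIFT keypoints that fall inside the timestamp box before matching, and treating the box as fully transparent when compositing so scene pixels from the other image fill it in. The box coordinates and toy "images" below are hypothetical.

```python
# Sketch of the timestamp handling (hypothetical stamp-box coordinates).

STAMP_BOX = (0, 0, 200, 40)  # x0, y0, x1, y1 of the burned-in timestamp (assumed)

def outside_stamp(keypoints, box=STAMP_BOX):
    """Drop features inside the timestamp box: the stamp sits at a fixed
    screen position in every frame, so matches there are spurious."""
    x0, y0, x1, y1 = box
    return [(x, y) for (x, y) in keypoints if not (x0 <= x < x1 and y0 <= y < y1)]

def composite(top, bottom, box=STAMP_BOX):
    """Lay 'top' over 'bottom', but give the timestamp region of 'top' zero
    alpha so the scene pixels of 'bottom' show through."""
    x0, y0, x1, y1 = box
    return [[bottom[y][x] if x0 <= x < x1 and y0 <= y < y1 else px
             for x, px in enumerate(row)]
            for y, row in enumerate(top)]
```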

Figure 6. The pairwise matching, transformation, and stitching process for n images completes in T(log2(n)) time. Three iterations from eight images are shown in this example.
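The schedule in Figure 6 can be sketched with a toy "stitch" that simply concatenates the frame lists it is given; what matters here is the pairing pattern and the round count, not the image math.

```python
# Sketch of the pairwise stitching schedule from Figure 6.

def stitch(a, b):
    return a + b  # stands in for SIFT matching, homography warp, and blending

def pairwise_mosaic(images):
    """Stitch adjacent pairs each round; n images finish in ceil(log2(n))
    rounds, with any odd image carried into the next round unchanged."""
    rounds = 0
    while len(images) > 1:
        images = [stitch(images[i], images[i + 1]) if i + 1 < len(images)
                  else images[i] for i in range(0, len(images), 2)]
        rounds += 1
    return images[0], rounds
```

Eight single-frame "images" therefore finish in three rounds, matching the three iterations shown in the figure.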

Subsequent iterations of pairwise stitching were then performed using the new sets of stitched, paired images. This process continued until the image pairs were exhausted, thus generating the final, stitched, perspective image. As seen in Figure 6, log2(n) stitching iterations are performed for n images in the parallel processing method, compared with n − 1 iterations if the processing were sequential, as shown in Figure 7. While both approaches will result in a mosaicked image, the pairwise matching method utilizes fewer computational resources, and because it requires fewer steps, errors accumulated during image resampling at each step are reduced, thereby improving the final image quality.

Several potential man-made objects were identified in the video by members of the team familiar with the model of aircraft that was the subject of the search, as well as engineers, marine biologists, and a coral ecologist. Front and rear landing gear were tentatively identified, along with an obviously man-made rope. Figure 8 shows the entire scene of interest containing the proposed front and rear landing gears and the rope in a single perspective image. This figure was stitched from 64 sampled frames of the video, which never contained the front landing gear and the rope within a single frame.

Figure 8. Scaled stitched composite from 64 images showing the debris field with the proposed front and rear landing gear in a single perspective image.

In the following sections, we assess the likelihood that these two pieces of debris were actually the landing gear from the crashed plane.
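The registration step underlying these mosaics — fitting an 8-DOF homography to matched feature coordinates — can be sketched with a direct linear solve. The paper fits many RANSAC-filtered matches by least squares; four exact correspondences keep the illustration small and need only Gaussian elimination.

```python
# Sketch of the 8-DOF homography fit from point correspondences.

def solve(A, b):
    """Gaussian elimination with partial pivoting for the 8x8 linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_homography(src, dst):
    """Fit H (3x3, h33 = 1) mapping four src (x, y) points to dst points:
    each correspondence gives two linear equations in the eight unknowns."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]
```

With RANSAC, this fit would be repeated on random minimal subsets of the matches, keeping the homography that best agrees with the remaining matches.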


Theory/Calculation
To assist with using the stitched images to match debris-field objects with known man-made objects, CAD models of the front and rear landing gear were created based on measurements taken from an extant Lockheed Electra Model 10E, and the rear tire was modeled using dimensions specified on the Earhart aircraft inspection report, which identified the rear wheel as a Goodyear 3TWA with a four-ply Goodyear 16×7 tire. The stitched image was placed as a background image within SolidWorks. These CAD models were then visually superimposed on the stitched images to match the pose of the features in the underwater video. The diameter of the rope was then measured within SolidWorks, assuming that the rope and the features lay in the same plane relative to the mosaic, which defined the coordinate system relative to which measurements were made. Using the CAD rear landing gear as a size reference, the rope was measured at five locations and identified to have a diameter of 15.5 mm, with a standard deviation of 0.3 mm (Figure 9). Using the CAD front landing gear as a size reference, the rope was again measured at five locations and identified to have a diameter of 18.0 mm, with a standard deviation of 0.8 mm (Figure 10).

Figure 9. Using the CAD rear landing gear as a size reference, the rope was identified to have a diameter of 15.5 ±0.3 mm.

Figure 10. Using the CAD front landing gear as a size reference, the rope was identified to have a diameter of 18.0 ±0.8 mm.

ROV Claw
An opportunity arose during analysis of the video to retrospectively create a scale marker based on a man-made object, even though no markers were intentionally used to shoot the video. An ROV articulating claw was observed in the video grasping a rope. The dimensions of the claw were initially unknown, but information about the ROV was discovered after reviewing data embedded in the RAW video. The video data contained information that allowed us to identify the model of ROV used. The articulating claw seen in the video was then identified based on discussions with the submersible manufacturer. A reference claw was then acquired from the ROV manufacturer and analyzed, measured, and photographed. The claw was then compared against ropes of differing diameters (Figure 11). Of the three diameters tested, the 15.9 mm diameter rope most closely matched the video scene where the claw grabbed the rope.

A second estimate of the rope diameter was made using video footage of the claw. Using known dimensions of the claw, scale references were applied to a still image from the ROV video of the claw (Figure 12). The camera's principal axis is approximately perpendicular to the rope. The rope, together with the camera's principal axis, defines a depth plane. Dimensions of cross-sections of objects cut by this plane are photogrammetrically comparable. Note that dimensions across different depth planes cannot be compared because the magnification factor is unknown. The cross-section dimension of the lightening holes of the ROV claw at the depth plane of the rope was measured to be approximately 15.24 mm, which provides scale to the image. The estimated rope diameter was then calculated to be approximately 15.24 mm.

Method Verification
A rear landing gear was acquired from an extant Lockheed Electra Model 10E, construction number 1042. This exemplar landing gear was posed and photographed from a similar perspective to the unidentified object in Figure 9.


Figure 11. Varying rope diameters (from left to right: 19.1 mm, 15.9 mm, 12.7 mm) were compared against the articulating claw.


Figure 12. Image-based rope dimensional analysis from the video.
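The measurement behind Figure 12 reduces to fixing a millimetres-per-pixel factor from an object of known size lying in the rope's depth plane, then converting the rope's pixel width. The pixel counts below are hypothetical; only the 15.24 mm lightening-hole dimension comes from the analysis.

```python
# Sketch of the scale-marker conversion (hypothetical pixel measurements).

def scale_from_reference(ref_size_mm, ref_size_px):
    """Millimetres per pixel in the depth plane of the reference object."""
    return ref_size_mm / ref_size_px

def measure(size_px, mm_per_px):
    """Convert a pixel span in the same depth plane to millimetres."""
    return size_px * mm_per_px

# The claw's lightening hole (15.24 mm) spanning an assumed 48 px fixes the
# scale; a rope spanning the same 48 px then measures about 15.24 mm.
rope_mm = measure(48.0, scale_from_reference(15.24, 48.0))
```

As the text notes, this conversion is only valid for objects cut by the same depth plane, since the magnification factor differs between planes.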

The modern tire on this exemplar landing gear is different from the historical tire on Amelia Earhart's Lockheed Electra, so for this part of the analysis, a CAD model was created based on measurements taken from the exemplar tire, which measured approximately 343 × 178 mm. A 15.9 mm diameter manila rope was laid next to the rear landing gear for comparison.

For a qualitative assessment, the image of the exemplar rear landing gear with the 15.9 mm diameter rope was scaled and superimposed on a single-perspective digital mosaic created from the ROV video (Figure 13) for a visual comparison. This was performed by scaling the tire from the real landing gear to the size of the purported tire seen in the ROV video.

For a quantitative assessment, dimensional analysis using a photograph of the exemplar rear landing gear and a CAD model of the exemplar gear was performed within SolidWorks in the same manner as for the ROV images. The rope was measured at five locations within SolidWorks and identified as having a diameter of 15.5 mm, with a standard deviation of 0.5 mm (Figure 14). The known rope diameter, as measured with digital calipers, was 15.9 mm.

Results
Using the CAD model of the front landing gear superimposed on the stitched image from the ROV video, the diameter of the rope was measured at five locations, yielding an average diameter of 18.0 mm with a standard deviation of 0.8 mm. Similarly, using the CAD model of the rear landing gear superimposed on the stitched image, the diameter of the rope was measured at five locations, yielding an average diameter of 15.5 mm with a standard deviation of 0.3 mm. Dimensional analysis of the video which contained the ROV's claw identified the rope diameter as approximately 15.24 mm.

Discussion
While scaled orthogonal projections yield true dimensions along the plane of the projection, it is not always feasible to generate an orthogonal image, and a perspective image must be used. In this case study, the camera is specified to have a focal length between 5.1 mm and 51.0 mm; the focal length at the time of capture is unknown but constant.


Figure 13. A photograph of the rear landing gear from the same model aircraft next to a 15.9 mm diameter manila rope is superimposed over a stitched image, adjacent to the location of the suspected rear landing gear.

The object-to-camera distance was similarly unknown due to the lack of any known scale markers in the scene, although a visual estimation puts the camera at a height of approximately 3.0 m to 4.5 m. Due to the apparently large distance between the camera and the seabed, the perpendicular perspective, the objects of interest lying near the center of the perspective view, and a predominantly flat scene, it was not anticipated that differences between perspective and orthogonal projections would yield significantly different results. The validity of this assumption for this specific case was verified by photographing an exemplar landing gear from an extant Lockheed Electra Model 10E, construction number 1042, and a rope with a known diameter, from the same perspective as in the mosaicked image from the ROV video, and comparing measurements from the photograph with known dimensions of the landing gear and rope. The known 15.9 mm diameter rope was measured to have a diameter of 15.5 mm with a standard deviation of 0.5 mm using this CAD model superimposition method, which is sufficiently accurate for the purpose of this analysis.

Figure 14. The diameter of the 15.9 mm rope was measured at five locations in the CAD model overlay, using the actual rear landing gear from a reference aircraft. The rope diameter was identified using this method to be 15.5 mm with a standard deviation of 0.5 mm.

The limits of video recording are problematic for many industries. With only the small field of view visible in each frame of a video, much of the spatial context of each frame location is lost. Using only a single camera, there are few options for improvement. Through use of our mosaics, we were able to observe multiple objects of interest in a common field of view. Our method put the selected images into the same perspective, giving unique advantages without physically disturbing the environment. It also addressed the issue of a time stamp or other embedded watermark obscuring objects of interest.

In software engineering, it is increasingly vital to program using parallel processes to improve performance due to the increasing core count of modern CPUs. Image analysis can be a highly parallel process, as seen in the SIFT algorithm. In our method, the images were selected in a pairwise manner for stitching, rather than sequentially. Pairwise stitching completes in log2(n) stitching iterations for n images, while sequential stitching requires n − 1, a marked improvement. Not only is processing time improved, but quality as well. Error is propagated at each iteration by the resampling of the images after each transformation. In the sequential case (Figure 7), the first image is stitched n − 1 times. In the pairwise case (Figure 6), the first image is stitched log2(n) times, thus drastically reducing propagated error for large datasets.
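The resampling counts in this comparison can be made concrete; the toy counters below simply restate the two schedules from Figures 6 and 7.

```python
import math

# How many times the first source frame is warped (resampled) per schedule.

def resamples_sequential(n):
    """The running mosaic contains frame 1, so it is re-warped at every step."""
    return n - 1

def resamples_pairwise(n):
    """Frame 1 joins one stitch per round, and there are ceil(log2(n)) rounds."""
    return math.ceil(math.log2(n))
```

For the 64 frames used in Figure 8, that is 63 resamplings sequentially versus 6 pairwise.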

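The pairwise registration step just described — matching local features between two frames and then estimating a geometric transform robustly — is commonly implemented with RANSAC (Fischler and Bolles, 1981). The following is an illustrative, self-contained sketch, not the authors' code: synthetic point pairs stand in for matched SIFT keypoints, and a 2-D affine transform stands in for the stitching transform.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform M (3x2) so that [x y 1] @ M ~ [x' y']."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(pts, M):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

def ransac_affine(src, dst, iters=200, tol=2.0, rng=None):
    """RANSAC: fit on minimal 3-point samples, keep the model with the
    most inliers, then refit on the full inlier set."""
    if rng is None:
        rng = np.random.default_rng(0)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(apply_affine(src, M) - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    return fit_affine(src[best], dst[best]), best

# Synthetic correspondences standing in for matched SIFT keypoints:
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(40, 2))
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = src @ R.T + np.array([5.0, -3.0])
dst[30:] += rng.uniform(20, 50, size=(10, 2))  # 10 gross outlier matches

M, inliers = ransac_affine(src, dst)
print(inliers.sum())  # 30 true correspondences recovered for these data
```

With real imagery, `src` and `dst` would come from a SIFT detector and matcher, and `tol` would be set in pixels; the consensus step is what lets the transform survive the mismatches that low-contrast underwater frames inevitably produce.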
230 March 2016 PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING
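The rope-diameter figures reported earlier (a 15.5 mm mean with a 0.5 mm standard deviation, against a 15.9 mm nominal diameter) summarize five overlay measurements. The individual readings were not published, so the values below are hypothetical, chosen only so the summary statistics match the reported ones:

```python
from statistics import mean, stdev

# Hypothetical overlay readings (mm); the paper reports only the summary
# statistics, not the five individual measurements.
readings_mm = [15.0, 16.1, 15.2, 16.0, 15.2]
nominal_mm = 15.9  # known rope diameter

m, s = mean(readings_mm), stdev(readings_mm)
error_pct = 100 * (m - nominal_mm) / nominal_mm
print(f"mean = {m:.1f} mm, s.d. = {s:.1f} mm, error = {error_pct:+.1f}%")
```

An error on the order of 2 to 3 percent against the nominal diameter is what underwrites the claim that the superimposition method is sufficiently accurate for ratiometric comparison.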

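Choosing which of two aligned frames should supply the pixels in an overlap region can be automated with a sharpness score; mean gradient magnitude is one simple proxy. The sketch below is a minimal illustration of that idea, not the authors' implementation:

```python
import numpy as np

def sharpness(img):
    """Mean gradient magnitude as a simple sharpness proxy."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.hypot(gx, gy).mean())

def overlap_winner(a, b):
    """Return the image whose pixels should be kept in the overlap region."""
    return a if sharpness(a) >= sharpness(b) else b

# Demo: a high-contrast checkerboard vs. a smooth ramp of equal intensity range.
yy, xx = np.indices((64, 64))
checker = 255.0 * (((yy // 8) + (xx // 8)) % 2)
ramp = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
winner = overlap_winner(checker, ramp)
print(sharpness(checker) > sharpness(ramp))  # True: checkerboard scores higher
```

In practice the comparison would run on the cropped overlap region of each warped frame rather than on whole images.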
Another improvement to image quality was made while determining how to handle the overlapping region of two images during the stitching process. The overlapping region of the transformed, paired images was determined by selecting the image which was visually sharper. This was achieved by comparing the magnitudes of the gradient images and selecting the higher-magnitude image to lie on top of the lower-magnitude image. The primary advantage of this method of handling the overlapping regions is that it minimizes the error accumulated during each iteration due to resampling.

Using video of the ROV grasping the rope, and frames extracted from the video, we calculated the diameter of the rope. This feature was selected because it could be used as a scale marker for the debris field due to its consistent, man-made geometry. The frames were then stitched together into one mosaicked image with the rope spanning the length of the debris field in view. We could then relate the size of other objects back to the rope and produce meaningful measurements and ratios. After the proposed pieces of wreckage were measured, we compared the CAD models to historical photos of the aircraft and a historical photo of rope adjacent to Amelia Earhart's aircraft. In the historical photograph of the rope adjacent to Earhart's aircraft, and using the same methods described above, the rope was measured in six locations and calculated to have an average diameter of 16.26 mm with a standard deviation of 1.02 mm, which is consistent with the average diameter of 16.25 mm observed across the three independent measurements made from the underwater video (ROV claw, near front landing gear, near rear landing gear). The rope material has not yet been identified, and any age-related swelling of the rope is currently unknown. This methodology could be applied to other retrospective video analyses where an inadvertent, man-made scale marker can be used to ratiometrically compare purported objects of known dimensions.

Differentiating man-made objects from protrusions of the seabed is also an important step in identifying wreckage. The debris field is located on a steep incline and was searched at a depth of 150 to 300 m. At the depth of the debris field, known as the dysphotic zone, the intensity of sunlight is reduced by over 99 percent and the visible spectrum is extremely limited (Lorenzen, 1972). This phenomenon, coupled with growth and sediment covering, makes identifying specific objects difficult. Having a view of the context of the landscape can aid in identifying irregularities in shape, texture, and color for both wreckage recovery and ecological research.

Conclusions
Using the method outlined in this paper, it is possible in a retrospective analysis to create a single-perspective mosaic of a debris field from video, and to identify obscured objects when no scale marker was used. Processing the data in the manner described can give insight into the relative size and shape of objects within the context of the entire debris field. Even when planned scale markers are absent, scale markers can potentially be applied to the entire debris field using identifiable man-made objects present in the video, and used to construct a larger, more accurate representation of the marine landscape. Ratiometric comparisons between CAD models of the purported objects and the inadvertent scale markers can help assess the likelihood that the underwater objects are the purported objects. Visual detection of debris can be greatly improved with a wider field of view than that of a single camera, which can be created using our stitching algorithm. The method presented outlines a cost-effective and noninvasive strategy to document an underwater landscape. The data are detail-rich and accurate compared to other noninvasive search methods. Ultimately, we found that the method presented in this paper positively identified the potential landing gear as a good geometric and ratiometric match for the Earhart Lockheed Electra Model 10E, and that the results merit further retrieval and analysis efforts.

Acknowledgments
Funding for this project was provided by Timothy Mellon. The authors would like to thank Grace McGuire for her kindness in photographing and lending components from her Lockheed Electra 10E for our analysis.

References
Agisoft Photoscan, 2016. URL: http://www.agisoft.com, Agisoft LLC, 11 Degtyarniy per., St. Petersburg, Russia, 191144 (last date accessed: 15 January 2016).
Brown, M., and D.G. Lowe, 2007. Automatic panoramic image stitching using invariant features, International Journal of Computer Vision, 74(1):59–73.
Campos, R., R. Garcia, P. Alliez, and M. Yvinec, 2014. A surface reconstruction method for in-detail underwater 3D optical mapping, The International Journal of Robotics Research, 34(1):64–89.
Canciani, M., P. Gambogi, F. Romano, G. Cannata, and P. Drap, 2003. Low cost digital photogrammetry for underwater archaeological site survey and artifact insertion, The case study of the Dolia wreck in Secche della Meloria, Livorno, Italia, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 34:95–100.
Drap, P., J. Seinturier, D. Scaradozzi, P. Gambogi, L. Long, and F. Gauch, 2007. Photogrammetry for virtual exploration of underwater archeological sites, Proceedings of the 21st International Symposium, CIPA 2007: AntiCIPAting the Future of the Cultural Past, 01–06 October 2007, Athens, Greece.
Fischler, M.A., and R.C. Bolles, 1981. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM, 24(6):381–395.
Gracias, N.R., S. Van der Zwaan, A. Bernardino, and J. Santos-Victor, 2003. Mosaic-based navigation for autonomous underwater vehicles, IEEE Journal of Oceanic Engineering, 28(4):609–624.
Gracias, N.R., and J. Santos-Victor, 2001. Trajectory reconstruction with uncertainty estimation using mosaic registration, Robotics and Autonomous Systems, 35(3):163–177.
Harris, C., 1993. Tracking with rigid models, Active Vision, MIT Press, pp. 59–73.
Hohle, J., 1971. Reconstruction of underwater object, Photogrammetric Engineering & Remote Sensing, 37(9):949–954.
Leatherdale, J.D., and D.J. Turner, 1991. Operational experience in underwater photogrammetry, ISPRS Journal of Photogrammetry and Remote Sensing, 46(2):104–112.
Lirman, D., N.R. Gracias, B.E. Gintert, A.C.R. Gleason, R.P. Reid, S. Negahdaripour, and P. Kramer, 2007. Development and application of a video-mosaic survey technology to document the status of coral communities, Environmental Monitoring and Assessment, 125(1-3):59–73.
Lorenzen, C.J., 1972. Extinction of light in the ocean by phytoplankton, Journal du Conseil, 34(2):262–267.
Lowe, D.G., 2004. Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60(2):91–110.
Maas, H.G., and U. Hampel, 2006. Photogrammetric techniques in civil engineering material testing and structure monitoring, Photogrammetric Engineering & Remote Sensing, 72(1):39–45.
Moravec, H.P., 1977. Towards automatic visual obstacle avoidance, Proceedings of the 5th International Joint Conference on Artificial Intelligence - IJCAI'77, 2:584.
Negahdaripour, S., and A. Khamene, 2000. Motion-based compression of underwater video imagery for the operations of unmanned submersible vehicles, Computer Vision and Image Understanding, 79(1):162–183.


Ruppé, C.V., and J. Barstad, 2002. International Handbook of Underwater Archaeology, Springer.
Santos-Victor, J., and J. Sentieiro, 1994. The role of vision for underwater vehicles, Proceedings of the 1994 Autonomous Underwater Vehicle Technology Symposium - AUV'94, pp. 28–35.
Telem, G., and S. Filin, 2010. Photogrammetric modeling of underwater environments, ISPRS Journal of Photogrammetry and Remote Sensing, 65(5):433–444.
Telem, G., and S. Filin, 2013. Photogrammetric modeling of the relative orientation in underwater environments, ISPRS Journal of Photogrammetry and Remote Sensing, 86:150–156.
Zhukovsky, M.O., V.D. Kuznetsov, and S.V. Olkhovsky, 2013. Photogrammetric techniques for 3-D underwater record of the antique time ship from Phanagoria, ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 1(2):717–721.

(Received 23 April 2015; accepted 09 December 2015; final version 21 December 2015)


