Vision Based Underwater Environment Analysis: A Novel Approach to Estimate Size of Coral Reefs

1Abhishek Maurya, 2Rajat Govekar, 3Pramod Maurya and 4Anirban Chatterjee
1,2,4 Department of Electronics and Communication Engineering, National Institute of Technology Goa, Farmagudi, Ponda, Goa-403401, India
3 Department of Marine Instrumentation, National Institute of Oceanography, Dona Paula, Goa-403004, India
E-mail: [email protected], [email protected], [email protected], [email protected]

Abstract: This work aims to detect coral reefs underwater and estimate their size. A deep learning architecture is proposed to detect the coral reefs, and the size of each detected coral is then estimated using a distance-based algorithm. The distance-dependent change in apparent image size is formulated as a non-linear function and optimized accordingly. The algorithm is applicable to real-time underwater environment analysis, and the analysis shows that the proposed method accurately estimates the area occupied by coral reefs underwater.

Keywords: MobileNet Architecture, Coral Reefs, Pixel-per-metric Ratio, Normalized Coordinates, Patches, Convolutional Neural Network.

1. Introduction
Coral reefs are among the world's most biologically diverse marine ecosystems. Ecologically, coral reefs are important for species diversity and bio-productivity in the ocean; they are often regarded as the marine counterpart of the tropical rainforest. Over the past decades, the wide diversity of animal and plant species contributing to the reef system, and its genetic heritage, has been increasingly at risk. Coral reefs allow the formation of associated ecosystems, which enable basic habitats and livelihoods to be created. It is therefore important to monitor the state of coral reefs. Automated analysis of underwater imagery would enhance the investigation of coral reef dynamics and processes by reducing the analytic bottleneck and allowing researchers to make full use of large image data sets. Alsmadi et al. [1] contributed to the recognition and classification of fish from digital images and developed a novel prototype for fish recognition using global feature extraction, image segmentation and geometric parameters. Beijbom et al. [2] implemented a multi-scale algorithm using color and texture descriptors to better estimate the area of the reef covered by corals, sand and algae. Clement et al. [3] presented and evaluated the Local Binary Pattern, a simple but powerful feature descriptor, for classifying Crown-of-Thorns Starfish in images; at higher altitudes it had relatively lower accuracy [3]. Padmavathi et al. [4] presented an algorithm combining Kernel Principal Component Analysis and SIFT, which provides better accuracy for the classification of extracted underwater images. Purser et al. [5] presented results showing that a machine-learning approach is suitable for estimating cold-water coral habitat coverage. Katy et al. [6] presented a method for recognizing fish species using background segmentation, adaptive-scale key-point selection and Opponent SIFT description, with a binary linear Support Vector Machine classifier learned for each species.
In his work, J. Matai et al. [7] emphasized efforts to automate the fish detection and recognition process from a video or still camera source using computer vision algorithms such as SIFT. Qiang et al. [8] suggested integrating the cascade-of-rejectors approach with Histograms of Oriented Gradients (HOG) features to achieve a rapid and accurate detection system, and concluded that HOG gave better results than SIFT. Sebastien Villon et al. [9] combined two methods for detecting coral reef fish, namely traditional HOG + SVM classification and deep-learning-based fish detection, and found that deep learning was more accurate than the earlier method. The aim of this work is to analyze the underwater environment. To detect coral reefs and sand patches, a Convolutional Neural Network [10] (MobileNet) is implemented. A distance-based algorithm is used to estimate the size of the detected corals. Coral reef detection and size estimation were also carried out in real time.

2. Methods
Analysis of the underwater coral reef was carried out by first acquiring images of the underwater environment. Different coral types were detected using the MobileNet architecture, which uses depthwise separable convolutions to produce faster results. The area covered by coral reefs is obtained using the pixel information from detection, which also accounts for non-linear effects due to water. The analytical methods involved are discussed below.

Fig. 1. Methods of detection of underwater coral reefs.

2.1. Image Acquisition
The underwater environment was captured at an approximate height of 3.5 ft by taking video with a GoPro Hero 4 camera. The environment contains sand patches, fish and various types of living coral reefs. Frames were extracted from the video using the OpenCV library in Python. Extraction was done at 20 frames per second to avoid extracting overlapping, near-identical images.

2.2. Image Enhancement

Underwater images are enhanced by Contrast Limited Adaptive Histogram Equalization (CLAHE) [11]. This approach significantly improves the visual quality of underwater images by enhancing contrast, as well as reducing noise and artifacts.

2.3. Detection of coral reefs and sand patches
Sand patches and coral reefs are detected using a convolutional neural network architecture. MobileNet was found to be best suited for underwater imaging. The MobileNet model is based on depthwise separable convolutions, a form of factorized convolution that factors a standard convolution into a depthwise convolution and a 1×1 convolution called a pointwise convolution. A standard convolution layer is parameterized by a DK × DK × M × N convolution kernel, where DK is the spatial dimension of the kernel (assumed square), M is the number of input channels and N is the number of output channels. The architecture of MobileNet is defined in [10]. All layers are followed by a ReLU non-linearity except the final fully connected layer, which has no non-linearity and feeds into a softmax classification layer. Down-sampling is handled by strided convolution in the depthwise convolutions as well as in the first layer. A final average pooling before the fully connected layer reduces the spatial resolution to 1.

The architecture used in MobileNet [10] to extract image-frame features is employed in this work before the features are passed to the SSD detection network. The first column shows the stride used by the convolving filter, with s1 denoting a stride of one and s2 a stride of two. The second column shows the size of the filter used for convolution (dw = depthwise). The third column shows the input size in pixels for each convolution. All convolution layers are followed by Batch Normalization and ReLU, as shown in Fig. 2.


Fig. 2. Different convolution layers. (a) Standard convolution layers. (b) Depthwise Separable convolution layers.

The Standard convolution layers and the Depthwise Separable convolution layers used in the MobileNet architecture are shown in Fig. 2 (a) and (b) respectively. The MobileNet architecture is fast in its computation because the depthwise convolution requires fewer operations. In the Depthwise Separable convolution layers, a depthwise convolution is performed first with Batch Normalization and ReLU, followed by a 1×1 pointwise convolution with Batch Normalization and ReLU.
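The saving can be checked directly from the cost expressions in [10]: a standard convolution costs DK·DK·M·N·DF·DF multiply-adds on a DF × DF feature map, while the depthwise-plus-pointwise pair costs DK·DK·M·DF·DF + M·N·DF·DF, a reduction of exactly 1/N + 1/DK². A small sketch (the layer sizes are illustrative):

```python
def standard_cost(dk, m, n, df):
    """Multiply-adds of a standard DK x DK convolution on a DF x DF map."""
    return dk * dk * m * n * df * df

def separable_cost(dk, m, n, df):
    """Depthwise (DK x DK per channel) plus 1x1 pointwise convolution."""
    return dk * dk * m * df * df + m * n * df * df

# Example layer: 3x3 kernel, 32 input / 64 output channels, 112x112 feature map
dk, m, n, df = 3, 32, 64, 112
ratio = separable_cost(dk, m, n, df) / standard_cost(dk, m, n, df)
# ratio equals 1/n + 1/dk**2, i.e. roughly an 8x reduction for a 3x3 kernel
```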

2.4. Size Estimation After coral reef detection, the coordinates output by the neural network are used to create a bounding box around each detected coral reef. The normalized coordinates are converted to absolute pixel values, and the number of pixels occupied by the coral's length and breadth is found.
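A sketch of this conversion from normalized detection-box coordinates to absolute pixels is given below; the [ymin, xmin, ymax, xmax] ordering follows the common SSD/TensorFlow convention and is an assumption here, as are the example values.

```python
def box_to_pixels(box, img_w, img_h):
    """Convert a normalized [ymin, xmin, ymax, xmax] box to absolute pixels
    and return the width and height (in pixels) the object occupies."""
    ymin, xmin, ymax, xmax = box
    left, right = xmin * img_w, xmax * img_w
    top, bottom = ymin * img_h, ymax * img_h
    return right - left, bottom - top   # pixels lengthwise, breadthwise

# Example on a 1920x1080 frame (box values illustrative):
w_px, h_px = box_to_pixels([0.25, 0.10, 0.75, 0.40], 1920, 1080)
# w_px = (0.40 - 0.10) * 1920 = 576.0 ; h_px = (0.75 - 0.25) * 1080 = 540.0
```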

2.4.1. Experimental setup and data

A graph of the pixel-per-metric ratio against distance is plotted to make the algorithm adaptive to different distances in underwater images. The following experiment was performed to collect sample points.

1. Setup:

A scale of ‘S’ cm (known size) is placed in a tumbler of water at a distance of ‘D’ cm (known distance). A GoPro Hero 4 is used to take an image of the scale with the lens inside the water at the distance ‘D’ cm. The experiment is repeated with different distances between the lens and the scale to observe how the apparent size of the object varies with distance. Since the distances are varied, the experiment also accommodates the non-linear effect of water in underwater images.

2. Image Processing:

The captured image is resized to 400×300 pixels. Using the OpenCV library in Python, the Canny edge-detection algorithm is applied to the image to detect the edges of the scale. Dilation and erosion operations are performed to close the gaps between edges. Using the coordinates of the contours, the number of pixels spanned by the scale (lengthwise and widthwise) is recorded. This is repeated for all images at different distances.

3. Data Acquisition:

Let the number of pixels spanned by the scale lengthwise be ‘L’. Then Pixel_Per_Metric = S ÷ L for the distance ‘D’, with the image taken from the GoPro Hero 4. The pixel-per-metric value is calculated for different distances, and a graph of Pixel_Per_Metric vs Distance is plotted.

Using the obtained values, a sigmoidal 5PL function is used and fitted optimally as shown in Fig. 3.

Formula used for fitting (Sigmoidal 5PL):

y = d + (a − d) / (1 + (x / c)^b)^m    (1)

Obtained values after optimization: a = 20.7794; b = 32.8038; c = 10.5057; d = 0.023534; m = 0.0276115.
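Equations (1) and (2) can be combined into a short sketch using the constants reported above. The size formula follows equation (2) exactly as printed (size = pixels ÷ ratio), and the units of the fitted constants are those of the experiment.

```python
# Sigmoidal 5PL pixel-per-metric model, eq. (1), with the fitted constants
# reported in the text; estimate_size then applies eq. (2).
A, B, C, D, M = 20.7794, 32.8038, 10.5057, 0.023534, 0.0276115

def pixel_per_metric(distance):
    """Eq. (1): y = d + (a - d) / (1 + (x / c)**b)**m."""
    return D + (A - D) / (1.0 + (distance / C) ** B) ** M

def estimate_size(pixels, distance):
    """Eq. (2): object size = pixels taken / pixel-per-metric ratio."""
    return pixels / pixel_per_metric(distance)
```

In practice the constants would be recovered by fitting equation (1) to the measured (distance, ratio) pairs, e.g. with `scipy.optimize.curve_fit`; the sketch above simply evaluates the reported fit. At x = c the model evaluates to d + (a − d) / 2^m ≈ 20.39.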

Real object size is obtained using equation 2.

Object_size (cm) = Object_size (pixels) ÷ Pixel_per_metric    (2)

Fig. 3. Variation of Pixel_per_Metric ratio with Distance.

2.4.2. Size Estimation Algorithm

Algorithm: Distance based size estimation

Input: Distance, pixel values
Output: Object size
Steps:
Step 1: The normalized pixel coordinates obtained are converted to absolute pixels.
Step 2: The numbers of pixels covered by the object in length and in width are found from the absolute pixel information.
Step 3: The Sigmoidal 5PL function y = d + (a − d) / (1 + (x / c)^b)^m is applied to the pixel-per-metric values at different distances.
Step 4: The function is optimized and the values of the constants are obtained such that the curve is fitted accurately.
Step 5: An accurate pixel-per-metric ratio is obtained for the desired distance.
Step 6: The size of the object is obtained from the pixels covered and the pixel-per-metric ratio, using equation (2).

Fig. 4. Flowchart of Coral reef size estimation.

Fig. 4 shows the detailed flowchart of size estimation of coral reef. Normalized coordinates are pixel values of detection boxes in an image frame.

3. Results & Discussion
The convolutional neural network was trained on Google Colab, which provides a Tesla K80 GPU with 2496 CUDA cores and 12 GB of GDDR5 VRAM, a single-core hyper-threaded Xeon processor (1 core, 2 threads) at 2.3 GHz, and 12.6 GB of RAM. Training the network took 2 hours.

Fig. 5. Coral reef detection.

Fig. 6. Elkhorn coral detection.

Fig. 5 shows the detection of coral reef patches in an underwater image frame. Starting from the top, the first patch is detected with 96% confidence, and another two patches are detected with 93% and 90% respectively. Fig. 6 shows a coral reef detected in a different data set, with 95% confidence, signifying that the trained model is accurate.

The results of training and testing the neural network were as follows:
1. Number of epochs: 5000; validation loss: 1.79.
2. Average training accuracy: 100%.
3. Average test accuracy: 93%.

M. Marcos et al. [12] used a feedforward backpropagation neural network to classify close-up images of coral reef components into three benthic categories (living coral, dead coral and sand), reaching a success rate of 86.7%, while their two-step classifier reached 79.7%. A. Shihavuddin et al. [13] achieved an overall accuracy of 85.5% on the Moorea Labeled Corals (MLC) dataset. The network trained on our dataset outperformed these previously implemented coral reef detection techniques.

Fig. 7. Coral Reef detection.

Fig. 8. Table coral detection.

Fig. 7 shows the detection of brain corals in an underwater image frame; the estimated area covered by the coral reef is tabulated below. The first patch shows a detected reef with 95% confidence, and the second patch is detected with 85% confidence. The standard size of brain coral ranges up to 1.8 m (5.9 ft); after applying the proposed size-estimation algorithm, the size is estimated as 1.72 m, which is consistent with this standard size and verifies the result. Fig. 8 shows a detected table coral, a species of the genus Acropora with a standard width of up to 2.2 m. After applying the algorithm, the size of the table coral in Fig. 8 is estimated as 2.48 m, which again validates the result.

Table 2: Estimated Size of Coral Reefs

Reef No. | Length (px) | Width (px) | Length (cm) | Width (cm) | Total Area Covered (cm²)
1        | 342.09      | 338.60     | 97.113      | 96.122     | 9334.74
2        | 426.60      | 605.97     | 121.124     | 172.05     | 20839.72
3        | 397         | 875        | 112.72      | 248.43     | 28003.97

Table 2 shows the analysis of the coral reef patches detected in an underwater image frame, in terms of the numerical values used to find the area covered in a real scenario. On a scale of 1920 pixels lengthwise and 1080 pixels breadthwise, the pixel columns give the number of pixels occupied by each detected coral reef patch. Using the algorithm with the other known parameters, the size of each coral (length and width) is estimated. The area covered by a coral reef patch is estimated as the product of the length and width of the detected patch. In this table, reefs no. 1 and 2 are from Fig. 7 and reef no. 3 is from Fig. 8.

Table 3: Estimated length of coral vs actual length

Coral type           | Estimated length (m) | Actual length (m) | Percentage Error
Brain coral (Fig. 7) | 1.72                 | 1.8               | 4.65%
Table coral (Fig. 8) | 2.48                 | 2.2               | 12%
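Since the area is stated to be the product of the estimated length and width, the Total Area column of Table 2 can be reproduced directly from the length and width columns, a quick consistency check on the tabulated values:

```python
# (length_cm, width_cm, reported_area_cm2) for reefs 1-3 of Table 2
rows = [
    (97.113, 96.122, 9334.74),
    (121.124, 172.05, 20839.72),
    (112.72, 248.43, 28003.97),
]
for length, width, reported in rows:
    # area is the product of estimated length and width; small differences
    # come from rounding of the tabulated values
    assert abs(length * width - reported) < 2.0
```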

Table 3 compares the estimated and actual lengths of corals in the underwater system. The error is found to be small.

4. Conclusion Coral reef analysis in 400×300 image frames is performed at 2 fps on frames extracted from underwater environment video. In the MobileNet architecture, depthwise separable convolution is applied before features are given to the fully connected layers, reducing the complexity of feature extraction. The detected coral reefs provide information about the pixels occupied lengthwise and breadthwise in an image, which is used to estimate size. A non-linear function mapping the pixel-per-metric ratio to distance is optimized to give a near-accurate ratio value at the desired distance. The observations obtained using this algorithm provide reliable values of the area covered by coral reefs in an image frame.

5. References

[1] Alsmadi, M.K.S., K.B. Omar, S.A. Noah and I. Almarashdeh, “Fish recognition based on the combination between robust features selection, image segmentation and geometrical parameters techniques using artificial neural network and decision tree,” International Journal of Computer Science and Information Security, vol.6, pp. 215-221, 2009.

[2] O. Beijbom, P. J. Edmunds, D. I. Kline, B. G. Mitchell and D. Kriegman, “Automated annotation of coral reef survey images,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 1170– 1177, 2012.

[3] R. Clement, M. Dunbabin, and G. Wyeth, “Toward Robust Image Detection of Crown-of-thorns Starfish for Autonomous Population Monitoring,” Australasian Conference on Robotics & Automation (ACRA), 2005.

[4] G. Padmavathi, M. Muthukumar, and S. Thakur, “Kernel Principal Component Analysis Feature Detection and Classification for Underwater Images,” 3rd International Congress on Image and Signal Processing (CISP), vol. 2, pp. 983-988, Oct. 2010.

[5] A. Purser, M. Bergmann, T. Lundälv, J. Ontrup , T.W. Nattkemper, “Use of machine-learning algorithms for the automated detection of cold-water coral habitats: a pilot study,” Marine Ecology Progress Series, vol. 397, pp. 241-251, 2009.

[6] Katy Blanc, Diane Lingrand, and Frédéric Precioso, “Fish Species Recognition from Video using SVM Classifier,” 3rd ACM International Workshop on Multimedia Analysis for Ecological Data (MAED '14), pp. 1-6, 2014.

[7] J. Matai, R. Kastner, G.R. Cutter and D.A. Demer, “Automated Techniques for Detection and Recognition of Fish using Computer Vision Algorithms,” National Marine Fisheries Service Automated Image Processing Workshop, pp. 35-37, 2010.

[8] Qiang Zhu, Mei-Chen Yeh, Kwang-Ting Cheng and S. Avidan, “Fast Human Detection Using a Cascade of Histograms of Oriented Gradients,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1491-1498, 2006.

[9] Sebastien Villon, Marc Chaumont, Gerard Subsol, Sebastien Villeger, Thomas Claverie, David Mouillot, “Coral reef fish detection and recognition in underwater videos by supervised machine learning: comparison between deep learning and HOG+SVM methods,” Advanced Concepts for Intelligent Vision Systems, vol. 10016, pp. 160–171, 2016.

[10] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” Computing Research Repository (CoRR), 2017.

[11] M. S. Hitam, E. A. Awalludin, W. N. Jawahir Hj Wan Yussof and Z. Bachok, "Mixture contrast limited adaptive histogram equalization for underwater image enhancement," 2013 International Conference on Computer Applications Technology (ICCAT), Sousse, pp. 1-5, 2013.

[12] M. Marcos, M. Soriano, and C. Saloma, "Classification of coral reef images from underwater video using neural networks," Opt. Express 13, pp. 8766-8771 , 2005.

[13] A. Shihavuddin, N. Gracias, R. Garcia, A. C. R. Gleason, and B. Gintert, “Image-based coral reef classification and thematic mapping,” Remote Sensing, vol. 5, no. 4, pp. 1809–1841, 2013.