
Vision Based Underwater Environment Analysis: A Novel Approach to Estimate Size of Coral Reefs

1Abhishek Maurya, 2Rajat Govekar, 3Pramod Maurya and 4Anirban Chatterjee
1,2,4 Department of Electronics and Communication Engineering, National Institute of Technology Goa, Farmagudi, Ponda, Goa-403401, India
3 Department of Marine Instrumentation, National Institute of Oceanography, Dona Paula, Goa-403004, India
E-mail: [email protected], [email protected], [email protected], [email protected]

Abstract

This work is aimed at detecting coral reefs underwater and estimating their size. A deep learning architecture is proposed to detect the coral reefs, and the size of each detected coral is estimated using a distance-based algorithm. The distance-dependent change in apparent image size is formulated as a non-linear function and optimized accordingly. The algorithm is applicable to real-time underwater environment analysis. The analysis shows that the proposed method accurately estimates the area occupied by coral reefs underwater.

Keywords: MobileNet Architecture, Coral Reefs, Pixel-per-metric ratio, Normalized Coordinates, Sand patches, Convolutional Neural Network.

1. Introduction

Coral reefs are among the world's most biologically diverse marine ecosystems. Ecologically, they are important to species diversity and bio-productivity in the ocean, serving as the marine counterpart of tropical rainforests. Over the past decades, the wide diversity of animal and plant species contributing to the reef system, and its genetic heritage, have come increasingly under threat. Coral reefs allow associated ecosystems to form, providing basic habitats, fish stocks and livelihoods. It is therefore important to monitor the coral reef ecosystem. Automated analysis of underwater imagery would enhance the investigation of coral reef dynamics and processes by reducing the analytic bottleneck and allowing researchers to make full use of large submarine image data sets.

Alsmadi et al. [1] contributed to the recognition and classification of fish from digital images, developing and implementing a novel prototype for fish recognition using global feature extraction, image segmentation and geometric parameters. Beijbom et al. [2] implemented a multi-scale algorithm using color and texture descriptors to better estimate the area of the reef covered by corals, sand and algae. Clement et al. [3] presented and evaluated the application of Local Binary Patterns, a simple but powerful feature descriptor, to classify Crown-Of-Thorns Starfish in images; at higher altitudes, however, the accuracy was comparatively low [3]. Padmavathi et al. [4] presented an algorithm combining Kernel Principal Component Analysis and SIFT, which provides better accuracy for the classification of extracted underwater images. Purser et al. [5] presented results showing that a machine-learning approach is suitable for estimating cold-water coral density. Katy et al. [6] presented a method for recognizing fish species using background segmentation, adaptive-scale keypoint selection and Opponent-SIFT description, with a binary linear Support Vector Machine classifier learned for each species. J. Matai et al. [7] emphasized efforts to automate fish detection and recognition using computer vision algorithms such as SIFT on video or still camera sources.
Qiang et al. [8] suggested integrating a cascade-of-rejectors approach with Histograms of Oriented Gradients (HOG) features to achieve a fast and accurate human detection system, and concluded that HOG features gave better results than SIFT. Sebastien Villon et al. [9] compared two methods for detecting coral reef fish, namely traditional HOG + SVM classification and deep learning, and found that deep learning was more accurate than the former.

The aim of this work is to analyze the underwater environment. To detect coral reefs and sand patches, a Convolutional Neural Network [10] (MobileNet) is implemented. A distance-based algorithm is used to estimate the size of the detected corals. Coral reef detection and size estimation were also carried out in real time.

2. Methods

Analysis of the underwater coral reef was carried out by first acquiring images of the underwater environment. Different coral types were detected using the MobileNet architecture, which uses depthwise separable convolutions to produce faster results. The size covered by coral reefs is obtained from the pixel values of the detections, accounting for the non-linear effects introduced by water. The analytical steps involved are discussed below.

Fig. 1. Methods of detection of underwater coral reefs.

2.1. Image Acquisition

The underwater environment was captured at an approximate height of 3.5 ft by taking video with a GoPro Hero 4 camera. The environment contains sand patches, fish and various types of living coral reefs. Frames were extracted from the video using the OpenCV library in Python. Extraction was done at 20 frames per second to avoid near-duplicate images.

2.2. Image Enhancement

Underwater images are enhanced by Contrast Limited Adaptive Histogram Equalization (CLAHE) [11]. This approach significantly improves the visual quality of underwater images by enhancing contrast as well as reducing noise and artifacts.

2.3. Detection of coral reefs and sand patches

Sand patches and coral reefs are detected using a convolutional neural network. MobileNet was found to be best suited for underwater imaging. The MobileNet model is based on depthwise separable convolutions, a form of factorized convolution that splits a standard convolution into a depthwise convolution and a 1×1 convolution called a pointwise convolution. A standard convolution layer is parameterized by a DK × DK × M × N convolution kernel, where DK is the spatial dimension of the kernel (assumed square), M is the number of input channels and N is the number of output channels. The MobileNet architecture is defined in [10]. ReLU nonlinearity follows all layers except the final fully connected layer, which has no nonlinearity and feeds into a classification softmax layer. Downsampling is handled with strided convolution, both in the depthwise convolutions and in the first layer. A final average pooling in front of the fully connected layer reduces the spatial resolution to 1. In this work, the MobileNet architecture [10] is used to extract image frame features, which are then fed to the SSD detection network. In the architecture table, the first column gives the stride used by the convolving filter (s1 denoting a stride of one and s2 a stride of two), the second column gives the size of the filter used for convolution (dw = depthwise), and the third column gives the input size in pixels for each convolution. Batch Normalization and ReLU follow all convolution layers, as shown in Fig. 2.

Fig. 2. Different convolution layers. (a) Standard convolution layers. (b) Depthwise separable convolution layers.

The standard convolution layers and the depthwise separable convolution layers used in the MobileNet architecture are shown in Fig. 2(a) and (b) respectively. MobileNet is computationally fast because depthwise convolution requires fewer operations. In a depthwise separable convolution layer, a depthwise convolution with Batch Normalization and ReLU is performed first, followed by a 1×1 pointwise convolution with Batch Normalization and ReLU.
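To make the structure of Fig. 2(b) concrete, a minimal sketch of one depthwise separable block is given below. TensorFlow/Keras is assumed purely for illustration; the function name depthwise_separable_block, the filter counts, strides and the 300×300 input resolution are illustrative choices and not the exact configuration used in this work.

# Minimal sketch of one depthwise separable convolution block as described
# above (depthwise conv -> BN -> ReLU -> 1x1 pointwise conv -> BN -> ReLU).
# TensorFlow/Keras is assumed for illustration; filter counts, strides and the
# input resolution are illustrative, not the authors' exact configuration.
import tensorflow as tf
from tensorflow.keras import layers


def depthwise_separable_block(x, pointwise_filters, stride=1):
    """One MobileNet-style depthwise separable convolution block."""
    # Depthwise convolution: one DK x DK filter per input channel (DK = 3 here).
    x = layers.DepthwiseConv2D(kernel_size=3, strides=stride,
                               padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Pointwise (1x1) convolution combines the M depthwise outputs into N channels.
    x = layers.Conv2D(pointwise_filters, kernel_size=1, strides=1,
                      padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)


# Example: a standard strided convolution followed by two separable blocks,
# mirroring the beginning of the MobileNet feature extractor.
inputs = layers.Input(shape=(300, 300, 3))
x = layers.Conv2D(32, kernel_size=3, strides=2, padding="same", use_bias=False)(inputs)
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
x = depthwise_separable_block(x, pointwise_filters=64, stride=1)
x = depthwise_separable_block(x, pointwise_filters=128, stride=2)
feature_extractor = tf.keras.Model(inputs, x)

Because the spatial filtering (depthwise) and channel mixing (pointwise) are separated, each block uses far fewer multiply-adds than a standard convolution of the same kernel size, which is the source of MobileNet's speed advantage noted above.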
2.4. Size Estimation

After coral reef detection, the coordinates output by the neural network are used to draw a bounding box around each detected coral reef. The normalized coordinates are converted to absolute pixel values, and the number of pixels spanned by the coral's length and breadth is found.

2.4.1. Experimental setup and data

To make the algorithm adaptive to the varying camera-to-object distance in underwater images, the pixel-per-metric ratio is measured and plotted against distance. Sample points are obtained from the following experiment.

1. Setup: A scale of known size 'S' cm is placed in a tumbler of water at a known distance 'D' cm from the camera. A GoPro Hero 4 digital camera, with its lens inside the water, is used to image the scale at the distance 'D' cm. The experiment is repeated for different distances between the lens and the scale to observe how the apparent size of the object varies with distance. Since the distances change, the effect of light in underwater images is also accommodated.

2. Image Processing: Each captured image is resized to 400×300 pixels. Using the OpenCV library in Python, the Canny edge detection algorithm is applied to detect the edges of the scale. Dilation and erosion are applied to close gaps between edges. From the coordinates of the contours, the number of pixels spanned by the scale (lengthwise and widthwise) is recorded. This is repeated for all images at the different distances.

3. Data Acquisition: Let the number of pixels spanned by the scale lengthwise be 'L'. Then, for the distance 'D' and an image taken with the GoPro Hero 4, Pixel_Per_Metric = L ÷ S. The pixel-per-metric value is calculated for each distance and a graph of Pixel_Per_Metric vs Distance is plotted. A sigmoidal 5PL function is fitted optimally to the obtained values, as shown in Fig. 3. The fitting formula is the sigmoidal 5PL

y = d + (a − d) / (1 + (x / c)^b)^m    (1)

The optimized parameter values are a = 20.7794, b = 32.8038, c = 10.5057, d = 0.023534 and m = 0.0276115. The real object size is obtained using equation (2):

Object size (cm) = Object size (pixels) ÷ Pixel_per_metric    (2)

Fig. 3. Variation of Pixel_per_Metric ratio with Distance.

2.4.2. Size Estimation Algorithm

Algorithm: Distance-based size estimation
Input: Distance, Pixel Values
Output: Object Size
Steps:
Step 1: The normalized pixel coordinates obtained from detection are converted to absolute pixels.
Step 2: The number of pixels covered by the object in length and width is found from the absolute pixel coordinates.
Step 3: The sigmoidal 5PL function y = d + (a − d) / (1 + (x / c)^b)^m is applied to obtain the pixel-per-metric value at the given distance.
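A minimal Python sketch of the size estimation step described above is given below, using the fitted 5PL parameters from Section 2.4.1. The function names pixel_per_metric and object_size_cm, the normalized box format (ymin, xmin, ymax, xmax) and the example values are hypothetical illustrations under these assumptions, not the authors' implementation.

# Illustrative sketch of the distance-based size estimation described above.
# The 5PL parameters are those reported in Section 2.4.1; the function and
# variable names are hypothetical, and the detection box format (normalized
# [ymin, xmin, ymax, xmax], as produced by common SSD implementations) is an
# assumption, not necessarily the exact convention used in this work.

# Fitted sigmoidal 5PL parameters (equation (1)).
A, B, C, D, M = 20.7794, 32.8038, 10.5057, 0.023534, 0.0276115


def pixel_per_metric(distance_cm):
    """Pixel-per-metric ratio at a given camera-to-object distance (eq. (1))."""
    return D + (A - D) / (1.0 + (distance_cm / C) ** B) ** M


def object_size_cm(box_norm, image_w, image_h, distance_cm):
    """Convert a normalized bounding box to object length/width in cm (eq. (2))."""
    ymin, xmin, ymax, xmax = box_norm
    # Step 1: normalized coordinates -> absolute pixel extents.
    width_px = (xmax - xmin) * image_w
    height_px = (ymax - ymin) * image_h
    # Steps 2-3: pixels covered by the object divided by the pixel-per-metric ratio.
    ppm = pixel_per_metric(distance_cm)
    return height_px / ppm, width_px / ppm


# Example with a hypothetical detection on a 400x300 frame at 100 cm distance.
length_cm, width_cm = object_size_cm((0.2, 0.3, 0.7, 0.8), 400, 300, 100.0)
print("Estimated coral size: %.1f cm x %.1f cm" % (length_cm, width_cm))

The fitted curve decreases with distance, reflecting that an object of fixed physical size spans fewer pixels the farther it is from the lens, which is why the pixel count is divided by the distance-dependent pixel-per-metric ratio in equation (2).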