
J Intell Robot Syst
https://doi.org/10.1007/s10846-017-0721-4

Object Recognition and Semantic Mapping for Underwater Vehicles Using Sonar Data

Matheus dos Santos (1) · Paulo Drews Jr. (1) · Pedro Núñez (2) · Silvia Botelho (1)

Received: 19 December 2016 / Accepted: 29 September 2017
© Springer Science+Business Media B.V. 2017

Abstract  The application of robots as a tool to explore underwater environments has increased in the last decade. Underwater tasks such as inspection, maintenance, and monitoring can be automated by robots. Understanding the underwater environment and recognizing its objects are required capabilities that are becoming a critical issue for these systems. In this work, a method to provide semantic mapping of the underwater environment is presented. This novel system is independent of the water turbidity and uses acoustic images acquired by a Forward-Looking Sonar (FLS). The proposed method efficiently segments and classifies the structures in the scene using geometric information of the recognized objects. A semantic map of the scene is thereby created, which allows the robot to describe its environment in terms of high-level semantic features. Finally, the proposal is evaluated on a real dataset acquired by an underwater vehicle in a marina area. Experimental results demonstrate the robustness and accuracy of the method described in this paper.

Keywords  Robot vision · Underwater robot · Semantic mapping · Object recognition · Forward looking sonar

Matheus dos Santos, [email protected] · Paulo Drews Jr., [email protected] · Pedro Núñez, [email protected] · Silvia Botelho, [email protected]

(1) NAUTEC, Intelligent Robotics and Automation Group, Center of Computational Science, Univ. Federal do Rio Grande - FURG, Rio Grande, Brazil
(2) ROBOLAB, Robotics Laboratory, Computer and Communication Technology Department, Universidad de Extremadura, Cáceres, Spain

1 Introduction

The ability to construct a map while the robot moves is essential for performing autonomous tasks and has been extensively studied in the literature. Map building allows the robot to develop autonomous skills such as navigation, interaction with the environment, and self-localization, among others. The scientific community has been studying new ways of representing the map of the environment over the last decades (an interesting review about mapping can be found in [18]). Most of the approaches proposed in the literature to solve this problem explore the spatial information of the environment (e.g., geometric features such as line segments or occupancy cells). However, with only a spatial representation of the environment it is difficult to perform other tasks successfully. This tendency is now changing, and the scientific community is experiencing an increasing interest in so-called semantic solutions, which integrate geometrical information and semantic knowledge [10].

Several advances have recently been made in semantic mapping. Generally, ground robots that are able to perform task planning combine semantic knowledge in their maps (e.g., place classification, such as rooms, passageways, or gardens, and labels of objects) [10]. However, there are very few works in underwater robotics that consider the semantic map to predict changes in the environment and make high-level decisions.
In fact, the problem of underwater mapping has typically been treated with geometric information extracted from acoustic or optical sensors such as sonars and RGB cameras [1, 7, 15].

In order to semantically describe and recognize an underwater environment, a robot needs a system able to extract high-level knowledge from the scene. Typically, RGB sensors have been used in the literature to extract and characterize the robot's environment. However, in underwater scenarios these RGB images provide little information due to water turbidity.

The sonar offers the benefit of being invariant to water turbidity; however, its images are noisy and suffer from distortion problems that make their processing a challenge. The data captured by a sonar can be summarized as a picture with an untextured set of ranges whose most notable characteristic is the shape of the objects.

Some works propose strategies to identify objects in acoustic images, such as [4–6, 11, 14]. However, none of them recognizes objects and creates semantic maps in these scenarios.

In the work presented in this paper, a method for semantic mapping is provided. The proposal is able to detect and recognize objects in the scene, allowing the robot to build a semantic map. The acoustic images are segmented, and the shape of each cluster is described geometrically. Each shape is then classified into six different classes (Pole, Boat, Hull, Stone, Fish, and Swimmer) using the well-known Support Vector Machine (SVM) algorithm. In addition, a tool was developed to annotate the sonar data, enabling the supervised training of the model.
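As a rough illustration of this classification step (not the paper's exact descriptor set or training code), the sketch below computes a few geometric features from a segmented blob and fits a multi-class SVM with scikit-learn; the feature choices, parameters, and function names are assumptions made for the example.

```python
# Hedged sketch (not the paper's exact pipeline): classifying segmented sonar
# blobs by simple geometric descriptors with an SVM (scikit-learn).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Class names as enumerated in the text; used here only as string labels.
CLASSES = ["Pole", "Boat", "Hull", "Stone", "Fish", "Swimmer"]

def describe_blob(mask, intensities):
    """Geometric/intensity descriptors of one segmented blob.

    mask: 2D boolean array marking the blob pixels.
    intensities: 2D array of acoustic image intensities (same shape as mask).
    """
    ys, xs = np.nonzero(mask)
    area = float(len(ys))
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    elongation = max(height, width) / min(height, width)  # bounding-box aspect ratio
    extent = area / (height * width)                       # fill ratio of the bounding box
    mean_echo = float(intensities[mask].mean())            # average return strength
    return [area, elongation, extent, mean_echo]

def train_shape_classifier(features, labels):
    """Fit an RBF-kernel SVM on standardized descriptors (one row per blob)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(np.asarray(features), np.asarray(labels))
    return clf

# Usage, assuming annotated (mask, image, label) triples with labels from CLASSES:
#   X = [describe_blob(m, img) for m, img, _ in samples]
#   y = [label for _, _, label in samples]
#   model = train_shape_classifier(X, y)
#   prediction = model.predict([describe_blob(new_mask, new_img)])
```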
This approach was developed to integrate with the topological graph proposed in a previous work [12], making it possible to construct more reliable maps for the localization problem, since a reliability relation can be established for each detected object based on its behavior in the environment. For example, static objects such as stones and poles provide more confidence for the localization problem than dynamic objects such as fish, boats, and swimmers.

This work also extends our previous contributions [16], bringing new statistical results and a new segmentation stage. A local adjustment of the segmentation parameters is performed automatically based on the average intensity of the acoustic bins. Furthermore, this paper describes in detail the experiments that validate the proposal: new results were generated by evaluating the solution on real data acquired by an FLS in a marina. Figure 1 illustrates the kind of information that can be obtained with the approach.

Fig. 1  An example of the semantic map created from an acoustic image collected in a marina. Both images have a visual intersection. In (a), an RGB image captured at the surface (underwater RGB cameras cannot obtain data because of the high turbidity). In (b), an underwater image captured by a Forward Looking Sonar; the objects highlighted in red are poles and those in green are boat hulls.

2 Acoustic Image from a Forward Looking Sonar

Forward-Looking Sonars (FLS) are active devices that emit acoustic waves. The waves spread through the underwater environment in the forward direction until they strike an object or are completely absorbed by the medium. Depending on the object composition, a portion of the waves that strike the object is reflected back to the sonar. The reflected waves that reach the sonar are recorded by an array of hydrophones. The signal is processed and discretized into intensity values called bins. The bins are indexed in an image according to their return direction θ_bin and traveled distance r_bin, as shown in Fig. 2. An acoustic image acquired in a marina area of the Yacht Clube of Rio Grande, Brazil, is shown in Fig. 1b.

Fig. 2  A representative scheme of the image formation of an FLS. Each bin can be identified in the polar coordinate system (θ_bin, r_bin) and has an angular resolution θ_beam and a range resolution ρ_bin. For this reason, the most distant bins have a lower resolution than the nearest ones. This effect can be visualized in the blue and orange highlighted polygons.

Although sonars have the benefit of being independent of turbidity, their data have some characteristics that make them difficult to process and extract information from. These characteristics can be summarized as:

– Non-homogeneous resolution: The bin resolution, in number of pixels, changes according to its range r_bin from the sonar. An illustration is shown in Fig. 2, where two bins are overlaid with boxes. The orange box is farther away than the blue box, so it covers a larger area. Hence, the resolution of acoustic images decreases with the bin distance r_bin. This causes image distortion and object stretching, making recognition harder.
– Non-uniform intensity: It is not guaranteed that an object will always be represented with the same pixel intensities in the acoustic images. Because of the signal attenuation caused by the water, distant objects tend to have a lower intensity than near objects. Typically, this problem is mitigated with a mechanism that compensates for the signal loss according to the traveled distance. However, intensity variations can also be caused by changes in the sonar tilt angle or by sensitivity differences between its transducers.
– Speckle noise: The FLS has a low signal-to-noise ratio, and the speckle noise in the acoustic image is caused by mutual interference of the sampled acoustic returns.
– Acoustic shadow: The acoustic shadow is caused by objects that block the passage of the acoustic waves, generating a region of occlusion in the image. Because the sonar is an active device, its displacement moves the acoustic shadows and occlusion areas, significantly changing the scene.
– Acoustic reverberation and multipath: A transmitted wave may travel through indirect paths due to secondary reflections. Depending on the environment, this can generate different effects that include the cre-

3.1 Image Enhancement

In this step, an image correction process is applied to the image to mitigate the non-uniform intensity problem. First, the sonar insonification pattern is computed by averaging a significant number of images captured by the same sonar. The averaged image shows the regions where the pixels have almost the same intensity values in all images. These regions represent constant problems associated with the sensitivity difference between the sonar transducers, the overlapping of acoustic beams, and the loss of signal intensity. The insonification pattern is then applied to each acoustic image in order to normalize these constant problems. This approach is similar to those proposed in [9] and [8].
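A minimal sketch of this kind of insonification-pattern correction is given below, assuming a stack of acoustic images from the same sonar stored as NumPy arrays; the division-based normalization, the epsilon, and the clipping range are illustrative assumptions rather than the exact operation used in the paper.

```python
# Hedged sketch: estimating a sonar insonification pattern by averaging many
# acoustic images and using it to normalize each new image. The division-based
# correction, epsilon, and clipping range are illustrative assumptions.
import numpy as np

def estimate_insonification_pattern(images):
    """Pixel-wise mean over a large set of images from the same sonar
    (e.g., arrays of shape beams x range_bins)."""
    return np.mean(np.asarray(images, dtype=np.float64), axis=0)

def correct_image(image, pattern, eps=1e-6):
    """Divide out the constant pattern (transducer sensitivity differences,
    beam overlap, range-dependent signal loss) and rescale to a comparable level."""
    corrected = image.astype(np.float64) / (pattern + eps)
    corrected *= pattern.mean()
    return np.clip(corrected, 0.0, 255.0)

# Usage:
#   pattern = estimate_insonification_pattern(training_images)
#   enhanced = correct_image(raw_polar_image, pattern)
```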
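The correction above operates directly on the polar bin grid (beams x range bins). As a separate illustration of the image-formation geometry described in Section 2 and Fig. 2, the sketch below maps such a polar array onto a fan-shaped Cartesian image; the field of view, output resolution, and nearest-neighbour lookup are assumptions, not the sonar's actual parameters.

```python
# Hedged sketch: rasterizing a polar bin array (beams x range bins) into the
# fan-shaped Cartesian image of Fig. 2. Field of view, output size, and
# nearest-neighbour lookup are illustrative assumptions.
import numpy as np

def polar_to_cartesian(bins, fov_deg=130.0, out_width=600):
    """bins[b, r]: intensity of beam b at range bin r (theta_bin, r_bin indexing)."""
    n_beams, n_ranges = bins.shape
    half_fov = np.deg2rad(fov_deg) / 2.0
    out_height = n_ranges  # roughly one output pixel per range bin along the axis
    image = np.zeros((out_height, out_width), dtype=bins.dtype)

    # For every output pixel, find the polar bin (theta, r) it falls into.
    xs = np.linspace(-np.sin(half_fov), np.sin(half_fov), out_width) * n_ranges
    ys = np.linspace(0.0, n_ranges - 1.0, out_height)
    X, Y = np.meshgrid(xs, ys)
    r = np.hypot(X, Y)          # traveled distance  -> r_bin
    theta = np.arctan2(X, Y)    # return direction   -> theta_bin

    beam_idx = np.round((theta + half_fov) / (2 * half_fov) * (n_beams - 1)).astype(int)
    range_idx = np.round(r).astype(int)
    inside = (np.abs(theta) <= half_fov) & (range_idx < n_ranges)
    # Distant bins are sampled by many more output pixels than near bins,
    # which is the non-homogeneous resolution effect discussed in Section 2.
    image[inside] = bins[beam_idx[inside], range_idx[inside]]
    return image
```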