Diver Remote Sensing Technologies in the Underwater
Grant Agreement No. 611373
FP7-ICT-2013-10
D2.1. Diver remote sensing technologies in the underwater

Due date of deliverable: 31.05.2014
Actual submission date: 31.05.2014
Start date of project: 01 January 2014
Duration: 36 months
Organization name of lead contractor for this deliverable: UNIZG-FER
Revision: version 1
Dissemination level: PU (Public)

Contents

1 Introduction
2 Available technologies and principles of operation
2.1 Mono camera
2.1.1 Bosch FLEXIDOME IP starlight 7000 VR
2.2 Stereo camera
2.2.1 Point Grey Bumblebee XB3
2.3 Multi-beam sonar
2.3.1 Basic single beam sonar principle of operation
2.3.2 Multi-beam sonar operation and advantages
2.3.3 Sound Metrics ARIS Explorer 3000
2.4 DiverNet
3 Modification of systems for underwater use
4 State of the art in underwater perception
5 Processing
5.1 DiverNet
5.2 Stereo camera
5.3 Sonar
6 Data acquisition
6.1 Equipment used and initial tests
6.2 Experiments conducted
6.2.1 Stereo camera and multi-beam sonar experiments
6.2.2 DiverNet
7 Advantages and disadvantages
8 Conclusion
9 References

1 Introduction

A crucial part of creating a cognitive diving assistant robot is a diver sensing mechanism: a way for the robot to perceive the diver, to be aware of the diver's location and movement, to communicate by recognizing symbols, and to detect whether the diver is experiencing a hazardous state such as nitrogen narcosis or a similar problem. For these reasons, the diving assistant (BUDDY) will be equipped with a number of diver-sensing devices. A stereo camera will be used to locate the diver in the surroundings and to create a 3-dimensional model of the diver; it will also be used for recognizing the diver's symbolic gestures and behaviour. In addition to the stereo camera, a multi-beam sonar will be used to detect the diver, and a bottom-facing mono camera will be mounted for seafloor mapping. While the cameras will be used for close-range detection of the diver or parts of the diver, sonar sensing is planned for larger distances or for situations in which the camera does not provide a clear image, such as murky waters. To detect the diver's movement, a network of inertial sensors, DiverNet, was developed. This will allow for better tracking of the diver's behaviour and for detecting if something is wrong. The purpose of this deliverable is to explain the technology in more detail: how it will be used in the CADDY project and which equipment specifically will be used. Modifications made to allow underwater use will be shown.
Some popular approaches to underwater image processing, computer vision and gesture recognition, as well as work on human body tracking with inertial sensors, will be mentioned. A general plan for data processing and algorithm implementation will be introduced. Finally, a report on the initial data gathering will be given.

2 Available technologies and principles of operation

This chapter describes the equipment used for remote diver sensing. The basic principles of operation are explained for each technology, along with details specific to the equipment used in the CADDY project.

2.1 Mono camera

An ordinary (mono) video camera is a device for electronic motion picture acquisition. It has a single image sensor that converts an optical image into an electronic signal which, in the case of digital cameras, is converted to a sampled (digital) output. The two most common sensor types are charge-coupled devices (CCD) and MOS-transistor-based sensors (most commonly CMOS). The mono camera's main purpose will be seafloor mapping; however, in case of emergency (e.g. if the forward-facing stereo camera loses the diver) its image might be used for other purposes as well.

2.1.1 Bosch FLEXIDOME IP starlight 7000 VR

A Bosch dome mono camera will be used. Some of its basic features include:
● ⅓-inch CMOS HD 1.4 MP sensor
● 60 FPS @ 720p
● motorized focus and zoom adjustment
● light sensitivity: 0.017 lx in colour, 0.0057 lx in monochrome

More details about the camera can be found in [1]. One of its main advantages is very high light sensitivity, specific to the Bosch "starlight" camera series, which should provide above-average performance in low-light conditions. The camera is not intended for underwater use, so a modification will be made. Fig. 1 shows the stock camera photo; modifications for underwater use will be described in a later chapter.

Fig. 1.
Bosch FLEXIDOME IP starlight 7000 VR camera

2.2 Stereo camera

A stereo video camera has multiple (typically two or three) imaging sensors. This enables recording the same scene from different viewpoints at the same time, which can be used to create a 3-dimensional image in much the same way as the human brain combines the pictures from both eyes, through a process called stereopsis, to create the perception of depth. Stereo computer vision algorithms are based on finding matching objects across the images of the same video frame. For this type of algorithm to work, some pre-processing is needed:
● Removing distortion - various kinds of image distortion may occur, causing, for example, straight lines to be captured as curves due to wide-angle lenses [2].
● Epipolar rectification - each sensor of a stereo camera captures the scene from its own point of view. Epipolar rectification is the process of projecting the images onto a common plane [3].

Fig. 2 gives a visual explanation of epipolar geometry principles.

Fig. 2. Epipolar geometry: epipoles (eL and eR), epipolar line for XL (red). All candidates for the correspondence of XL are located on this line in the other image, reducing the search space significantly.

The quality of these pre-processing steps has a great impact on the success of stereo computer vision algorithms. Epipolar rectification, in particular, is a computational optimization: after rectification all epipolar lines are horizontal, so candidate matches along an epipolar line can be examined simply by iterating through the pixels of a row in the rectified image.

A stereo camera will be used in the CADDY project to create a point cloud model of the observed diver. A point cloud model is a representation of an object (in this case a 3-dimensional object) as a set of points in space.
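To illustrate how a point cloud follows from rectified stereo images: once a disparity d has been found for a matched pixel, depth follows from the pinhole relation Z = f·B/d (focal length f in pixels, baseline B), and the pixel can be back-projected to a 3-D point. The following is a minimal NumPy sketch of this back-projection, not the implementation used in the project; the intrinsics (f, cx, cy) and baseline are illustrative values.

```python
import numpy as np

def disparity_to_points(disparity, f, baseline, cx, cy):
    """Back-project a rectified disparity map into a 3-D point cloud.

    After epipolar rectification, a pixel (u, v) in the left image
    matches (u - d, v) in the right image, and depth is Z = f * B / d.
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                   # zero disparity: no match / at infinity
    z = f * baseline / disparity[valid]     # depth along the optical axis
    x = (u[valid] - cx) * z / f             # lateral offset from the principal point
    y = (v[valid] - cy) * z / f             # vertical offset from the principal point
    return np.column_stack((x, y, z))       # N x 3 array of points

# Toy example: one matched pixel with 8 px disparity,
# focal length 400 px and a 0.12 m stereo baseline.
disp = np.zeros((4, 4))
disp[2, 3] = 8.0
cloud = disparity_to_points(disp, f=400.0, baseline=0.12, cx=2.0, cy=2.0)
print(cloud)   # one point at depth Z = 400 * 0.12 / 8 = 6.0 m
```

Note that depth resolution degrades quadratically with distance, since a one-pixel disparity error corresponds to a larger depth change for small disparities; this is one reason the sonar is planned for longer ranges.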